| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
9,594,112 | https://en.wikipedia.org/wiki/Emios | Emios (an acronym for Environmental Memory Interoperable Open Service) is an MDD / MDE platform that aims to provide a range of services for storing and sharing information about environmental research activities.
History
Emios, initiated by C. Faucher in June 2006, is based on the environmental memory concepts developed by the "Motive" CNRS committee and specifically by Franck Guarnieri in 2003.
Emios is a set of Eclipse plugins based on EMF, distributed under the terms of the EPL License.
The current version is 0.0.1 and mainly contains the Geographic Information Standards Manager (GISM). GISM is the first part of Emios and implements the ISO 19100 series of International Standards from ISO TC211.
Scientific References
Faucher C., Gourmelon F., Lafaye J.Y., Rouan M., "Mise en place d’une mémoire environnementale adaptée aux besoins d’un observatoire du domaine cotier : MEnIr", Revue Internationale de Géomatique, Hermès/Lavoisier, vol. 19/1, pp. 7–26, 2009, http://geo.e-revues.com/
Faucher C., Lafaye J.Y., 2007. Model-Driven Engineering for implementing the ISO 19100 series of international standards, "CoastGIS 07, the 8th International Symposium on GIS and Computer Mapping for Coastal Zone Management", vol. 2, p. 424-433, 7–10 October, Santander, Spain.
Related link
Emios web site
Environmental science software
Geographic information systems
Domain-specific programming languages | Emios | [
"Technology",
"Environmental_science"
] | 351 | [
"Information systems",
"Environmental science software",
"Geographic information systems"
] |
9,594,549 | https://en.wikipedia.org/wiki/Gravity%20Pipe | Gravity Pipe (abbreviated GRAPE) is a project which uses hardware acceleration to perform gravitational computations. Integrated with Beowulf-style commodity computers, the GRAPE system calculates the force of gravity that a given mass, such as a star, exerts on others. The project resides at the University of Tokyo.
The GRAPE hardware acceleration component "pipes" the force computation to the general-purpose computer serving as a node in a parallelized cluster as the innermost loop of the gravitational model.
The GRAPE project designed an ASIC component with mathematical logic and operations to generate the required computations. This means that later generations of GRAPE supercomputers, despite not being Turing complete, are powerful for heavily mathematical supercomputing workloads. The MD-GRAPE 3 supercomputer was also used in protein folding simulations.
Its shortened name, GRAPE, was chosen as an intentional reference to the Apple Inc. line of computers.
Method
The primary calculation in GRAPE hardware is a summation of the forces between a particular star and every other star in the simulation.
Several versions (GRAPE-1, GRAPE-3 and GRAPE-5) use the logarithmic number system (LNS) in the pipeline to calculate the approximate force between two stars and take the antilogarithms of the x, y and z components before adding them to their corresponding total. The GRAPE-2, GRAPE-4 and GRAPE-6 use floating-point arithmetic for more accurate calculation of such forces. The advantage of the logarithmic-arithmetic versions is that they allow more and faster parallel pipes for a given hardware cost because all but the sum portion of the GRAPE algorithm (1.5 power of the sum of the squares of the input data divided by the input data) is easy to perform with LNS.
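The description above can be made concrete with a small sketch (a floating-point illustration only; the actual GRAPE pipelines use fixed-point logarithmic hardware and table lookups, and the function and variable names here are invented for the example). The multiply/divide/power part of the pairwise force is done in the log domain, and the antilogarithm of each component is taken before accumulation:

```python
import math

def pairwise_accel_lns(dx, dy, dz, mass_j):
    """Sketch of one GRAPE-style pipeline step using logarithmic arithmetic.

    The hard part of m_j * d / (dx^2 + dy^2 + dz^2)^1.5 (multiplies, the
    1.5 power and the division) is handled with logarithms; only the final
    accumulation would use ordinary addition, as described for GRAPE-1/3/5.
    """
    r2 = dx * dx + dy * dy + dz * dz          # the "sum" part stays in ordinary arithmetic
    log_common = math.log(mass_j) - 1.5 * math.log(r2)
    components = []
    for d in (dx, dy, dz):
        if d == 0.0:
            components.append(0.0)
        else:
            log_c = log_common + math.log(abs(d))
            components.append(math.copysign(math.exp(log_c), d))  # antilog, restore sign
    return tuple(components)
```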
GRAPE-DR consists of a large number of simple processors, all operating in the SIMD fashion.
Application
GRAPE computes approximate solutions to the historically intractable n-body problem, which is of interest in astrophysics and celestial mechanics. n refers to the number of celestial bodies in a given problem. While the 2-body problem was solved by Kepler's laws in the 17th century, any calculation where n > 2 has historically been a nigh-impossible challenge. An analytical solution exists for n = 3, although the resulting series converges too slowly to be of practical use. For n > 2, solutions are generally calculated numerically by determining the interaction between all particles. Thus, the calculation scales as n².
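For illustration, a plain software version of this direct summation is sketched below; it is the same O(n²) inner loop that the GRAPE hardware pipelines. The softening length eps and the choice G = 1 are assumptions made for the example, not part of the GRAPE design:

```python
import numpy as np

def direct_sum_accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct O(n^2) summation of gravitational accelerations.

    pos:  (n, 3) array of particle positions
    mass: (n,)   array of particle masses
    For every particle i, the contribution of every other particle j is
    summed, so the work scales as n^2 - the part GRAPE accelerates.
    """
    pos = np.asarray(pos, dtype=float)
    mass = np.asarray(mass, dtype=float)
    n = len(mass)
    acc = np.zeros((n, 3))
    for i in range(n):
        d = pos - pos[i]                        # displacements to all other particles
        r2 = (d ** 2).sum(axis=1) + eps ** 2    # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                         # exclude the self-interaction
        acc[i] = (mass[:, None] * d * inv_r3[:, None]).sum(axis=0)
    return G * acc
```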
GRAPE assists in calculations of interactions between particles where the interaction scales as r⁻². This dependence is hardwired, drastically improving calculation times. These problems include the evolution of galaxies (gravitational force scales as r⁻²). Similar problems exist in molecular chemistry and biology, where the force considered would be electrical rather than gravitational.
In 1999, Marseilles Observatory published a study on simulating the formation of proto-planets and planetesimals with a large planetary body. This simulation used the GRAPE-4 system.
Prizes
The LNS-based GRAPE-5 architecture won the Price Performance category of the Gordon Bell Prize in 1999, at about $7 per megaFLOPS. This category measures the price efficiency of a particular machine in terms of the price in dollars per megaFLOPS. The particular implementation "GRAPE-6" also won prizes in 2000 and 2001 (see external links).
GRAPE-DR was ranked first in the June 2010 Little Green500 list, a ranking of supercomputers' performance per unit of power consumption published by Green500.org.
See also
The Gordon Bell Prize, named in honor of Gordon Bell, is administered by the Association for Computing Machinery.
Supercomputer and High-performance computing are main articles on the general subject.
gravitySimulator is a cluster containing 32 GRAPEs.
References
External links
The GRAPE site at the University of Tokyo
Gordon Bell prize history
The Top 500 List
The GRAPE-6 Implementation
Brief historical overview of the GRAPE
Supercomputers
Supercomputing in Japan | Gravity Pipe | [
"Technology"
] | 823 | [
"Supercomputers",
"Supercomputing"
] |
9,594,699 | https://en.wikipedia.org/wiki/Risk%20difference | The risk difference (RD), excess risk, or attributable risk is the difference between the risk of an outcome in the exposed group and the unexposed group. It is computed as I_e − I_u, where I_e is the incidence in the exposed group, and I_u is the incidence in the unexposed group. If the risk of an outcome is increased by the exposure, the term absolute risk increase (ARI) is used, and computed as I_e − I_u. Equivalently, if the risk of an outcome is decreased by the exposure, the term absolute risk reduction (ARR) is used, and computed as I_u − I_e.
The inverse of the absolute risk reduction is the number needed to treat, and the inverse of the absolute risk increase is the number needed to harm.
Usage in reporting
It is recommended to use absolute measurements, such as risk difference, alongside the relative measurements, when presenting the results of randomized controlled trials. Their utility can be illustrated by the following example of a hypothetical drug which reduces the risk of colon cancer from 1 case in 5000 to 1 case in 10,000 over one year. The relative risk reduction is 0.5 (50%), while the absolute risk reduction is 0.0001 (0.01%). The absolute risk reduction reflects the low probability of getting colon cancer in the first place, whereas reporting only the relative risk reduction would risk readers exaggerating the effectiveness of the drug.
Authors such as Ben Goldacre believe that the risk difference is best presented as a natural number - the drug reduces colon cancer from 2 cases to 1 case if you treat 10,000 people. Natural numbers, which are used in the number needed to treat approach, are easily understood by non-experts.
Inference
Risk difference can be estimated from a 2x2 contingency table, with EE events and EN non-events in the exposed group (group size ES = EE + EN), and CE events and CN non-events in the unexposed (control) group (group size CS = CE + CN).
The point estimate of the risk difference is RD = EE/ES − CE/CS.
The sampling distribution of RD is approximately normal, with standard error SE(RD) = sqrt(EE·EN/ES³ + CE·CN/CS³).
The confidence interval for the RD is then RD ± z·SE(RD),
where z is the standard score for the chosen level of significance.
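A minimal sketch of these formulas in code (the variable names mirror the table notation above, z = 1.96 assumes a 95% confidence level, and the number needed to treat from the earlier section is included as the reciprocal of the risk difference):

```python
import math

def risk_difference(ee, en, ce, cn, z=1.96):
    """Risk difference with a normal-approximation confidence interval.

    ee, en: events / non-events in the exposed group
    ce, cn: events / non-events in the unexposed (control) group
    z:      standard score for the chosen confidence level (1.96 ~ 95%)
    """
    es, cs = ee + en, ce + cn                         # group sizes
    rd = ee / es - ce / cs                            # point estimate
    se = math.sqrt(ee * en / es**3 + ce * cn / cs**3)
    ci = (rd - z * se, rd + z * se)
    nnt = 1.0 / abs(rd) if rd != 0 else float("inf")  # number needed to treat/harm
    return rd, se, ci, nnt

# Hypothetical example: 1 case per 10,000 exposed vs 2 cases per 10,000 unexposed
print(risk_difference(1, 9_999, 2, 9_998))
```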
Bayesian interpretation
We could denote disease by D, absence of disease by ¬D, exposure by E, and absence of exposure by ¬E. The risk difference can then be written as RD = P(D | E) − P(D | ¬E).
Numerical examples
Risk reduction
Risk increase
See also
Population Impact Measures
Relative risk reduction
References
Epidemiology
Medical statistics | Risk difference | [
"Environmental_science"
] | 456 | [
"Epidemiology",
"Environmental social science"
] |
9,594,858 | https://en.wikipedia.org/wiki/Heidelberg%20Institute%20for%20International%20Conflict%20Research | The Heidelberg Institute for International Conflict Research (HIIK) is an independent and interdisciplinary registered association located at the Department of Political Science at the University of Heidelberg. Since 1991, the HIIK has been committed to the distribution of knowledge about the emergence, course and settlement of interstate and intrastate political conflicts. The Conflict Barometer is published annually and contains the current research results.
Conflict Barometer
The HIIK's annual publication Conflict Barometer describes the recent trends in global conflict developments, escalations, de-escalations, and settlements. Coups d'état, attempted coups d'état, and implemented measures of conflict resolution are also reported. It is subdivided into five world regions and presents all ongoing conflicts in detailed charts and short descriptions. The methodological approach consists of the conflict definition and the measurement of conflict intensity.
Methodology after 2011
Following a methodological revision in 2011, changes in the Heidelberg methodology included allocating intensities not only based on state units and calendar years, but also on the level of subnational units and calendar months. Furthermore, the establishment of intensities now follows an analysis of clearly conceived proxy indicators, used for the assessment of means and consequences of a conflict measure. This analysis continues to be based on the actions of conflict actors, as well as the communication between them. Through the conceptual refinement and standardization of data collection and documentation, the HIIK data achieves a higher degree of precision, reliability, and comprehensibility regarding information on political conflicts.
Conflict definition
A political conflict is a positional difference between at least two assertive and directly involved actors regarding values relevant to a society (the conflict items, including territory, secession, decolonization, autonomy, system/ideology, national power, regional predominance, international power, resources, or other), which is carried out using observable and interrelated conflict measures that lie outside established regulatory procedures and threaten core state functions, the international order, or hold the prospect of doing so.
According to the Heidelberg (1998) methodology and ideology, the essence of a political conflict lies in a contradiction, adequately represented by the concept of a “positional difference”: a positional difference is a perceived incompatibility of ideas and beliefs. It presupposes the presence of the following elements:
(1) There must be at least two entities possessing intellectual capacity and vision, and who are capable of communicating. Such an entity is called an actor.
(2) In order for the actors to sense incompatibility between their ideas and beliefs, there must be reciprocal actions and acts of communication between said actors. These actions and acts of communication are called measures.
(3) A communicating act always refers to a specific issue; an action always refers to a certain object. The subject behind a measure is called the item.
For the purpose of defining the term political conflict more precisely, the three elements aforementioned shall be further defined. These elements are necessary requirements for the existence of a political conflict.
Conflict intensities
Annual conflict summary
The table below presents data after the 2011 methodology revision. Violent conflicts include highly violent ones, the latter being classified into limited and full-scale wars.
See also
Peace and conflict studies
Conflict resolution
External links
Heidelberg Institute for International Conflict Research (HIIK)
Conflict Barometer published since 1992 as PDF-Download
Methodological Approach
Conflict (process)
Political science organizations
1991 establishments in Germany
Organizations established in 1991 | Heidelberg Institute for International Conflict Research | [
"Biology"
] | 697 | [
"Behavior",
"Aggression",
"Human behavior",
"Conflict (process)"
] |
9,594,871 | https://en.wikipedia.org/wiki/Statutory%20reserve | In the business of insurance, statutory reserves are those assets an insurance company is legally required to maintain on its balance sheet with respect to the unmatured obligations (i.e., expected future claims) of the company. Statutory reserves are a type of actuarial reserve.
Purpose
Statutory reserves are intended to ensure that insurance companies are able to meet future obligations created by insurance policies. These reserves must be reported in statements filed with insurance regulatory bodies. They are calculated with a certain level of conservatism in order to protect policyholders and beneficiaries.
Methods
There are two types of methods for calculation of statutory reserves. Reserve methodology may be fully prescribed by law, which is often called formula-based reserving. This is in contrast to principles-based reserves, where actuaries are given latitude to use professional judgement in determining methodology and assumptions for reserve calculation. In the United States, where formula-based reserves are used, the National Association of Insurance Commissioners plans to implement principles-based reserves in 2017.
Life insurance in the United States
In the U.S. life insurance industry, statutory reserves are most commonly computed using the Commissioner's Reserve Valuation Method, or CRVM, the method prescribed by law for computing minimum required reserves.
The size of a CRVM reserve, as with most life reserves, is affected by the age and sex of the insured person, how long the policy for which it is computed has been in force, the plan of insurance offered by the policy, the rate of interest used in the calculation, and the mortality table with which the actuarial present values are computed.
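As a rough illustration only (this is a simplified net level premium reserve, not the CRVM itself, which includes further modifications such as an expense allowance; all names and parameters are invented for the example), a prospective reserve can be sketched as the actuarial present value of future benefits minus that of future net premiums:

```python
def net_premium_reserve(q, i, term, duration):
    """Simplified prospective net level premium reserve for a term policy.

    q:        list of annual mortality rates q_x, q_{x+1}, ... over the term
    i:        valuation interest rate
    term:     total policy term in years
    duration: completed policy years at the valuation date
    The unit death benefit is assumed paid at the end of the year of death,
    and premiums are paid at the start of each year while the insured lives.
    """
    v = 1.0 / (1.0 + i)

    def apv_benefits_and_annuity(start):
        surv, apv_ben, apv_ann = 1.0, 0.0, 0.0
        for t in range(start, term):
            k = t - start
            apv_ann += surv * v**k               # premium due at start of year if alive
            apv_ben += surv * q[t] * v**(k + 1)  # benefit at end of year of death
            surv *= 1.0 - q[t]
        return apv_ben, apv_ann

    ben0, ann0 = apv_benefits_and_annuity(0)
    net_premium = ben0 / ann0                    # chosen so the reserve at issue is zero
    ben_t, ann_t = apv_benefits_and_annuity(duration)
    return ben_t - net_premium * ann_t

# Illustrative numbers: 10-year term, flat 1% mortality, 3.5% interest, year 5
print(net_premium_reserve([0.01] * 10, 0.035, term=10, duration=5))
```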
The Commissioner's Reserve Valuation Method was itself established by the Standard Valuation Law (SVL), which was created by the NAIC and adopted by the several states shortly after World War II. The first mortality table prescribed by the SVL was the 1941 CSO (Commissioner's Standard Ordinary) table, at a maximum interest rate of 3½%. Subsequent amendments to the Standard Valuation Law have permitted the use of more modern mortality tables and higher rates of interest. The effect of these changes has in general been to reduce the amount of the reserves which life insurance companies are legally required to hold.
See also
Actuarial reserves
Elizur Wright
Statutory accounting principles
Notes
References
N. L. Bowers, H. U. Gerber, J. C. Hickman, D. A. Jones, & C. J. Nesbitt, Actuarial Mathematics, Society of Actuaries (1986).
External links
NAIC Official Website
Statutory Accounting Principles Working Group
Actuarial science
Accounting in the United States
Capital requirement | Statutory reserve | [
"Mathematics"
] | 534 | [
"Applied mathematics",
"Actuarial science"
] |
9,596,101 | https://en.wikipedia.org/wiki/Shared%20mesh | A shared mesh (also known as 'traditional' or 'best effort' mesh) is a wireless mesh network that uses a single radio to communicate via mesh backhaul links to all the neighboring nodes in the mesh. This is a first generation mesh where the total available bandwidth of the radio channel is ‘shared’ between all the neighboring nodes in the mesh. The capacity of the channel is further consumed by traffic being forwarded from one node to the next in the mesh – reducing the end to end traffic that can be passed. Because bandwidth is shared amongst all nodes in the mesh, and because every link in the mesh uses additional capacity, this type of network offers much lower end to end transmission rates than a switched mesh and degrades in capacity as nodes are added to the mesh.
Wireless mesh nodes typically include both mesh backhaul links and client access. A dual radio shared mesh node uses separate access and mesh backhaul radios. Only the mesh backhaul radio is shared. In a single radio shared mesh node, access and mesh backhaul are collapsed onto a single radio. Now the available bandwidth is shared between both the mesh links and client access, further reducing the end to end traffic available.
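The degradation described above can be approximated with a very rough model, ignoring MAC overhead, interference and routing effects; the halving for a single-radio node is a simplifying assumption rather than a measured figure:

```python
def shared_mesh_throughput(channel_mbps, hops, single_radio=True):
    """Back-of-the-envelope end-to-end throughput of a shared mesh path.

    Every hop re-transmits the same traffic on the one shared channel, so
    the usable end-to-end rate falls off roughly as channel rate / hops.
    With a single radio, client access and backhaul also contend for the
    same channel, roughly halving the result again.
    """
    if hops < 1:
        raise ValueError("need at least one mesh hop")
    rate = channel_mbps / hops
    return rate / 2 if single_radio else rate

# e.g. a nominal 54 Mbit/s channel over 3 hops on a single-radio node
print(shared_mesh_throughput(54, 3, single_radio=True))   # ~9 Mbit/s
```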
See also
Wireless mesh network
IEEE 802.11
Mesh networking
Switched mesh
Wi-Fi
Wireless LAN
802.16
External links
White Paper: Capacity of Wireless Mesh Networks Understanding single radio, dual radio and multi radio wireless mesh networks.
What is Third Generation Mesh? Review of three generation of mesh networking architectures.
Ugly Truths About Mesh Networks Performance issues of First and Second Generation Mesh products.
Wireless networking
Network topology
Radio technology | Shared mesh | [
"Mathematics",
"Technology",
"Engineering"
] | 327 | [
"Information and communications technology",
"Telecommunications engineering",
"Network topology",
"Wireless networking",
"Computer networks engineering",
"Radio technology",
"Topology"
] |
9,596,122 | https://en.wikipedia.org/wiki/Switched%20mesh | A switched mesh is a type of wireless mesh network that uses multiple dedicated radios to communicate between each neighboring node in the mesh via dedicated mesh backhaul links. Nodes in a switched mesh use separate access and backhaul radios.
Each dedicated mesh link is on a separate channel, ensuring that forwarded traffic does not use any bandwidth from any other link in the mesh. At each mesh point, traffic is "switched" from one channel to the next, giving rise to the name. As a result, a switched mesh is capable of much higher capacities and transmission rates than a shared mesh and grows in capacity as nodes are added to the mesh. All of the available bandwidth of each separate radio channel is dedicated to the link to the neighboring node, meaning that total available bandwidth is the sum of the bandwidth of each of the links.
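A correspondingly rough model of a switched mesh, assuming each backhaul link really does get its own non-interfering channel (numbers are illustrative), shows the additive capacity and the hop-independent end-to-end rate:

```python
def switched_mesh_capacity(link_rates_mbps):
    """Total backhaul capacity when every mesh link has its own channel.

    Forwarded traffic does not consume bandwidth on any other link, so the
    aggregate backhaul bandwidth is the sum of the per-link rates and grows
    as links (nodes) are added.
    """
    return sum(link_rates_mbps)

def end_to_end_rate(path_link_rates_mbps):
    """End-to-end rate along a multi-hop switched path.

    The path is limited by its slowest dedicated link rather than being
    divided by the number of hops, unlike the shared-mesh case.
    """
    return min(path_link_rates_mbps)

# Three dedicated 54 Mbit/s backhaul links
links = [54, 54, 54]
print(switched_mesh_capacity(links), end_to_end_rate(links))   # 162 54
```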
Context in wireless mesh networking
Switched mesh is one of three distinct types of configuration of wireless mesh networking products in the market today:
single radio shared mesh: in the first type, one radio provides both backhaul (packet relaying) and client services (access to a laptop).
dual radio shared mesh: in the second type, one radio relays packets over multiple hops while another provides client access. This significantly improves backhaul bandwidth and latency.
switched mesh: the third type uses two or more radios for the backhaul for higher bandwidth and low latency. Third generation wireless mesh networking products are replacing previous generation products as more demanding applications like voice and video need to be relayed over many hops in the mesh network.
See also
Shared mesh
Mesh networking
IEEE 802.11
802.16
Wireless LAN
Wi-Fi
References
Wireless networking
Network topology
Radio technology | Switched mesh | [
"Mathematics",
"Technology",
"Engineering"
] | 339 | [
"Information and communications technology",
"Telecommunications engineering",
"Network topology",
"Wireless networking",
"Computer networks engineering",
"Radio technology",
"Topology"
] |
9,596,904 | https://en.wikipedia.org/wiki/F%C3%BCrst-Plattner%20Rule | The Fürst-Plattner rule (also known as the trans-diaxial effect) describes the stereoselective addition of nucleophiles to cyclohexene derivatives.
Introduction
Cyclohexene derivatives, such as imines, epoxides, and halonium ions, react with nucleophiles in a stereoselective fashion, affording trans-diaxial addition products. The term “trans-diaxial addition” describes the mechanism of the addition; however, the products are likely to equilibrate by ring flip to the lower-energy conformer, placing the new substituents in the equatorial position.
Mechanism and Stereochemistry
Epoxidation of a substituted cyclohexene affords a product where the R group resides in the pseudo-equatorial position. Nucleophilic ring-opening of this class of epoxides can occur by an attack at either the C1 or C2-position. It is well known that nucleophilic ring-opening reactions of these substrates can proceed with excellent regioselectivity. The Fürst-Plattner rule attributes this regiochemical control to a large preference for the reaction pathway that follows the more stable chair-like transition state (attack at the C1-position) compared to the one proceeding through the unfavored twist boat-like transition state (attack at the C2-position). The attack at the C1-position follows a substantially lower reaction barrier of around 5 kcal mol⁻¹ depending on the specific conditions. Similarly, the Fürst-Plattner rule applies to nucleophilic additions to imines and halonium ions.
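To see why a barrier difference of this size translates into essentially complete regioselectivity, a transition-state-theory estimate can be sketched (assuming room temperature and taking the roughly 5 kcal/mol figure from the text above; this is an illustrative calculation, not data from the cited studies):

```python
import math

def selectivity_ratio(delta_delta_g_kcal, temp_k=298.15):
    """Ratio of rates through two competing transition states.

    From transition-state theory, k1/k2 = exp(ddG / (R*T)), where ddG is the
    difference in activation free energies (kcal/mol here).
    """
    R = 1.987e-3  # gas constant in kcal / (mol K)
    return math.exp(delta_delta_g_kcal / (R * temp_k))

# A ~5 kcal/mol preference for the chair-like pathway at room temperature
print(selectivity_ratio(5.0))   # roughly 4.6e3 : 1 in favour of C1 attack
```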
Examples
Epoxide addition
A recent example of the Fürst-Plattner rule can be seen from Chrisman et al. where limonene is epoxidized to give a 1:1 mixture of diastereomers. Exposure to a nitrogen nucleophile in water at reflux provides only one ring-opened product in 75–85% ee.
Mechanism
The half-chair conformation indicates that attack occurs stereoselectively on the diastereomer where the electrophilic carbon can receive the nucleophile and proceed to the favored chair conformation.
Woodward's Reserpine Synthesis
Although not well understood at the time, the Fürst-Plattner rule played a critical role during R. B. Woodward's synthesis of Reserpine. The problematic stereocenter is highlighted in red, below.
Woodward's synthetic strategy used a Bischler-Napieralski reaction to form the tetrahydrocarbazole portion of Reserpine. The subsequent imine intermediate was treated with sodium borohydride, affording the wrong stereoisomer due to the Fürst-Plattner effect.
Examining the intermediate structure shows that the hydride preferentially added to the 3-carbon via the top face of the imine to avoid an unfavorable twist-boat intermediate. Unfortunately, this outcome required Woodward to perform several additional steps to complete the total synthesis of reserpine with the proper stereochemistry.
References
Stereochemistry | Fürst-Plattner Rule | [
"Physics",
"Chemistry"
] | 648 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
9,597,496 | https://en.wikipedia.org/wiki/Ataxia%20telangiectasia%20and%20Rad3%20related | Serine/threonine-protein kinase ATR, also known as ataxia telangiectasia and Rad3-related protein (ATR) or FRAP-related protein 1 (FRP1), is an enzyme that, in humans, is encoded by the ATR gene. It is a large kinase of about 301.66 kDa. ATR belongs to the phosphatidylinositol 3-kinase-related kinase protein family. ATR is activated in response to single strand breaks, and works with ATM to ensure genome integrity.
Function
ATR is a serine/threonine-specific protein kinase that is involved in sensing DNA damage and activating the DNA damage checkpoint, leading to cell cycle arrest in eukaryotes. ATR is activated in response to persistent single-stranded DNA, which is a common intermediate formed during DNA damage detection and repair. Single-stranded DNA occurs at stalled replication forks and as an intermediate in DNA repair pathways such as nucleotide excision repair and homologous recombination repair. ATR is activated during more persistent issues with DNA damage; within cells, most DNA damage is repaired quickly and faithfully through other mechanisms. ATR works with a partner protein called ATRIP to recognize single-stranded DNA coated with RPA. RPA binds specifically to ATRIP, which then recruits ATR through an ATR activating domain (AAD) on its surface. This association of ATR with RPA is how ATR specifically binds to and works on single-stranded DNA—this was proven through experiments with cells that had mutated nucleotide excision pathways. In these cells, ATR was unable to activate after UV damage, showing the need for single-stranded DNA for ATR activity. The acidic alpha-helix of ATRIP binds to a basic cleft in the large RPA subunit to create a site for effective ATR binding. Many other proteins that are needed for ATR activation are also recruited to the site of ssDNA. While RPA recruits ATRIP, the RAD9-RAD1-HUS1 (9-1-1) complex is loaded onto the DNA adjacent to the ssDNA; though ATRIP and the 9-1-1 complex are recruited independently to the site of DNA damage, they interact extensively through massive phosphorylation once colocalized. The 9-1-1 complex, a ring-shaped molecule related to PCNA, allows the accumulation of ATR in a damage-specific way. For effective association of the 9-1-1 complex with DNA, RAD17-RFC is also needed. This complex also brings in topoisomerase binding protein 1 (TOPBP1) which binds ATR through a highly conserved AAD. TOPBP1 binding is dependent on the phosphorylation of the Ser387 residue of the RAD9 subunit of the 9-1-1 complex. This is likely one of the main functions of the 9-1-1 complex within this DNA damage response. Another important protein that binds ATR was identified by Haahr et al. in 2016: Ewing's tumor-associated antigen 1 (ETAA1). This protein works in parallel with TOPBP1 to activate ATR through a conserved AAD. It is hypothesized that this pathway, which works independently of the TOPBP1 pathway, is used to divide labor and possibly respond to differential needs within the cell. It is hypothesized that one pathway may be most active when ATR is carrying out normal support for replicating cells, and the other may be active when the cell is under more extreme replicative stress.
It is not just ssDNA that activates ATR, though the existence of RPA associated ssDNA is important. Instead, ATR activation is heavily dependent on the existence of all the proteins previously described, that colocalize around the site of DNA damage. An experiment where RAD9, ATRIP, and TOPBP1 were overexpressed proved that these proteins alone were enough to activate ATR in the absence of ssDNA, showing their importance in triggering this pathway.
Once ATR is activated, it phosphorylates Chk1, initiating a signal transduction cascade that culminates in cell cycle arrest. It acts to activate Chk1 through a claspin intermediate which binds the two proteins together. This claspin intermediate needs to be phosphorylated at two sites in order to do this job, something that can be carried out by ATR but is most likely under the control of some other kinase. This response, mediated by Chk1, is essential to regulating replication within a cell; through the Chk1-CDC25 pathway, which affects levels of CDC2, this response is thought to reduce the rate of DNA synthesis in the cell and inhibit origin firing during replication. In addition to its role in activating the DNA damage checkpoint, ATR is thought to function in unperturbed DNA replication. The response is dependent on how much ssDNA accumulates at stalled replication forks. ATR is activated during every S phase, even in normally cycling cells, as it works to monitor replication forks to repair and stop cell cycling when needed. This means that ATR is activated at normal, background levels within all healthy cells. There are many points in the genome that are susceptible to stalling during replication due to complex sequences of DNA or endogenous damage that occurs during the replication. In these cases, ATR works to stabilize the forks so that DNA replication can occur as it should.
ATR is related to a second checkpoint-activating kinase, ATM, which is activated by double strand breaks in DNA or chromatin disruption. ATR has also been shown to work on double strand breaks (DSBs), acting as a slower response to address the common end resections that occur in DSBs and thus leave long strands of ssDNA (which then go on to signal ATR). In this circumstance, ATM recruits ATR and they work in partnership to respond to this DNA damage. They are responsible for the "slow" DNA damage response that can eventually trigger p53 in healthy cells and thus lead to cell cycle arrest or apoptosis.
ATR as an essential protein
Mutations in ATR are very uncommon. The total knockout of ATR is responsible for early death of mouse embryos, showing that it is a protein with essential life functions. It is hypothesized that this could be related to its likely activity in stabilizing Okazaki fragments on the lagging strands of DNA during replication, or due to its job stabilizing stalled replication forks, which naturally occur. In this setting, ATR is essential to preventing fork collapse, which would lead to extensive double strand breakage across the genome. The accumulation of these double strand breaks could lead to cell death.
Clinical significance
Mutations in ATR are responsible for Seckel syndrome, a rare human disorder that shares some characteristics with ataxia telangiectasia, which results from ATM mutation.
ATR is also linked to familial cutaneous telangiectasia and cancer syndrome.
Inhibitors
ATR/Chk1 inhibitors can potentiate the effect of DNA cross-linking agents such as cisplatin and nucleoside analogues such as gemcitabine. The first clinical trials using inhibitors of ATR have been initiated by AstraZeneca, preferably in ATM-mutated chronic lymphocytic leukaemia (CLL), prolymphocytic leukaemia (PLL) or B-cell lymphoma patients, and by Vertex Pharmaceuticals in advanced solid tumours. ATR provides an exciting point for potential targeting in these solid tumors, as many tumors function through activating the DNA damage response. These tumor cells rely on pathways like ATR to reduce replicative stress within the cancerous cells that are uncontrollably dividing, and thus these same cells could be very susceptible to ATR knockout. In ATR-Seckel mice, after exposure to cancer-causing agents, the impaired DNA damage response pathway actually conferred resistance to tumor development (6). After many screens to identify specific ATR inhibitors, four have made it into phase I or phase II clinical trials since 2013; these include AZD6738, M6620 (VX-970), BAY1895344 (Elimusertib), and M4344 (VX-803) (10). These ATR inhibitors work to help the cell proceed through p53-independent apoptosis, as well as force mitotic entry that leads to mitotic catastrophe.
One study by Flynn et al. found that ATR inhibitors work especially well in cancer cells which rely on the alternative lengthening of telomeres (ALT) pathway. This is due to RPA presence when ALT is being established, which recruits ATR to regulate homologous recombination. This ALT pathway is extremely fragile under ATR inhibition, and thus using these inhibitors to target the pathway that keeps cancer cells immortal could provide high specificity to stubborn cancer cells.
Examples include
Berzosertib
Aging
Deficiency of ATR expression in adult mice leads to the appearance of age-related alterations such as hair graying, hair loss, kyphosis (rounded upper back), osteoporosis and thymic involution. Furthermore, there are dramatic reductions with age in tissue-specific stem and progenitor cells, and exhaustion of tissue renewal and homeostatic capacity. There was also an early and permanent loss of spermatogenesis. However, there was no significant increase in tumor risk.
Seckel syndrome
In humans, hypomorphic mutations (partial loss of gene function) in the ATR gene are linked to Seckel syndrome, an autosomal recessive condition characterized by proportionate dwarfism, developmental delay, marked microcephaly, dental malocclusion and thoracic kyphosis. A senile or progeroid appearance has also been frequently noted in Seckel patients. For many years, the mutations found in the two families first diagnosed with Seckel syndrome were the only mutations known to cause the disease.
In 2012, Ogi and colleagues discovered multiple new mutations that also caused the disease. One form of the disease, which involved mutation in genes encoding the ATRIP partner protein, is considered more severe than the form that was first discovered. This mutation led to severe microcephaly and growth delay, microtia, micrognathia, dental crowding, and skeletal issues (evidenced in unique patellar growth). Sequencing revealed that this ATRIP mutation occurred most likely due to missplicing, which led to fragments of the gene without exon 2. The cells also had a nonsense mutation in exon 12 of the ATR gene which led to a truncated ATR protein. Both of these mutations resulted in lower levels of ATR and ATRIP than in wild-type cells, leading to an insufficient DNA damage response and the severe form of Seckel syndrome noted above.
Researchers also found that heterozygous mutations in ATR were responsible for causing Seckel Syndrome. Two novel mutations in one copy of the ATR gene caused under-expression of both ATR and ATRIP.
Homologous recombinational repair
Somatic cells of mice deficient in ATR have a decreased frequency of homologous recombination and an increased level of chromosomal damage. This finding implies that ATR is required for homologous recombinational repair of endogenous DNA damage.
Drosophila mitosis and meiosis
Mei-41 is the Drosophila ortholog of ATR. During mitosis in Drosophila, DNA damage caused by exogenous agents is repaired by a homologous recombination process that depends on mei-41(ATR). Mutants defective in mei-41(ATR) have increased sensitivity to killing by exposure to the DNA-damaging agents UV and methyl methanesulfonate. Deficiency of mei-41(ATR) also causes reduced spontaneous allelic recombination (crossing over) during meiosis, suggesting that wild-type mei-41(ATR) is employed in recombinational repair of spontaneous DNA damage during meiosis.
Interactions
Ataxia telangiectasia and Rad3-related protein has been shown to interact with:
BRCA1,
CHD4,
HDAC2,
MSH2,
P53
RAD17, and
RHEB.
See also
Ceralasertib, investigational new drug
References
Further reading
External links
Drosophila meiotic-41 - The Interactive Fly
Proteins
EC 2.7.11 | Ataxia telangiectasia and Rad3 related | [
"Chemistry"
] | 2,621 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,597,934 | https://en.wikipedia.org/wiki/Landslide%20classification | Various classifications of landslides are known. Broad definitions include forms of mass movement that narrower definitions exclude. For example, the McGraw-Hill Encyclopedia of Science and Technology distinguishes the following types of landslides:
fall (by undercutting)
fall (by toppling)
slump
rockslide
earthflow
sinkholes, mountain side
rockslide that develops into rock avalanche
Influential narrower definitions restrict landslides to slumps and translational slides in rock and regolith, not involving fluidisation. This excludes falls, topples, lateral spreads, and mass flows from the definition.
The causes of landslides are usually related to instabilities in slopes. It is usually possible to identify one or more landslide causes and one landslide trigger. The difference between these two concepts is subtle but important. The landslide causes are the reasons that a landslide occurred in that location and at that time and may be considered to be factors that made the slope vulnerable to failure, that predispose the slope to becoming unstable. The trigger is the single event that finally initiated the landslide. Thus, causes combine to make a slope vulnerable to failure, and the trigger finally initiates the movement. Landslides can have many causes but can only have one trigger. Usually, it is relatively easy to determine the trigger after the landslide has occurred (although it is generally very difficult to determine the exact nature of landslide triggers ahead of a movement event).
Classification factors
Various scientific disciplines have developed taxonomic classification systems to describe natural phenomena or individuals, such as plants or animals. These systems are based on specific characteristics such as the shape of organs or the nature of reproduction. In contrast, landslide classification presents great difficulties because the phenomena are not perfectly repeatable; they are usually characterised by different causes, movements and morphology, and involve genetically different material. For this reason, landslide classifications are based on different discriminating factors, sometimes very subjective. In the following write-up, factors are discussed by dividing them into two groups: the first one is made up of the criteria utilised in the most widespread classification systems that can generally be easily determined. The second one is formed by those factors that have been utilised in some classifications and can be useful in descriptions.
A1) Type of movement
This is the most important criterion, even if uncertainties and difficulties can arise in the identification of movements, being the mechanisms of some landslides often particularly complex. The main movements are falls, slides and flows, but usually topples, lateral spreading and complex movements are added to these.
A2) Involved material
Rock, earth and debris are the terms generally used to distinguish the materials involved in the landslide process. For example, the distinction between earth and debris is usually made by comparing the percentage of coarse grain size fractions. If the weight of the particles with a diameter greater than 2 mm is less than 20%, the material will be defined as earth; in the opposite case, it is debris.
A3) Activity
The classification of a landslide based on its activity is particularly relevant in the evaluation of future events. The recommendations of the WP/WLI (1993) define the concept of activity with reference to the spatial and temporal conditions, defining the state, the distribution and the style. The first term describes the information regarding the time in which the movement took place, permitting information to be available on future evolution, the second term describes, in a general way, where the landslide is moving and the third term indicates how it is moving.
A4) Movement velocity
This factor has a great importance in the hazard evaluation. A velocity range is connected to the different type of landslides, on the basis of observation of case history or site observations.
B1) The age of the movement
Landslide dating is an interesting topic in the evaluation of hazard. The knowledge of the landslide frequency is a fundamental element for any kind of probabilistic evaluation. Furthermore, the evaluation of the age of the landslide permits the trigger to be correlated with specific conditions, such as earthquakes or periods of intense rain. It is possible that phenomena occurred in past geological times, under specific environmental conditions which no longer act as agents today. For example, in some Alpine areas, landslides of the Pleistocene age are connected with particular tectonic, geomorphological and climatic conditions.
B2) Geological conditions
This represents a fundamental factor in the morphological evolution of a slope. Bedding attitude and the presence of discontinuities or faults control slope morphogenesis.
B3) Morphological characteristics
As the landslide is a geological volume with a hidden side, morphological characteristics are extremely important in the reconstruction of the technical model.
B4) Geographical location
This criterion describes, in a general way, the location of landslides in the physiographic context of the area. Some authors have therefore identified landslides according to their geographical position so that it is possible to describe "alpine landslides", "landslides in plains", "hilly landslides" or "cliff landslides". As a consequence, specific morphological contexts are referred characterised by slope evolution processes.
B5) Topographical criteria
With these criteria, landslides can be identified with a system similar to that of the denomination of formations. Consequently, it is possible to describe a landslide using the name of a site. In particular, the name will be that of the locality where a landslide of a specific characteristic type occurred.
B6) Type of climate
These criteria give particular importance to climate in the genesis of phenomena for which similar geological conditions can, in different climatic conditions, lead to totally different morphological evolution. As a consequence, in the description of a landslide, it can be interesting to understand in what type of climate the event occurred.
B7) Causes of the movements
In the evaluation of landslide susceptibility, identifying the causes and triggers is an important step. Terzaghi describes causes as "internal" and "external", referring to modifications in the conditions of stability of the bodies. Whilst the internal causes induce modifications in the material itself which decrease its resistance to shear stress, the external causes generally induce an increase of shear stress, so that blocks or bodies are no longer stable. The triggering causes induce the movement of the mass. Predisposition to movement due to control factors is decisive in landslide evolution. Structural and geological factors, as already described, can determine the development of the movement, inducing the presence of a mass in kinematic freedom.
Types and classification
In traditional usage, the term landslide has at one time or another been used to cover almost all forms of mass movement of rocks and regolith at the Earth's surface. In 1978, in a very highly cited publication, David Varnes noted this imprecise usage and proposed a new, much tighter scheme for the classification of mass movements and subsidence processes. This scheme was later modified by Cruden and Varnes in 1996, and influentially refined by Hutchinson (1988) and Hungr et al. (2001). This full scheme results in the following classification for mass movements in general, where bold font indicates the landslide categories:
Under this definition, landslides are restricted to "the movement... of shear strain and displacement along one or several surfaces that are visible or may reasonably be inferred, or within a relatively narrow zone", i.e., the movement is localised to a single failure plane within the subsurface. He noted landslides can occur catastrophically, or that movement on the surface can be gradual and progressive. Falls (isolated blocks in free-fall), topples (material coming away by rotation from a vertical face), spreads (a form of subsidence), flows (fluidised material in motion), and creep (slow, distributed movement in the subsurface) are all explicitly excluded from the term landslide.
Under the scheme, landslides are sub-classified by the material that moves, and by the form of the plane or planes on which movement happens. The planes may be broadly parallel to the surface ("translational slides") or spoon-shaped ("rotational slides"). Material may be rock or regolith (loose material at the surface), with regolith subdivided into debris (coarse grains) and earth (fine grains).
Nevertheless, in broader usage, many of the categories that Varnes excluded are recognised as landslide types, as seen below. This leads to ambiguity in usage of the term.
The following clarifies the usages of the various terms in the table. Varnes and those who later modified his scheme only regard the slides category as forms of landslide.
Falls
Description: " the detachment of soil or rock from a steep slope along a surface on which little or no shear displacement takes place. The material then descends mainly through the air by falling, bouncing, or rolling" (Varnes, 1996).
Secondary falls: "Secondary falls involves rock bodies already physically detached from cliff and merely lodged upon it" (Hutchinson, 1988)
Speed: from very to extremely rapid
Type of slope: slope angle 45–90 degrees
Control factor: Discontinuities
Causes: Vibration, undercutting, differential weathering, excavation, or stream erosion
Topples
Description: "Toppling is the forward rotation out of the slope of a mass of soil or rock about a point or axis below the centre of gravity of the displaced mass. Toppling is sometimes driven by gravity exerted by material upslope of the displaced mass and sometimes by water or ice in cracks in the mass" (Varnes, 1996)
Speed: extremely slow to extremely rapid
Type of slope: slope angle 45–90 degrees
Control factor: Discontinuities, lithostratigraphy
Causes: Vibration, undercutting, differential weathering, excavation, or stream erosion
Slides
"A slide is a downslope movement of soil or rock mass occurring dominantly on the surface of rupture or on relatively thin zones of intense shear strain." (Varnes, 1996)
Translational slide
Description: "In translational slides the mass displaces along a planar or undulating surface of rupture, sliding out over the original ground surface." (Varnes, 1996)
Speed: extremely slow to extremely rapid (>5 m/s)
Type of slope: slope angle 20-45 degrees
Control factor: Discontinuities, geological setting
Rotational slides
Description: "Rotational slides move along a surface of rupture that is curved and concave" (Varnes, 1996)
Speed: extremely slow to extremely rapid
Type of slope: slope angle 20–40 degrees
Control factor: morphology and lithology
Causes: Vibration, undercutting, differential weathering, excavation, or stream erosion
Spreads
"Spread is defined as an extension of a cohesive soil or rock mass combined with a general subsidence of the fractured mass of cohesive material into softer underlying material." (Varnes, 1996).
"In spread, the dominant mode of movement is lateral extension accommodated by shear or tensile fractures" (Varnes, 1978)
Speed: extremely slow to extremely rapid (>5 m/s)
Type of slope: angle 45–90 degrees
Control factor: Discontinuities, lithostratigraphy
Causes: Vibration, undercutting, differential weathering, excavation, or stream erosion
Flows
A flow is a spatially continuous movement in which surfaces of shear are short-lived, closely spaced, and usually not preserved. The distribution of velocities in the displacing mass resembles that in a viscous liquid. The lower boundary of displaced mass may be a surface along which appreciable differential movement has taken place or a thick zone of distributed shear (Cruden & Varnes, 1996)
Flows in rock
Rock Flow
Description: "Flow movements in bedrock include deformations that are distributed among many large or small fractures, or even microfracture, without concentration of displacement along a through-going fracture" (Varnes, 1978)
Speed: extremely slow
Type of slope: angle 45–90 degrees
Causes: Vibration, undercutting, differential weathering, excavation, or stream erosion
Rock avalanche (Sturzstrom)
Description: "Extremely rapid, massive, flow-like motion of fragmented rock from a large rock slide or rock fall" (Hungr, 2001)
Speed: extremely rapid
Type of slope: angle 45–90 degrees
Control factor: Discontinuities, lithostratigraphy
Causes: Vibration, undercutting, differential weathering, excavation or stream erosion
Flows in soil
Debris flow
Description: "Debris flow is a very rapid to extremely rapid flow of saturated non-plastic debris in a steep channel" (Hungr et al.,2001)
Speed: very rapid to extremely rapid (>5 m/s)
Type of slope: angle 20–45 degrees
Control factor: torrent sediments, water flows
Causes: High intensity rainfall
Debris avalanche
Description: "Debris avalanche is a very rapid to extremely rapid shallow flow of partially or fully saturated debris on a steep slope, without confinement in an established channel." (Hungr et al., 2001)
Speed: very rapid to extremely rapid (>5 m/s)
Type of slope: angle 20–45 degrees
Control factor: morphology, regolith
Causes: High intensity rainfalls
Earth flow
Description: "Earth flow is a rapid or slower, intermittent flow-like movement of plastic, clayey earth." (Hungr et al.,2001)
Speed: slow to rapid (>1.8 m/h)
Type of slope: slope angle 5–25 degrees
Control factor: lithology
Mudflow
Description: "Mudflow is a very rapid to extremely rapid flow of saturated plastic debris in a channel, involving significantly greater water content relative to the source material (Plasticity index> 5%)." (Hungr et al.,2001)
Speed: very rapid to extremely rapid (>5 m/s)
Type of slope: angle 20–45 degrees
Control factor: torrent sediments, water flows
Causes: High intensity rainfall
Complex movement
Description: Complex movement is a combination of falls, topples, slides, spreads and flows
Causes
Landslide causes include geological factors, morphological factors, physical factors and factors associated with human activity.
Geological causes
Weathered materials
Sheared materials
Jointed or fissured materials
Adversely orientated discontinuities
Permeability contrasts
Material contrasts
Rainfall and snow fall
Earthquakes
Morphological causes
Slope angle
Uplift
Rebound
Fluvial erosion
Wave erosion
Glacial erosion
Erosion of lateral margins
Subterranean erosion
Internal erosion
Slope loading
Vegetation change
Erosion
Physical causes
Topography:
Slope aspect and gradient
Geological factors:
Discontinuity factors (dip spacing, asperity, dip and length)
Physical characteristics of the rock (rock strength etc.)
Tectonic activity:
Seismic activity (earthquakes)
Volcanic eruption
Physical weathering:
Thawing
Freeze-thaw
Soil erosion
Hydrogeological factors:
Intense rainfall
Rapid snow melt
Prolonged precipitation
Ground water changes (rapid drawdown)
Soil pore water pressure
Surface runoff
Human causes
Deforestation
Excavation
Loading
Water management (groundwater drawdown and water leakage)
Land use (e.g. construction of roads, houses etc.)
Mining and quarrying
Vibration
Occasionally, even after detailed investigations, no trigger can be determined - this was the case in the large Aoraki / Mount Cook landslide in New Zealand in 1991. It is unclear whether the lack of a trigger in such cases is the result of some unknown process acting within the landslide, or whether there was in fact a trigger, but it cannot be determined. The trigger may be due to a slow but steady decrease in material strength associated with the weathering of the rock - at some point the material becomes so weak that failure must occur. Hence, the trigger is the weathering process, but this is not detectable externally.
In most cases a trigger is thought as an external stimulus that induces an immediate or near-immediate response in the slope, in this case in the form of the movement of the landslide. Generally, this movement is induced either because the stresses in the slope are altered by increasing shear stress or decreasing the effective normal stress, or by reducing the resistance to the movement perhaps by decreasing the shear strength of the materials within the landslide.
Rainfall
In the majority of cases the main trigger of landslides is heavy or prolonged rainfall. Generally this takes the form of either an exceptional short lived event, such as the passage of a tropical cyclone or even the rainfall associated with a particularly intense thunderstorm or of a long duration rainfall event with lower intensity, such as the cumulative effect of monsoon rainfall in South Asia. In the former case it is usually necessary to have very high rainfall intensities, whereas in the latter the intensity of rainfall may be only moderate - it is the duration and existing pore water pressure conditions that are important.
The importance of rainfall as a trigger for landslides cannot be overestimated. A global survey of landslide occurrence in the 12 months to the end of September 2003 revealed that there were 210 damaging landslide events worldwide. Of these, over 90% were triggered by heavy rainfall. For example, one rainfall event in Sri Lanka in May 2003 triggered hundreds of landslides, killing 266 people and rendering over 300,000 people temporarily homeless. In July 2003, an intense rain band associated with the annual Asian monsoon tracked across central Nepal, triggering 14 fatal landslides that killed 85 people. The reinsurance company Swiss Re estimated that rainfall-induced landslides along the west coast of North, Central and South America associated with the 1997-1998 El Niño event resulted in over $5 billion in losses. Finally, landslides triggered by Hurricane Mitch in 1998 killed an estimated 18,000 people in Honduras, Nicaragua, Guatemala and El Salvador.
Rainfall triggers a large number of landslides principally because the rainfall drives an increase in pore water pressure within the soil. Figure A illustrates the forces acting on an unstable block on a slope. Movement is driven by shear stress, which is generated by the mass of the block acting under gravity down the slope. Resistance to movement is the result of the normal load. When the slope fills with water, the fluid pressure provides the block with buoyancy, reducing the resistance to movement. In addition, in some cases fluid pressures can act down the slope as a result of groundwater flow to provide a hydraulic push to the landslide that further decreases the stability. Whilst the example given in Figures A and B is clearly an artificial situation, the mechanics are essentially as per a real landslide.
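A minimal infinite-slope sketch of this force balance is given below; it shows how a rise in pore water pressure lowers the effective normal stress and hence the factor of safety. All parameter values are illustrative, and a real slope stability analysis would be considerably more involved:

```python
import math

def factor_of_safety(slope_deg, depth_m, unit_weight, cohesion,
                     friction_deg, pore_pressure):
    """Infinite-slope factor of safety for a planar failure surface.

    FS = resisting shear strength / driving shear stress, using the
    Mohr-Coulomb strength  c' + (sigma_n - u) * tan(phi'), where
      sigma_n = gamma * z * cos^2(beta)            (normal stress on the plane)
      tau     = gamma * z * sin(beta) * cos(beta)  (driving shear stress)
    Rising pore pressure u reduces the effective normal stress and
    therefore the factor of safety, as described in the text.
    """
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    sigma_n = unit_weight * depth_m * math.cos(beta) ** 2
    tau = unit_weight * depth_m * math.sin(beta) * math.cos(beta)
    strength = cohesion + (sigma_n - pore_pressure) * math.tan(phi)
    return strength / tau

# Dry vs. wet slope (illustrative numbers only): FS drops below 1 when saturated
print(factor_of_safety(30, 2.0, 19.0, 5.0, 32, pore_pressure=0.0))   # ~1.4
print(factor_of_safety(30, 2.0, 19.0, 5.0, 32, pore_pressure=12.0))  # ~0.9
```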
In some situations, the presence of high levels of fluid may destabilise the slope through other mechanisms, such as:
Fluidization of debris from earlier events to form debris flows;
Loss of suction forces in silty materials, leading to generally shallow failures (this may be an important mechanism in residual soils in tropical areas following deforestation);
Undercutting of the toe of the slope through river erosion.
Destabilizing of non-lithified earth materials through soil-piping.
Considerable efforts have been made to understand the triggers for landsliding in natural systems, with quite variable results. For example, working in Puerto Rico, Larsen and Simon found that storms with a total precipitation of 100–200 mm, about 14 mm of rain per hour for several hours, or 2–3 mm of rain per hour for about 100 hours can trigger landslides in that environment. Rafi Ahmad, working in Jamaica, found that for rainfall of short duration (about 1 hour) intensities of greater than 36 mm/h were required to trigger landslides. On the other hand, for long rainfall durations, low average intensities of about 3 mm/h appeared to be sufficient to cause landsliding as the storm duration approached approximately 100 hours.
Corominas and Moya (1999) found that the following thresholds exist for the upper basin of the Llobregat River, Eastern Pyrenees area. Without antecedent rainfall, high intensity and short duration rains triggered debris flows and shallow slides developed in colluvium and weathered rocks. A rainfall threshold of around 190 mm in 24 h initiated failures whereas more than 300 mm in 24-48 h were needed to cause widespread shallow landsliding. With antecedent rain, moderate intensity precipitation of at least 40 mm in 24 h reactivated mudslides and both rotational and translational slides affecting clayey and silty-clayey formations. In this case, several weeks and 200 mm of precipitation were needed to cause landslide reactivation. A similar approach is reported by Brand et al. (1988) for Hong Kong, who found that if the 24-hour antecedent rainfall exceeded 200 mm then the rainfall threshold for a large landslide event was 70 mm·h−1. Finally, Caine (1980) established a worldwide threshold:
I = 14.82 D^(−0.39) where: I is the rainfall intensity (mm/h), D is duration of rainfall (h)
This threshold applies over time periods of 10 minutes to 10 days. It is possible to modify the formula to take into consideration areas with high mean annual precipitations by considering the proportion of mean annual precipitation represented by any individual event.
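A small sketch applying this threshold, comparing a storm's mean intensity with the critical intensity for its duration (the example numbers are illustrative):

```python
def caine_threshold_intensity(duration_h):
    """Caine (1980) global rainfall intensity-duration threshold.

    Returns the critical mean intensity (mm/h) for a storm of the given
    duration; valid for durations of roughly 10 minutes to 10 days.
    """
    return 14.82 * duration_h ** -0.39

def exceeds_threshold(total_rain_mm, duration_h):
    """True if a storm's mean intensity lies above the Caine threshold."""
    return total_rain_mm / duration_h > caine_threshold_intensity(duration_h)

# e.g. 200 mm falling in 24 h: mean intensity ~8.3 mm/h vs threshold ~4.3 mm/h
print(exceeds_threshold(200, 24))   # True
```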
Other techniques can be used to try to understand rainfall triggers, including:
• Actual rainfall techniques, in which measurements of rainfall are adjusted for potential evapotranspiration and then correlated with landslide movement events
• Hydrogeological balance approaches, in which pore water pressure response to rainfall is used to understand the conditions under which failures are initiated
• Coupled rainfall - stability analysis methods, in which pore water pressure response models are coupled to slope stability models to try to understand the complexity of the system
• Numerical slope modelling, in which finite element (or similar) models are used to try to understand the interactions of all relevant processes
Seismicity
The second major factor in the triggering of landslides is seismicity. Landslides occur during earthquakes as a result of two separate but interconnected processes: seismic shaking and pore water pressure generation.
Seismic shaking
The passage of the earthquake waves through the rock and soil produces a complex set of accelerations that effectively act to change the gravitational load on the slope. So, for example, vertical accelerations successively increase and decrease the normal load acting on the slope. Similarly, horizontal accelerations induce a shearing force due to the inertia of the landslide mass during the accelerations. These processes are complex, but can be sufficient to induce failure of the slope. These processes can be much more serious in mountainous areas in which the seismic waves interact with the terrain to produce increases in the magnitude of the ground accelerations. This process is termed 'topographic amplification'. The maximum acceleration is usually seen at the crest of the slope or along the ridge line, meaning that it is a characteristic of seismically triggered landslides that they extend to the top of the slope.
Liquefaction
The passage of the earthquake waves through a granular material such as a soil can induce a process termed liquefaction, in which the shaking causes a reduction in the pore space of the material. This densification drives up the pore pressure in the material. In some cases this can change a granular material into what is effectively a liquid, generating 'flow slides' that can be rapid and thus very damaging. Alternatively, the increase in pore pressure can reduce the normal stress in the slope, allowing the activation of translational and rotational failures.
The nature of seismically-triggered landslides
For the most part, seismically generated landslides do not differ in their morphology and internal processes from those generated under non-seismic conditions. However, they tend to be more widespread and sudden. The most abundant types of earthquake-induced landslides are rock falls and slides of rock fragments that form on steep slopes. However, almost every other type of landslide is possible, including highly disaggregated and fast-moving falls; more coherent and slower-moving slumps, block slides, and earth slides; and lateral spreads and flows that involve partly to completely liquefied material (Keefer, 1999). Rock falls, disrupted rock slides, and disrupted slides of earth and debris are the most abundant types of earthquake-induced landslides, whereas earth flows, debris flows, and avalanches of rock, earth, or debris typically transport material the farthest. There is one type of landslide that is essentially unique to earthquakes - liquefaction failure, which can cause fissuring or subsidence of the ground. Liquefaction involves the temporary loss of strength of sands and silts, which behave as viscous fluids rather than as soils. This can have devastating effects during large earthquakes.
Volcanic activity
Some of the largest and most destructive landslides known have been associated with volcanoes. These can occur either in association with the eruption of the volcano itself, or as a result of mobilisation of the very weak deposits that are formed as a consequence of volcanic activity. Essentially, there are two main types of volcanic landslide: lahars and debris avalanches, the largest of which are sometimes termed sector collapses.
An example of a lahar was seen at Mount St Helens during its catastrophic eruption on May 18, 1980.
Failures on volcanic flanks themselves are also common. For example, a part of the side of Casita Volcano in Nicaragua collapsed on October 30, 1998, during the heavy precipitation associated with the passage of Hurricane Mitch. Debris from the initial small failure eroded older deposits from the volcano and incorporated additional water and wet sediment from along its path, increasing in volume about ninefold. The lahar killed more than 2,000 people as it swept over the towns of El Porvenir and Rolando Rodriguez at the base of the mountain.
Debris avalanches commonly occur at the same time as an eruption, but occasionally they may be triggered by other factors such as a seismic shock or heavy rainfall. They are particularly common on stratovolcanoes and can be massively destructive due to their large size. The most famous debris avalanche occurred at Mount St Helens during the massive eruption in 1980. On May 18, 1980, at 8:32 a.m. local time, a magnitude 5.1 earthquake shook Mount St. Helens. The bulge and surrounding area slid away in a gigantic rockslide and debris avalanche, releasing pressure and triggering a major pumice and ash eruption of the volcano. The debris avalanche was immense in volume, traveled at high speed, and buried a large area, killing 57 people.
Snowmelt
In many cold mountain areas, snowmelt can be a key mechanism by which landslide initiation can occur. This can be especially significant when sudden increases in temperature lead to rapid melting of the snow pack. This water can then infiltrate into the ground, which may have impermeable layers below the surface due to still-frozen soil or rock, leading to rapid increases in pore water pressure, and resultant landslide activity. This effect can be especially serious when the warmer weather is accompanied by precipitation, which both adds to the groundwater and accelerates the rate of thawing.
Water-level change
Rapid changes in the groundwater level along a slope can also trigger landslides. This is often the case where a slope is adjacent to a water body or a river. When the water level adjacent to the slope falls rapidly the groundwater level frequently cannot dissipate quickly enough, leaving an artificially high water table. This subjects the slope to higher than normal shear stresses, leading to potential instability.
This is probably the most important mechanism by which river bank materials fail, being significant after a flood as the river level is declining (i.e. on the falling limb of the hydrograph).
It can also be significant in coastal areas when sea level falls after a storm tide, or when the water level of a reservoir or even a natural lake rapidly falls. The most famous example of this is the Vajont failure, when a rapid decline in lake level contributed to the occurrence of a landslide that killed over 2000 people. Numerous huge landslides also occurred in the Three Gorges (TG) after the construction of the TG dam.
Rivers
In some cases, failures are triggered as a result of undercutting of the slope by a river, especially during a flood. This undercutting serves both to increase the gradient of the slope, reducing stability, and to remove toe weighting, which also decreases stability. For example, in Nepal this process is often seen after a glacial lake outburst flood, when toe erosion occurs along the channel. Immediately after the passage of flood waves extensive landsliding often occurs. This instability can continue to occur for a long time afterwards, especially during subsequent periods of heavy rain and flood events.
Colluvium-filled bedrock hollows
Colluvium-filled bedrock hollows are the cause of many shallow earth landslides in steep mountainous terrain. They can form as a U- or V-shaped trough as local bedrock variations reveal areas in the bedrock which are more prone to weathering than other locations on the slope. As the weathered bedrock turns to soil, there is a greater elevation difference between the soil level and the hard bedrock. With the introduction of water and the thick soil, there is less cohesion and the soil flows out in a landslide. With every landslide more bedrock is scoured out and the hollow becomes deeper. After time, colluvium fills the hollow, and the sequence starts again.
See also
Landslide
Landslide mitigation
David J. Varnes
References
Further reading
Caine, N., 1980. The rainfall intensity-duration control of shallow landslides and debris flows. Geografiska Annaler, 62A, 23–27.
Coates, D. R. (1977) - Landslide prospectives. In: Landslides (D.R. Coates, Ed.) Geological Society of America, pp. 3–38.
Corominas, J. and Moya, J. 1999. Reconstructing recent landslide activity in relation to rainfall in the Llobregat River basin, Eastern Pyrenees, Spain. Geomorphology, 30, 79–93.
Cruden, D.M. and Varnes, D.J. (1996) - Landslide types and processes. In: Turner, A.K. and Schuster, R.L. (eds) Landslides: Investigation and Mitigation. Transportation Research Board, Special Report 247, pp. 36–75.
Hungr, O., Evans, S.G., Bovis, M. and Hutchinson, J.N. (2001) Review of the classification of landslides of the flow type. Environmental and Engineering Geoscience, VII, 221–238.
Hutchinson, J.N.: Mass Movement. In: The Encyclopedia of Geomorphology (Fairbridge, R.W., ed.), Reinhold Book Corp., New York, pp. 688–696, 1968.
Sharpe, C.F.S.: Landslides and Related Phenomena. A Study of Mass Movements of Soil and Rock. Columbia Univ. Press, New York, 137 pp., 1938.
Keefer, D.K. (1984) Landslides caused by earthquakes. Bulletin of the Geological Society of America 95, 406-421
Varnes, D.J.: Slope movement types and processes. In: Schuster, R.L. & Krizek, R.J. (eds), Landslides, Analysis and Control. Transportation Research Board Sp. Rep. No. 176, Nat. Acad. of Sciences, pp. 11–33, 1978.
Terzaghi, K.: Mechanism of Landslides. In: Engineering Geology (Berkey Volume), The Geological Society of America, New York, 1950.
WP/ WLI. 1993. A suggested method for describing the activity of a landslide. Bulletin of the International Association of Engineering Geology, No. 47, pp. 53–57
Dunne, Thomas. JAWRA Journal of the American Water Resources Association, Vol. 34, No. 4, August 1998 (article first published online 8 June 2007; registration required).
2016, Ventura County Star. A driveway in Camarillo, California (466 E. Highland Ave., Camarillo, CA) sinks and a landslide ensues engulfing the driveway within minutes.
Geomorphology
Sedimentology
Landslide analysis, prevention and mitigation
Landslides
Articles containing video clips
Classification systems by subject | Landslide classification | [
"Environmental_science"
] | 6,584 | [
"Environmental soil science",
" prevention and mitigation",
"Landslide analysis"
] |
4,249,694 | https://en.wikipedia.org/wiki/Coupling%20%28electronics%29 | In electronics, electric power and telecommunication, coupling is the transfer of electrical energy from one circuit to another, or between parts of a circuit. Coupling can be deliberate as part of the function of the circuit, or it may be undesirable, for instance due to coupling to stray fields. For example, energy is transferred from a power source to an electrical load by means of conductive coupling, which may be either resistive or direct coupling. An AC potential may be transferred from one circuit segment to another having a DC potential by use of a capacitor. Electrical energy may be transferred from one circuit segment to another segment with different impedance by use of a transformer; this is known as impedance matching. These are examples of electrostatic (capacitive) and electrodynamic (inductive) coupling, respectively.
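As a simple illustration of capacitive (AC) coupling, the series coupling capacitor and the input resistance of the following stage form a first-order high-pass filter. The sketch below uses the standard corner-frequency formula with hypothetical component values, not figures taken from this article.

```python
import math

def coupling_corner_frequency(capacitance_f, load_resistance_ohm):
    """-3 dB corner frequency (Hz) of the high-pass formed by a series coupling
    capacitor and the input resistance of the next stage: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * load_resistance_ohm * capacitance_f)

# A 1 uF coupling capacitor driving a 10 kilo-ohm input passes signals above ~16 Hz
# while blocking the DC potential of the preceding stage.
print(coupling_corner_frequency(1e-6, 10e3))
```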
Types
Electrical conduction:
Direct coupling, also called conductive coupling and galvanic coupling
Resistive conduction
Atmospheric plasma channel coupling
Electromagnetic induction:
Electrodynamic induction — commonly called inductive coupling, also magnetic coupling
Capacitive coupling
Evanescent wave coupling
Electromagnetic radiation:
Radio waves — Wireless telecommunications.
Electromagnetic interference (EMI) — Sometimes called radio frequency interference (RFI), is unwanted coupling. Electromagnetic compatibility (EMC) requires techniques to avoid such unwanted coupling, such as electromagnetic shielding.
Microwave power transmission
Other kinds of energy coupling:
Acoustic coupler
See also
Antenna noise temperature
Coupling loss
Aperture-to-medium coupling loss
Coupling coefficient of resonators
Directional coupler
Equilibrium length
Optocoupler
Fiber-optic coupling
Loading coil
Shield
List of electronics topics
AC Coupling
Impedance matching
Impedance bridging
Decoupling
Crosstalk
Wireless power transfer
References
Communication circuits
Electromagnetic compatibility
Electronics | Coupling (electronics) | [
"Engineering"
] | 341 | [
"Electromagnetic compatibility",
"Radio electronics",
"Telecommunications engineering",
"Electrical engineering",
"Communication circuits"
] |
4,249,844 | https://en.wikipedia.org/wiki/Ridged%20mirror | In atomic physics, a ridged mirror (or ridged atomic mirror, or Fresnel diffraction mirror) is a kind of atomic mirror, designed for the specular reflection of neutral particles (atoms) coming at a grazing incidence angle. In order to reduce the mean attraction of particles to the surface and increase the reflectivity, this surface has narrow ridges.
Reflectivity of ridged atomic mirrors
Various estimates for the efficiency of quantum reflection of waves from a ridged mirror have been discussed in the literature. All of these estimates explicitly use de Broglie's theory of the wave properties of the reflected atoms.
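Since all of these estimates rest on the de Broglie wavelength of the incident atoms, a small sketch of that calculation may help; the atom species and speed below are illustrative assumptions and are not taken from the article.

```python
PLANCK = 6.62607015e-34      # Planck constant, J*s
AMU = 1.66053906660e-27      # atomic mass unit, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """De Broglie wavelength: lambda = h / (m * v)."""
    return PLANCK / (mass_kg * speed_m_s)

# Illustrative: a helium atom (mass ~4 u) moving at 1 m/s has a wavelength of ~0.1 micrometre,
# long enough for quantum reflection at grazing incidence to be appreciable.
print(de_broglie_wavelength(4 * AMU, 1.0))
```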
Scaling of the van der Waals force
The ridges enhance the quantum reflection from the surface by reducing the effective constant of the van der Waals attraction of atoms to the surface. This interpretation leads to an estimate of the reflectivity expressed in terms of the width of the ridges, the distance between ridges (the period), the grazing angle, the wavenumber of the atoms, and the coefficient of reflection of atoms with that wavenumber from a flat surface at normal incidence. The estimate predicts an enhancement of the reflectivity as the period is increased; it is valid only within a restricted range of parameters. See quantum reflection for the approximation (fit) of the flat-surface reflection coefficient.
Interpretation as Zeno effect
For narrow ridges with a large period, the ridges simply block part of the wavefront. The reflection can then be interpreted in terms of the Fresnel diffraction of the de Broglie wave, or the Zeno effect; this interpretation leads to another estimate of the reflectivity, in which the grazing angle is assumed to be small. This estimate predicts an enhancement of the reflectivity as the period is reduced, and it also applies only within a restricted range of parameters.
Fundamental limit
For efficient ridged mirrors, both estimates above should predict high reflectivity. This implies reducing both the width of the ridges and the period. The width of the ridges cannot be smaller than the size of an atom; this sets the limit of performance of ridged mirrors.
Applications of ridged mirrors
Ridged mirrors are not yet commercialized, although certain achievements can be mentioned. The reflectivity of a ridged atomic mirror can be orders of magnitude better than that of a flat surface. The use of a ridged mirror as an atomic hologram has been demonstrated.
In Shimizu's and Fujita's work, atom holography is achieved via electrodes implanted into a silicon nitride film placed over an atomic mirror, or possibly acting as the atomic mirror itself.
Ridged mirrors can also reflect visible light; however, for light waves, the performance is not better than that of a flat surface. An ellipsoidal ridged mirror is proposed as the focusing element for an atomic optical system with submicrometre resolution (atomic nanoscope).
See also
Atomic mirror
Quantum reflection
Atomic nanoscope
Zeno effect
Matter wave
References
Atomic, molecular, and optical physics | Ridged mirror | [
"Physics",
"Chemistry"
] | 577 | [
"Atomic",
" molecular",
" and optical physics"
] |
4,249,871 | https://en.wikipedia.org/wiki/String%20galvanometer | A string galvanometer is a sensitive fast-responding measuring instrument that uses a single fine filament of wire suspended in a strong magnetic field to measure small currents. In use, a strong light source is used to illuminate the fine filament, and the optical system magnifies the movement of the filament allowing it to be observed or recorded by photography.
The principle of the string galvanometer remained in use for electrocardiograms until the advent of electronic vacuum-tube amplifiers in the 1920s.
History
Submarine cable telegraph systems of the late 19th century used a galvanometer to detect pulses of electric current, which could be observed and transcribed into a message. The speed at which pulses could be detected by the galvanometer was limited by its mechanical inertia, and by the inductance of the multi-turn coil used in the instrument. Clément Ader, a French engineer, replaced the coil with a much faster wire or "string", producing the first string galvanometer.
For most telegraphic purposes it was sufficient to detect the existence of a pulse. In 1892 André Blondel described the dynamic properties of an instrument that could measure the wave shape of an electrical impulse, an oscillograph.
Augustus Waller had discovered electrical activity from the heart and produced the first electrocardiogram in 1887, but his equipment was slow. Physiologists worked to find a better instrument. In 1901, Willem Einthoven described the science background and potential utility of a string galvanometer, stating "Mr. Ader has already built an instrument with a wire stretched between the poles of a magnet. It was a telegraph receiver." Einthoven developed a sensitive form of string galvanometer that allowed photographic recording of the impulses associated with the heartbeat. He was a leader in applying the string galvanometer to physiology and medicine, leading to today's electrocardiography. Einthoven was awarded the 1924 Nobel prize in Physiology or Medicine for his work.
Previous to the string galvanometer, scientists were using a machine called the capillary electrometer to measure the heart’s electrical activity, but this device was unable to produce results of a diagnostic level. Willem Einthoven adapted the string galvanometer at Leiden University in the early 20th century, publishing the first registration of its use to record an electrocardiogram in a Festschrift book in 1902. The first human electrocardiogram was recorded in 1887; however, it was not until 1901 that a quantifiable result was obtained from the string galvanometer. In 1908, the physicians Arthur MacNalty, M.D. Oxon, and Thomas Lewis teamed to become the first of their profession to apply electrocardiography in medical diagnosis.
Mechanics
Einthoven's galvanometer consisted of a silver-coated quartz filament of a few centimeters length (see picture on the right) and negligible mass that conducted the electrical currents from the heart. This filament was acted upon by powerful electromagnets positioned either side of it, which caused sideways displacement of the filament in proportion to the current carried due to the electromagnetic field. The movement in the filament was heavily magnified and projected through a thin slot onto a moving photographic plate.
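The proportionality between current and deflection follows from the force on a current-carrying conductor in a magnetic field, F = B·I·L. The sketch below uses hypothetical values for the field strength, current, and filament length purely for illustration; they are not specifications of Einthoven's instrument.

```python
def filament_force(field_t, current_a, length_m):
    """Force (N) on a straight current-carrying filament perpendicular to a magnetic field."""
    return field_t * current_a * length_m

# Hypothetical: a 2 cm filament in a 1 T field carrying 10 microamperes experiences
# a force of only 0.2 micronewtons - hence the need for strong optical magnification.
print(filament_force(1.0, 10e-6, 0.02))
```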
The filament was originally made by drawing out a filament of glass from a crucible of molten glass. To produce a sufficiently thin and long filament an arrow was shot across the room so that it dragged the filament from the molten glass. The filament so produced was then coated with silver to provide the conductive pathway for the current. By tightening or loosening the filament it is possible to very accurately regulate the sensitivity of the galvanometer.
The original machine required water cooling for the powerful electromagnets, required 5 operators and weighed some 600 lb.
Procedure
Patients are seated with both arms and left leg in separate buckets of saline solution. These buckets act as electrodes to conduct the current from the skin's surface to the filament. The three points of electrode contact on these limbs produces what is known as Einthoven's triangle, a principle still used in modern-day ECG recording.
References
Cardiology
Electrophysiology
Galvanometers
Medical tests
Historical scientific instruments
Dutch inventions | String galvanometer | [
"Technology",
"Engineering"
] | 886 | [
"Galvanometers",
"Measuring instruments"
] |
4,250,076 | https://en.wikipedia.org/wiki/Outrigger%20Macintosh | The Outrigger is a style of Apple Macintosh desktop computer case designed for easy access. Outrigger cases were used on the Power Macintosh 7200, 7300, 7500, 7600 and Power Macintosh G3 Desktop computers from August 1995 to December 1998.
The logic board is mounted at the bottom of the case, with drive bays and a power supply in a separate fold-out section that swings aside as one piece and props open. This allows unfettered access to logic board connections such as the memory, central processing unit (CPU), VRAM and drive/power connections without a screwdriver. The PCI slots were located on the left edge of the case and covered only by a plastic shield, making them accessible without lifting the drive bay assembly.
Apple's next Power Macintosh case design as used in the Power Macintosh G3 (Blue & White) would also provide easy user access (although the motherboard and power-supply are significantly more difficult to replace).
References
Macintosh case designs | Outrigger Macintosh | [
"Technology"
] | 202 | [
"Computing stubs",
"Computer hardware stubs"
] |
4,250,298 | https://en.wikipedia.org/wiki/Auxiliary%20field | In physics, and especially quantum field theory, an auxiliary field is one whose equations of motion admit a single solution. Therefore, the Lagrangian describing such a field contains an algebraic quadratic term and an arbitrary linear term, while it contains no kinetic terms (derivatives of the field):
$\mathcal{L}_\text{aux} = \frac{1}{2}(A, A) + (f(\varphi), A),$
where $A$ denotes the auxiliary field and $f(\varphi)$ is an arbitrary function of the other fields $\varphi$.
The equation of motion for $A$ is
$A(\varphi) = -f(\varphi),$
and the Lagrangian becomes
$\mathcal{L}_\text{aux} = -\frac{1}{2}(f(\varphi), f(\varphi)).$
Auxiliary fields generally do not propagate, and hence the content of any theory can remain unchanged in many circumstances by adding such fields by hand.
If we have an initial Lagrangian $\mathcal{L}_0$ describing a field $\varphi$, then the Lagrangian describing both fields is
$\mathcal{L} = \mathcal{L}_0(\varphi) + \mathcal{L}_\text{aux} = \mathcal{L}_0(\varphi) - \frac{1}{2}(f(\varphi), f(\varphi)).$
Therefore, auxiliary fields can be employed to cancel quadratic terms in $\varphi$ in $\mathcal{L}_0$ and linearize the action $\mathcal{S}$.
Examples of auxiliary fields are the complex scalar field F in a chiral superfield, the real scalar field D in a vector superfield, the scalar field B in BRST and the field in the Hubbard–Stratonovich transformation.
The quantum mechanical effect of adding an auxiliary field is the same as the classical, since the path integral over such a field is Gaussian. To wit:
$\int \mathcal{D}A \; e^{-\int d^dx \left[\frac{1}{2}(A, A) + (f(\varphi), A)\right]} \;\propto\; e^{\,\frac{1}{2}\int d^dx \,(f(\varphi), f(\varphi))},$
which reproduces the classical result $\mathcal{L}_\text{aux} \to -\frac{1}{2}(f(\varphi), f(\varphi))$ up to a field-independent normalization.
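A zero-dimensional analogue of this construction can be checked symbolically. The sketch below is an illustrative simplification in which the auxiliary field is a single real variable rather than a field, so the functional integral reduces to an ordinary Gaussian integral.

```python
import sympy as sp

A, f = sp.symbols("A f", real=True)

# Zero-dimensional analogue of L_aux: a quadratic term in A plus a linear coupling to f.
L_aux = sp.Rational(1, 2) * A**2 + f * A

# The equation of motion dL/dA = 0 has the single solution A = -f ...
A_sol = sp.solve(sp.diff(L_aux, A), A)[0]
print(A_sol)                                   # -f

# ... and substituting it back eliminates the auxiliary variable, leaving -f**2/2.
print(sp.simplify(L_aux.subs(A, A_sol)))       # -f**2/2

# Integrating A out as a Gaussian reproduces the same shift up to a constant factor:
# integral of exp(-(A**2/2 + f*A)) dA = sqrt(2*pi) * exp(f**2/2).
print(sp.integrate(sp.exp(-L_aux), (A, -sp.oo, sp.oo)))
```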
See also
Bosonic field
Fermionic field
Composite Field
References
Quantum field theory | Auxiliary field | [
"Physics"
] | 244 | [
"Quantum field theory",
"Quantum mechanics"
] |
4,250,373 | https://en.wikipedia.org/wiki/PocketMail | PocketMail was a very small and inexpensive mobile computer, with a built-in acoustic coupler, developed by PocketScience.
History
PocketMail was developed by the company PocketScience and used technology developed by NASA. It was the first mass-market mobile email service. The hardware cost around US$100 and the service was initially US$9.95 per month for unlimited use. Later the monthly fee increased. After the company made a reference hardware design, leading consumer electronics manufacturers Audiovox, Sharp, JVC, and others made their own PocketMail devices. Later a PocketMail dongle was created for the PalmPilot. PocketMail users were given a custom email address or were able to sync PocketMail with their existing email account (including AOL accounts). Although actually a computer, its main function was email. Its main advantages were that it was simple and that it worked with any phone, even outside the United States. It was a low-cost personal digital assistant (PDA) with an inbuilt acoustic coupler which allowed users to send and receive email while connected to a normal telephone, thus allowing use outside of mobile phone range, or without the need to be signed up with a mobile telephone provider. Popularity of PocketMail peaked around 2000, when the company stopped investing in new technology development.
In Australia, the company known as PocketMail in 2007 stopped marketing the PocketMail service, changed its name to Adavale Resources Limited and now owns uranium mining prospects in Queensland and South Australia.
References
Websites
Dan's Data Review: http://www.dansdata.com/pocketmail.htm
TechCrunch: Nostalgiamatic: The Sharp TM-20 with PocketMail
Government Computer News: With PocketMail, e-mail access is phone call away
JVC's PocketMail offers e-mail without computer or modem
InfoWorld Review of Sharp PocketMail device
Cracked.com's list of "The 5 Most Ridiculously Awful Computers Ever Made
Mobile computers
Modems
Email devices | PocketMail | [
"Technology"
] | 415 | [
"Computing stubs"
] |
4,250,553 | https://en.wikipedia.org/wiki/Gene | In biology, the word gene has two meanings. The Mendelian gene is a basic unit of heredity. The molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes. During gene expression (the synthesis of RNA or protein from a gene), DNA is first copied into RNA. RNA can be directly functional or be the intermediate template for the synthesis of a protein.
The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits from one generation to the next. These genes make up different DNA sequences which, taken together, are called a genotype that is specific to every given individual within the gene pool of the population of a given species. The genotype, along with environmental and developmental factors, ultimately determines the phenotype of the individual.
Most biological traits occur under the combined influence of polygenes (a set of different genes) and gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs, others are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life. A gene can acquire mutations in its sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a gene, which may cause different phenotypical traits. Genes evolve due to natural selection or survival of the fittest and genetic drift of the alleles.
Definitions
There are many different ways to use the term "gene" based on different aspects of their inheritance, selection, biological function, or molecular structure but most of these definitions fall into two categories, the Mendelian gene or the molecular gene.
The Mendelian gene is the classical gene of genetics and it refers to any heritable trait. This is the gene described in The Selfish Gene. More thorough discussions of this version of a gene can be found in the articles Genetics and Gene-centered view of evolution.
The molecular gene definition is more commonly used across biochemistry, molecular biology, and most of genetics—the gene that is described in terms of DNA sequence. There are many different definitions of this gene—some of which are misleading or incorrect.
Very early work in the field that became molecular genetics suggested the concept that one gene makes one protein (originally 'one gene – one enzyme'). However, genes that produce repressor RNAs were proposed in the 1950s and by the 1960s, textbooks were using molecular gene definitions that included those that specified functional RNA molecules such as ribosomal RNA and tRNA (noncoding genes) as well as protein-coding genes.
This idea of two kinds of genes is still part of the definition of a gene in most textbooks.
The important parts of such definitions are: (1) that a gene corresponds to a transcription unit; (2) that genes produce both mRNA and noncoding RNAs; and (3) regulatory sequences control gene expression but are not part of the gene itself. However, there's one other important part of the definition and it is emphasized in Kostas Kampourakis' book Making Sense of Genes.
The emphasis on function is essential because there are stretches of DNA that produce non-functional transcripts and they do not qualify as genes. These include obvious examples such as transcribed pseudogenes as well as less obvious examples such as junk RNA produced as noise due to transcription errors. In order to qualify as a true gene, by this definition, one has to prove that the transcript has a biological function.
Early speculations on the size of a typical gene were based on high-resolution genetic mapping and on the size of proteins and RNA molecules. A length of 1500 base pairs seemed reasonable at the time (1965). This was based on the idea that the gene was the DNA that was directly responsible for production of the functional product. The discovery of introns in the 1970s meant that many eukaryotic genes were much larger than the size of the functional product would imply. Typical mammalian protein-coding genes, for example, are about 62,000 base pairs in length (transcribed region) and since there are about 20,000 of them they occupy about 35–40% of the mammalian genome (including the human genome).
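The quoted figures can be cross-checked with a line of arithmetic; the haploid human genome size of roughly 3.1 billion base pairs used below is a commonly cited value and is an assumption, not a number stated in this article.

```python
protein_coding_genes = 20_000        # approximate count quoted above
mean_gene_length_bp = 62_000         # typical transcribed length quoted above
genome_size_bp = 3.1e9               # assumed haploid human genome size in base pairs

fraction = protein_coding_genes * mean_gene_length_bp / genome_size_bp
print(f"{fraction:.0%}")             # ~40%, consistent with the 35-40% figure
```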
In spite of the fact that both protein-coding genes and noncoding genes have been known for more than 50 years, there are still a number of textbooks, websites, and scientific publications that define a gene as a DNA sequence that specifies a protein. In other words, the definition is restricted to protein-coding genes. One example of this usage appeared in a recent article in American Scientist.
This restricted definition is so common that it has spawned many recent articles that criticize this "standard definition" and call for a new expanded definition that includes noncoding genes. However, some modern writers still do not acknowledge noncoding genes although this so-called "new" definition has been recognised for more than half a century.
Although some definitions can be more broadly applicable than others, the fundamental complexity of biology means that no definition of a gene can capture all aspects perfectly. Not all genomes are DNA (e.g. RNA viruses), bacterial operons are multiple protein-coding regions transcribed into single large mRNAs, alternative splicing enables a single genomic region to encode multiple distinct products, and trans-splicing concatenates mRNAs from shorter coding sequences across the genome. Since molecular definitions exclude elements such as introns, promoters, and other regulatory regions, these are instead thought of as "associated" with the gene and affect its function.
An even broader operational definition is sometimes used to encompass the complexity of these diverse phenomena, where a gene is defined as a union of genomic sequences encoding a coherent set of potentially overlapping functional products. This definition categorizes genes by their functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as gene-associated regions.
History
Discovery of discrete inherited units
The existence of discrete inheritable units was first suggested by Gregor Mendel (1822–1884). From 1857 to 1864, in Brno, Austrian Empire (today's Czech Republic), he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured Wilhelm Johannsen's distinction between genotype (the genetic material of an organism) and phenotype (the observable traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.
Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilization process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, from Greek pan ("all, whole") and genesis ("birth") / genos ("origin"). Darwin used the term gemmule to describe hypothetical particles that would mix during reproduction.
Mendel's work went largely unnoticed after its first publication in 1866, but was rediscovered in the late 19th century by Hugo de Vries, Carl Correns, and Erich von Tschermak, who (claimed to have) reached similar conclusions in their own research. Specifically, in 1889, Hugo de Vries published his book Intracellular Pangenesis, in which he postulated that different characters have individual hereditary carriers and that inheritance of specific traits in organisms comes in particles. De Vries called these units "pangenes" (Pangens in German), after Darwin's 1868 pangenesis theory.
Twenty years later, in 1909, Wilhelm Johannsen introduced the term "gene" (inspired by the ancient Greek: γόνος, gonos, meaning offspring and procreation) and, in 1906, William Bateson, that of "genetics" while Eduard Strasburger, among others, still used the term "pangene" for the fundamental physical and functional unit of heredity.
Discovery of DNA
Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s. The structure of DNA was studied by Rosalind Franklin and Maurice Wilkins using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.
In the early 1950s the prevailing view was that the genes in a chromosome acted like discrete entities arranged like beads on a string. The experiments of Benzer using mutants defective in the rII region of bacteriophage T4 (1955–1959) showed that individual genes have a simple linear structure and are likely to be equivalent to a linear section of DNA.
Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.
In 1972, Walter Fiers and his team were the first to determine the sequence of a gene: that of bacteriophage MS2 coat protein. The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool. An automated version of the Sanger method was used in early phases of the Human Genome Project.
Modern synthesis and its successors
The theories developed in the early 20th century to integrate Mendelian genetics with Darwinian evolution are called the modern synthesis, a term introduced by Julian Huxley.
This view of evolution was emphasized by George C. Williams' gene-centric view of evolution. He proposed that the Mendelian gene is a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency." Related ideas emphasizing the centrality of Mendelian genes and the importance of natural selection in evolution were popularized by Richard Dawkins.
The development of the neutral theory of evolution in the late 1960s led to the recognition that random genetic drift is a major player in evolution and that neutral theory should be the null hypothesis of molecular evolution. This led to the construction of phylogenetic trees and the development of the molecular clock, which is the basis of all dating techniques using DNA sequences. These techniques are not confined to molecular gene sequences but can be used on all DNA segments in the genome.
Molecular basis
DNA
The vast majority of organisms encode their genes in long strands of DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of: a five-carbon sugar (2-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and thymine.
Two chains of DNA twist around each other to form a DNA double helix with the phosphate–sugar backbone spiralling around the outside, and the bases pointing inward with adenine base pairing to thymine and guanine to cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas cytosine and guanine form three hydrogen bonds. The two strands in a double helix must, therefore, be complementary, with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other strand, and so on.
Due to the chemical composition of the pentose residues of the bases, DNA strands have directionality. One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double-helix run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription occurs in the 5'→3' direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.
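Base pairing and strand directionality are easy to express in code. The short sketch below computes the reverse complement of one strand, that is, the sequence of the antiparallel partner strand read in its own 5'→3' direction; the example sequence is invented for illustration.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Sequence of the complementary strand, read in its own 5' to 3' direction."""
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

# The partner of 5'-ATGCGT-3' is 3'-TACGCA-5', which reads ACGCAT in the 5' to 3' direction.
print(reverse_complement("ATGCGT"))   # ACGCAT
```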
The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words" in the genetic "language". The genetic code specifies the correspondence during protein translation between codons and amino acids. The genetic code is nearly the same for all known organisms.
Chromosomes
The total complement of genes in an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists of a single, very long DNA helix on which thousands of genes are encoded. The region of the chromosome at which a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a population may have different alleles at the locus, each with a slightly different gene sequence.
The majority of eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in this way is called chromatin. The manner in which DNA is stored on the histones, as well as chemical modifications of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation of end regions and sorted into daughter cells during cell division: replication origins, telomeres, and the centromere. Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome. Telomeres are long stretches of repetitive sequences that cap the ends of the linear chromosomes and prevent degradation of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome is replicated and has been implicated in the aging process. The centromere is required for binding spindle fibres to separate sister chromatids into daughter cells during cell division.
Prokaryotes (bacteria and archaea) typically store their genomes on a single, large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant circular chromosome with a small number of genes. Prokaryotes sometimes supplement their chromosome with additional small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals. For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between individual cells, even those of different species, via horizontal gene transfer.
Whereas the chromosomes of prokaryotes are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular organisms, including humans, contain an absolute majority of DNA without an identified function. This DNA has often been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may be a misnomer.
Structure and function
Structure
The structure of a protein-coding gene consists of many elements of which the actual protein coding sequence is often only a small part. These include introns and untranslated regions of the mature mRNA. Noncoding genes can also contain introns that are removed during processing to produce the mature functional RNA.
All genes are associated with regulatory sequences that are required for their expression. First, genes require a promoter sequence. The promoter is recognized and bound by transcription factors that recruit and help RNA polymerase bind to the region to initiate transcription. The recognition typically occurs as a consensus sequence like the TATA box. A gene can have more than one promoter, resulting in messenger RNAs (mRNA) that differ in how far they extend at the 5' end. Highly transcribed genes have "strong" promoter sequences that form strong associations with transcription factors, thereby initiating transcription at a high rate. Other genes have "weak" promoters that form weak associations with transcription factors and initiate transcription less frequently. Eukaryotic promoter regions are much more complex and difficult to identify than prokaryotic promoters.
Additionally, genes can have regulatory regions many kilobases upstream or downstream of the gene that alter expression. These act by binding to transcription factors which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to the RNA polymerase binding site. For example, enhancers increase transcription by binding an activator protein which then helps to recruit the RNA polymerase to the promoter; conversely silencers bind repressor proteins and make the DNA less available for RNA polymerase.
The mature messenger RNA produced from protein-coding genes contains untranslated regions at both ends which contain binding sites for ribosomes, RNA-binding proteins, miRNA, as well as terminator, and start and stop codons. In addition, most eukaryotic open reading frames contain untranslated introns, which are removed and exons, which are connected together in a process known as RNA splicing. Finally, the ends of gene transcripts are defined by cleavage and polyadenylation (CPA) sites, where newly produced pre-mRNA gets cleaved and a string of ~200 adenosine monophosphates is added at the 3' end. The poly(A) tail protects mature mRNA from degradation and has other functions, affecting translation, localization, and transport of the transcript from the nucleus. Splicing, followed by CPA, generate the final mature mRNA, which encodes the protein or RNA product.
Many noncoding genes in eukaryotes have different transcription termination mechanisms and they do not have poly(A) tails.
Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as a unit. The genes in an operon are transcribed as a continuous messenger RNA, referred to as a polycistronic mRNA. The term cistron in this context is equivalent to gene. The transcription of an operon's mRNA is often controlled by a repressor that can occur in an active or inactive state depending on the presence of specific metabolites. When active, the repressor binds to a DNA sequence at the beginning of the operon, called the operator region, and represses transcription of the operon; when the repressor is inactive transcription of the operon can occur (see e.g. Lac operon). The products of operon genes typically have related functions and are involved in the same regulatory network.
Complexity
Though many genes have simple structures, as with much of biology, others can be quite complex or represent unusual edge-cases. Eukaryotic genes often have introns that are much larger than their exons, and those introns can even have other genes nested inside them. Associated enhancers may be many kilobases away, or even on entirely different chromosomes, operating via physical contact between two chromosomes. A single gene can encode multiple different functional products by alternative splicing, and conversely a gene may be split across chromosomes but those transcripts are concatenated back together into a functional sequence by trans-splicing. It is also possible for overlapping genes to share some of their DNA sequence, either on opposite strands or the same strand (in a different reading frame, or even the same reading frame).
Gene expression
In all organisms, two steps are required to read the information encoded in a gene's DNA and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA). Second, that mRNA is translated to protein. RNA-coding genes must still go through the first step, but are not translated into protein. The process of producing a biologically functional molecule of either RNA or protein is called gene expression, and the resulting molecule is called a gene product.
Genetic code
The nucleotide sequence of a gene's DNA specifies the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond to a specific amino acid. The principle that three sequential bases of DNA code for each amino acid was demonstrated in 1961 using frameshift mutations in the rIIB gene of bacteriophage T4 (see Crick, Brenner et al. experiment).
Additionally, a "start codon" and three "stop codons" indicate the beginning and end of the protein coding region. There are 64 possible codons (four possible nucleotides at each of three positions, hence 4^3 = 64 possible codons) and only 20 standard amino acids; hence the code is redundant and multiple codons can specify the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living organisms.
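The counting argument in the previous paragraph can be reproduced directly; the snippet below simply enumerates every three-letter combination of the four RNA bases.

```python
from itertools import product

# All possible three-base codons built from the four RNA bases.
codons = ["".join(bases) for bases in product("UCAG", repeat=3)]
print(len(codons), 4 ** 3)          # 64 64

# 61 of these specify amino acids and 3 (UAA, UAG, UGA) are stop codons, so with only
# 20 standard amino acids several codons must map to the same amino acid (redundancy).
```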
Transcription
Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence is complementary to the DNA from which it was transcribed. The mRNA acts as an intermediate between the DNA gene and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand. Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5' direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering the promoter region, either by tight binding by repressor molecules that physically block the polymerase or by organizing the DNA so that the promoter region is not accessible.
In prokaryotes, transcription occurs in the cytoplasm; for very long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported to the cytoplasm for translation. One of the modifications performed is the splicing of introns which are sequences in the transcribed region that do not encode a protein. Alternative splicing mechanisms can result in mature transcripts from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation in eukaryotic cells and also occurs in some prokaryotes.
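A toy transcription step can make the template/coding-strand relationship concrete; the gene fragment below is invented purely for illustration.

```python
def transcribe(template_strand_3to5):
    """Build mRNA 5'->3' by pairing each template base (read 3'->5') with its RNA complement."""
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in template_strand_3to5)

# Hypothetical template strand, written here in the 3'->5' order in which the polymerase reads it.
template = "TACCGGATT"
mrna = transcribe(template)
print(mrna)  # AUGGCCUAA - identical to the coding strand (ATGGCCTAA) except that U replaces T
```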
Translation
Translation is the process by which a mature mRNA molecule is used as a template for synthesizing a new protein. Translation is carried out by ribosomes, large complexes of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons, via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand, the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus to carboxyl terminus. During and after synthesis, most new proteins must fold to their active three-dimensional structure before they can carry out their cellular functions.
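A heavily abridged codon table is enough to sketch the translation step. Only the handful of codons needed for the invented example sequence are included below, so this is not a complete genetic code, just an illustration of reading codons until a stop is reached.

```python
# Illustrative subset of the standard genetic code (one-letter amino acid symbols).
CODON_TABLE = {
    "AUG": "M",   # start codon / methionine
    "GCC": "A",   # alanine
    "UUU": "F",   # phenylalanine
    "AAA": "K",   # lysine
    "UAA": None,  # stop codon
}

def translate(mrna):
    """Read the open reading frame codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:          # stop codon: release the polypeptide
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("AUGGCCUUUAAAUAA"))     # MAFK
```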
Regulation
Genes are regulated so that they are expressed only when the product is needed, since expression draws on limited resources. A cell regulates its gene expression depending on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment (e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such mechanism to be described in 1961.
RNA genes
A typical protein-coding gene is first copied into RNA as an intermediate in the manufacture of the final protein product. In other cases, the RNA molecules are the actual functional products, as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function, while others such as microRNAs and riboswitches have regulatory roles. The DNA sequences from which such RNAs are transcribed are known as non-coding RNA genes.
Some viruses store their entire genomes in the form of RNA, and contain no DNA at all. Because they use RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected and without the delay in waiting for transcription. On the other hand, RNA retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized.
Inheritance
Organisms inherit their genes from their parents. Asexual organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome because they inherit one complete set from each parent.
Mendelian inheritance
According to Mendelian inheritance, variations in an organism's phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular set of genes). Each gene specifies a particular trait with a different sequence of a gene (alleles) giving rise to different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each trait, one inherited from each parent.
Alleles at a locus may be dominant or recessive; dominant alleles give rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles give rise to their corresponding phenotype only when paired with another copy of the same allele. Knowing the genotypes of organisms, together with the phenotypes they produce, makes it possible to determine which alleles are dominant and which are recessive. For example, if the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems. Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined by single genes (including a number of well-known genetic disorders) it does not include the physical processes of DNA replication and cell division.
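Independent assortment with dominant and recessive alleles can be illustrated with a small simulation of a dihybrid cross between two heterozygous (AaBb) parents; with unlinked genes the four phenotype classes approach the classic 9:3:3:1 ratio. The allele names and sample size are arbitrary.

```python
import random
from collections import Counter

def gamete(parent):
    """Independent assortment: each gamete receives one randomly chosen allele per gene."""
    return [random.choice(alleles) for alleles in parent]

parent = [("A", "a"), ("B", "b")]              # heterozygous at both (unlinked) loci
counts = Counter()
for _ in range(160_000):
    child = list(zip(gamete(parent), gamete(parent)))   # one allele pair per gene
    phenotype = (
        "A_" if "A" in child[0] else "aa",     # dominant phenotype needs at least one 'A'
        "B_" if "B" in child[1] else "bb",
    )
    counts[phenotype] += 1

for phenotype, n in sorted(counts.items()):
    print(phenotype, round(16 * n / 160_000, 2))   # approaches 9 : 3 : 3 : 1
```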
DNA replication and cell division
The growth, development, and reproduction of organisms relies on cell division; the process by which a single cell divides into two usually identical daughter cells. This requires first making a duplicate copy of every gene in the genome in a process called DNA replication. The copies are made by specialized enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand, and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome inherited by each daughter cell contains one original and one newly synthesized strand of DNA.
The rate of DNA replication in living cells was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli and found to be impressively rapid. During the period of exponential DNA increase at 37 °C, the rate of elongation was 749 nucleotides per second.
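As a rough worked example of what that elongation rate implies, the genome length used below (about 169 kilobase pairs for phage T4) is a commonly quoted figure rather than one given in the text, and the calculation assumes a single replication fork.

```python
ELONGATION_RATE = 749        # nucleotides per second, as measured above
T4_GENOME_BP = 169_000       # assumed approximate phage T4 genome length in base pairs

seconds = T4_GENOME_BP / ELONGATION_RATE
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min) for a single fork to copy the genome")
```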
After DNA replication, the cell must physically separate the two genome copies and divide into two distinct membrane-bound cells. In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating chromosomes and splitting the cytoplasm occurs during M phase.
Molecular inheritance
The duplication and transmission of genetic material from one generation of cells to the next is the basis for molecular inheritance and the link between the classical and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy of each gene. The gametes produced by females are called eggs or ova, and those produced by males are called sperm. Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy of each gene from the mother and one from the father.
During the process of meiotic cell division, an event called genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped with a length of DNA on the corresponding homologous non-sister chromatid. This can result in reassortment of otherwise linked alleles. The Mendelian principle of independent assortment asserts that each of a parent's two genes for each trait will sort independently into gametes; which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This is in fact only true for genes that do not reside on the same chromosome or are located very far from one another on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated in gametes and the more often they will appear together (known as genetic linkage). Genes that are very close are essentially never separated because it is extremely unlikely that a crossover point will occur between them.
Genome
The genome is the total genetic material of an organism and includes both the genes and non-coding sequences. Eukaryotic genes can be annotated using FINDER.
Number of genes
The genome size, and the number of genes it encodes varies widely between organisms. The smallest genomes occur in viruses, and viroids (which act as a single non-coding RNA gene). Conversely, plants can have extremely large genomes, with rice containing >46,000 protein-coding genes. The total number of protein-coding genes (the Earth's proteome) is estimated to be 5 million sequences.
Although the number of base-pairs of DNA in the human genome has been known since the 1950s, the estimated number of genes has changed over time as definitions of genes, and methods of detecting them have been refined. Initial theoretical predictions of the number of human genes in the 1960s and 1970s were based on mutation load estimates and the numbers of mRNAs and these estimates tended to be about 30,000 protein-coding genes. During the 1990s there were guesstimates of up to 100,000 genes and early data on detection of mRNAs (expressed sequence tags) suggested more than the traditional value of 30,000 genes that had been reported in the textbooks during the 1980s.
The initial draft sequences of the human genome confirmed the earlier predictions of about 30,000 protein-coding genes; however, that estimate has fallen to about 19,000 with the ongoing GENCODE annotation project. The number of noncoding genes is not known with certainty, but the latest estimates from Ensembl suggest 26,000 noncoding genes.
Essential genes
Essential genes are the set of genes thought to be critical for an organism's survival. This definition assumes the abundant availability of all relevant nutrients and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria, an estimated 250–400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their genes. Half of these genes are orthologs in both organisms and are largely involved in protein synthesis. In the budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of their genes). Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have around 2000 essential genes (~10% of their genes). The synthetic organism, Syn 3, has a minimal genome of 473 essential genes and quasi-essential genes (necessary for fast growth), although 149 have unknown function.
Essential genes include housekeeping genes (critical for basic cell functions) as well as genes that are expressed at different times in the organism's development or life cycle. Housekeeping genes are used as experimental controls when analysing gene expression, since they are constitutively expressed at a relatively constant level.
Genetic and genomic nomenclature
Gene nomenclature was established by the HUGO Gene Nomenclature Committee (HGNC), a committee of the Human Genome Organisation, for each known human gene in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs in other species, particularly the mouse due to its role as a common model organism.
Genetic engineering
Genetic engineering is the modification of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically add, remove and edit genes in an organism. Recently developed genome engineering techniques use engineered nuclease enzymes to create targeted DNA repair in a chromosome to either disrupt or edit a gene when the break is repaired. The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism.
Genetic engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function. Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine.
For multicellular organisms, typically the embryo is engineered which grows into the adult genetically modified organism. However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases.
See also
References
Citations
Sources
Main textbook
– A molecular biology textbook available free online through NCBI Bookshelf.
Glossary
Ch 1: Cells and genomes
1.1: The Universal Features of Cells on Earth
Ch 2: Cell Chemistry and Biosynthesis
2.1: The Chemical Components of a Cell
Ch 3: Proteins
Ch 4: DNA and Chromosomes
4.1: The Structure and Function of DNA
4.2: Chromosomal DNA and Its Packaging in the Chromatin Fiber
Ch 5: DNA Replication, Repair, and Recombination
5.2: DNA Replication Mechanisms
5.4: DNA Repair
5.5: General Recombination
Ch 6: How Cells Read the Genome: From DNA to Protein
6.1: DNA to RNA
6.2: RNA to Protein
Ch 7: Control of Gene Expression
7.1: An Overview of Gene Control
7.2: DNA-Binding Motifs in Gene Regulatory Proteins
7.3: How Genetic Switches Work
7.5: Posttranscriptional Controls
7.6: How Genomes Evolve
Ch 14: Energy Conversion: Mitochondria and Chloroplasts
14.4: The Genetic Systems of Mitochondria and Plastids
Ch 18: The Mechanics of Cell Division
18.1: An Overview of M Phase
18.2: Mitosis
Ch 20: Germ Cells and Fertilization
20.2: Meiosis
Further reading
External links
Comparative Toxicogenomics Database
DNA From The Beginning – a primer on genes and DNA
Gene – a searchable database of genes
Genes – an Open Access journal
IDconverter – converts gene IDs between public databases
iHOP – Information Hyperlinked over Proteins
TranscriptomeBrowser – Gene expression profile analysis
The Protein Naming Utility, a database to identify and correct deficient gene names
IMPC (International Mouse Phenotyping Consortium) – Encyclopedia of mammalian gene function
Global Genes Project – Leading non-profit organization supporting people living with genetic diseases
Encode threads explorer, Nature
Characterization of intergenic regions and gene definition, Nature
Cloning
Molecular biology
Wikipedia articles with sections published in WikiJournal of Medicine | Gene | [
"Chemistry",
"Engineering",
"Biology"
] | 7,934 | [
"Cloning",
"Biochemistry",
"Genetic engineering",
"Molecular biology"
] |
4,250,783 | https://en.wikipedia.org/wiki/Persona%20%28user%20experience%29 | A persona (also user persona, user personality, customer persona, buyer persona) in user-centered design and marketing is a personalized fictional character created to represent a potential end user. Personas represent the similarities of consumer groups or segments. They are based on demographic and behavioural personal information collected from users, qualitative interviews, and participant observation. Personas are one of the outcomes of market segmentation, where marketers use the results of statistical analysis and qualitative observations to draw profiles, giving them names and personalities to paint a picture of a person that could exist in real life. The term persona is used widely in online and technology applications as well as in advertising, where other terms such as pen portraits may also be used.
Personas are useful in considering the goals, desires, and limitations of brand buyers and users in order to help to guide decisions about a service, product or interaction space such as features, interactions, and visual design of a website. Personas may be used as a tool during the user-centered design process for designing software. They can introduce interaction design principles to things like industrial design and online marketing.
A user persona is a representation of the goals and behavior of a hypothesized group of users. In most cases, personas are synthesized from data collected from interviews or surveys with users. They are captured in short page descriptions that include behavioral patterns, goals, skills, and attitudes, along with a few fictional personal details to make the persona a realistic character. In addition to Human-Computer Interaction (HCI), personas are also widely used in sales, advertising, marketing, and system design. Personas provide common behaviors, outlooks, and potential objections of people matching a given persona.
History
Within software design, Alan Cooper, a noted pioneer software developer, proposed the concept of a user persona. Beginning in 1983, he started using a prototype of what the persona would become using data from informal interviews with seven to eight users. From 1995, he became engaged with how a specific rather than generalized user would use and interface with the software. The technique was popularized for the online business and technology community in his 1999 book The Inmates are Running the Asylum. In this book, Cooper outlines the general characteristics, uses and best practices for creating personas, recommending that software be designed for single archetypal users.
The concept of understanding customer segments as communities with coherent identity was developed in 1993-4 by Angus Jenkinson and internationally adopted by OgilvyOne with clients using the name CustomerPrints as "day-in-the-life archetype descriptions". Creating imaginal or fictional characters to represent these customer segments or communities followed. Jenkinson's approach was to describe an imaginal character in their real interface, behavior and attitudes with the brand, and the idea was initially realized with Michael Jacobs in a series of studies. In 1997 the Ogilvy global knowledge management system, Truffles, described the concept as follows: "Each strong brand has a tribe of people who share affinity with the brand’s values. This universe typically divides into a number of different communities within which there are the same or very similar buying behaviours, and whose personality and characteristics towards the brand (product or service) can be understood in terms of common values, attitudes and assumptions. CustomerPrints are descriptions that capture the living essence of these distinct groups of customers."
Benefits and features
According to Pruitt and Adlin, the use of personas offers several benefits in product development. Personas are said to be cognitively compelling because they put a personal human face on otherwise abstract data about customers. By thinking about the needs of a fictional persona, designers may be better able to infer what a real person might need. Such inference may assist with brainstorming, use case specification, and features definition. Pruitt and Adlin argue personas are easy to communicate to engineering teams and thus allow engineers, developers, and others to absorb customer data in a palatable format. They present several examples of personas used for purposes of communication in various development projects.
Personas also help prevent some common design pitfalls. The first is designing for what Cooper calls "The Elastic User", by which he means that while making product decisions different stakeholders may define the 'user' according to their convenience. Defining personas helps the team have a shared understanding of the real users in terms of their goals, capabilities, and contexts. Personas help prevent "self-referential design" when the designer or developer may unconsciously project their own mental models on the product design which may be very different from that of the target user population. Personas also provide a reality check by helping designers keep the focus of the design on cases that are most likely to be encountered for the target users and not on edge cases which usually will not happen for the target population. According to Cooper, edge cases which should naturally be handled properly should not become the design focus.
The persona benefits are summarized as follows:
Shared Understanding: Help team members develop a consistent view of target audience groups, making data more relatable through coherent stories.
Guided Design Decisions: Allow teams to prioritize features based on how well they meet the needs of specific personas.
Empathy Building: Provide a human face to data, fostering empathy for users represented by the personas.
Focused Design: Prevent designers from making self-referential decisions by keeping the focus on user needs.
While features will vary based on project needs, all personas will capture the essence of an actual potential user.
Common features include:
Fake name and profile picture
Basic demographics (age, race, gender, education, marital status, preferred language, etc.)
Biography containing personal interests, professional goals, and any other relevant information designers should know
A summarizing quote
Technology use
Disabilities, accessibility needs, or challenges
Opinions and beliefs
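The features above amount to a small structured record. A minimal sketch of how such a profile might be stored is shown below; the struct layout, field names, and sample values are illustrative assumptions rather than any standard persona schema.

#include <stdio.h>

/* A minimal persona record; the fields mirror the feature list above.
   Field names and the sample values are illustrative assumptions only. */
struct persona {
    const char *name;           /* fake name */
    int age;                    /* basic demographics */
    const char *occupation;
    const char *biography;      /* interests, goals, other relevant context */
    const char *quote;          /* summarizing quote */
    const char *technology_use;
    const char *accessibility;  /* disabilities or accessibility needs */
};

int main(void)
{
    struct persona p = {
        .name = "Dana the Dispatcher",
        .age = 42,
        .occupation = "Logistics dispatcher",
        .biography = "Coordinates deliveries; values speed over configurability.",
        .quote = "I just need to see every late truck at a glance.",
        .technology_use = "Desktop browser; rarely installs new software.",
        .accessibility = "Mild colour-vision deficiency."
    };
    printf("%s, %d: \"%s\"\n", p.name, p.age, p.quote);
    return 0;
}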
Criticism
Criticism of personas falls into three general categories: analysis of the underlying logic, concerns about practical implementation, and empirical results.
In terms of scientific logic, it has been argued that because personas are fictional, they have no clear relationship to real customer data and therefore cannot be considered scientific. Chapman and Milham described the purported flaws in considering personas as a scientific research method. They argued that there is no procedure to work reliably from given data to specific personas, and thus such a process is not subject to the scientific method of reproducible research.
Other critics argue that personas can be reductive or stereotypic, leading to a false sense of confidence in an organization's knowledge about its users. Critics like Steve Portigal argue that personas' "appeal comes from the seduction of a sanitized form of reality," where customer data is continuously reduced and abstracted until it is nothing more than a stereotype. Critics claim that persona creation puts the onus on designers, marketers, and user researchers to capture multiple peoples' opinions and views into predefined segments, which could introduce personal bias into the interpretation.
Additionally, personas often feature gendered and racial depictions, which some argue are unnecessary, distract the target audience of the personas from true consumer behaviors, and only reinforce biased viewpoints. Finally, it is worth acknowledging that proto-personas and personas are often treated as the same resource; however, proto-personas are a generative tool used to identify a team's assumptions about their target users. Personas, on the other hand, should be rooted in customer data and research, and be used as a way to coalesce insights about particular segments.
Scientific research
In empirical results, the research to date has offered soft metrics for the success of personas, such as anecdotal feedback from stakeholders. Rönkkö has described how team politics and other organizational issues led to limitations of the personas method in one set of projects. Chapman, Love, Milham, Elrif, and Alford have demonstrated with survey data that descriptions with more than a few attributes (e.g., such as a persona) are likely to describe very few if any real people. They argued that personas cannot be assumed to be descriptive of actual customers.
A study conducted by Long (2009) assessed the effectiveness of personas in design education. The study found that students who used personas produced designs with better usability attributes and reported improved communication within design teams. In a partially controlled study, a group of students were asked to solve a design brief; two groups used personas while one group did not. The students who used personas were awarded higher course evaluations than the group who did not. Students who used personas were assessed as having produced designs with better usability attributes than students who did not use personas. The study also suggests that using personas may improve communication between design teams and facilitate user-focused design discussion. The study had several limitations: outcomes were assessed by a professor and students who were not blind to the hypothesis, students were assigned to groups in a non-random fashion, the findings were not replicated, and other contributing factors or expectation effects (e.g., the Hawthorne effect or Pygmalion effect) were not controlled for.
Data-driven personas
Data-driven personas (sometimes also called quantitative personas) have been suggested by McGinn and Kotamraju. These personas are claimed to address the shortcomings of qualitative persona generation (see Criticism). Academic scholars have proposed several methods for data-driven persona development, such as clustering, factor analysis, principal component analysis, latent semantic analysis, and non-negative matrix factorization. These methods generally take numerical input data, reduce its dimensionality, and output higher level abstractions (e.g., clusters, components, factors) that describe the patterns in the data. These patterns are typically interpreted as "skeletal" personas, and enriched with personified information (e.g., name, portrait picture). Quantitative personas can also be enriched with qualitative insights to generate mixed method personas (also called hybrid personas).
See also
Behavioral targeting
Dave and Sue
Digital identity
Online identity
Online identity management
Personalization
Personalized marketing
Personal information
Personal identity
Scenario (computing)
Social profiling
Use case
User profile
References
Bibliography
Human–computer interaction
Usability
Technical communication
Market segmentation
Marketing
Identity management | Persona (user experience) | [
"Engineering"
] | 2,104 | [
"Human–computer interaction",
"Human–machine interaction"
] |
4,251,102 | https://en.wikipedia.org/wiki/Three-point%20flexural%20test | The three-point bending flexural test provides values for the modulus of elasticity
in bending E_f, flexural stress σ_f, flexural strain ε_f, and the flexural stress–strain response of the material. This test is performed on a universal testing machine (tensile testing machine or tensile tester) with a three-point or four-point bend fixture. The main advantage of a three-point flexural test is the ease of the specimen preparation and testing. However, this method also has some disadvantages: the results of the testing method are sensitive to specimen and loading geometry and strain rate.
Testing method
The test method for conducting the test usually involves a specified test fixture on a universal testing machine. Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart.
Calculation of the flexural stress
σ_f = 3FL / (2bd^2) for a rectangular cross section
σ_f = FL / (πR^3) for a circular cross section
Calculation of the flexural strain
ε_f = 6Dd / L^2
Calculation of flexural modulus
E_f = L^3 m / (4bd^3)
In these formulas the following parameters are used:
σ_f = modulus of rupture, the stress required to fracture the sample (MPa)
ε_f = strain in the outer surface (mm/mm)
E_f = flexural modulus of elasticity (MPa)
F = load at a given point on the load deflection curve (N)
L = support span (mm)
b = width of test beam (mm)
d = depth or thickness of tested beam (mm)
D = maximum deflection of the center of the beam (mm)
m = the gradient (i.e., slope) of the initial straight-line portion of the load deflection curve (N/mm)
R = the radius of the beam (mm)
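As a quick numerical illustration of the formulas above, the sketch below computes flexural stress, strain, and modulus for a rectangular specimen. The specimen dimensions and load-deflection values are made-up example numbers chosen only to exercise the equations, not data from any standard.

#include <stdio.h>

int main(void)
{
    /* Example (assumed) test values for a rectangular beam */
    double F = 500.0;   /* load at a point on the load deflection curve, N */
    double L = 64.0;    /* support span, mm */
    double b = 12.7;    /* specimen width, mm */
    double d = 3.2;     /* specimen depth (thickness), mm */
    double D = 1.5;     /* mid-span deflection at load F, mm */
    double m = 400.0;   /* slope of the initial straight-line portion, N/mm */

    double stress  = 3.0 * F * L / (2.0 * b * d * d);          /* sigma_f, MPa */
    double strain  = 6.0 * D * d / (L * L);                    /* epsilon_f, mm/mm */
    double modulus = (L * L * L) * m / (4.0 * b * d * d * d);  /* E_f, MPa */

    printf("flexural stress  = %.1f MPa\n", stress);
    printf("flexural strain  = %.4f\n", strain);
    printf("flexural modulus = %.0f MPa\n", modulus);
    return 0;
}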
Fracture toughness testing
The fracture toughness of a specimen can also be determined using a three-point flexural test. The stress intensity factor K_I at the crack tip of a single edge notch bending specimen is a function of the applied load P, the thickness of the specimen B, the crack length a, and the width of the specimen W. In a three-point bend test, a fatigue crack is created at the tip of the notch by cyclic loading. The length of the crack is measured. The specimen is then loaded monotonically. A plot of the load versus the crack opening displacement is used to determine the load at which the crack starts growing. This load is substituted into the stress intensity relation to find the fracture toughness K_IC.
The ASTM D5045-14 and E1290-08 standards suggest a closely related expression for the stress intensity factor. The predicted values of K_I are nearly identical for the ASTM and Bower equations for normalized crack lengths a/W less than 0.6.
Standards
ISO 12135: Metallic materials. Unified method for the determination of quasi-static fracture toughness.
ISO 12737: Metallic materials. Determination of plane-strain fracture toughness.
ISO 178: Plastics—Determination of flexural properties.
ASTM C293: Standard Test Method for Flexural Strength of Concrete (Using Simple Beam With Center-Point Loading).
ASTM D790: Standard test methods for flexural properties of unreinforced and reinforced plastics and electrical insulating materials.
ASTM E1290: Standard Test Method for Crack-Tip Opening Displacement (CTOD) Fracture Toughness Measurement.
ASTM D7264: Standard Test Method for Flexural Properties of Polymer Matrix Composite Materials.
ASTM D5045: Standard Test Methods for Plane-Strain Fracture Toughness and Strain Energy Release Rate of Plastic Materials.
See also
References
Materials testing
Mechanics | Three-point flexural test | [
"Physics",
"Materials_science",
"Engineering"
] | 715 | [
"Materials testing",
"Mechanics",
"Materials science",
"Mechanical engineering"
] |
4,251,329 | https://en.wikipedia.org/wiki/Bush%20%28brand%29 | Bush is a British consumer electronics brand owned by J Sainsbury plc (Sainsbury's), the parent company of the retailer Argos. The former Bush company is one of the most famous manufacturers of early British radios. The company is now defunct, but the Bush brand name survives as a private label brand for budget electronics. Today, all Bush products are sold exclusively at Argos and Sainsbury's, with Argos having a wider selection.
History
Original Bush company
The company was founded in 1932 as Bush Radio from the remains of the Graham Amplion company, which had made horn loudspeakers as a subsidiary of the Gaumont British Picture Corporation. The brand name comes from Gaumont's Shepherd's Bush studios. The company expanded rapidly moving to a new factory at Power Road, Chiswick in 1936.
Bush became part of the Rank empire in 1945 and a brand new factory was opened at Ernesettle, Plymouth in 1949. In 1946 the DAC90, designed by Frank Middleditch, featured in the V&A exhibition Britain Can Make It. The original model in black became very popular and was succeeded by the DAC90A in other colours, and export models with dials in different languages. In 1950 the DAC10 radios were launched, along with the distinctive TV22 television.
The Bush TR82 transistor radio, designed by Ogle Design, and launched in 1959, is regarded as an icon of early radio design. Although the first radio to use the Ogle cabinet design was actually the MB60, a battery/mains valve set from 1957 to 1959.
The original Bush Radio company merged with Murphy Radio on 4 June 1962, and a new company was formed called Rank Bush Murphy Ltd. In 1978, Rank Bush Murphy was sold to British conglomerate Great Universal Stores.
Rank formed a joint venture with Toshiba in 1978 called Rank Toshiba, and manufactured Toshiba designed televisions in Ernesettle UK. In 1980 Rank terminated its agreement with Toshiba and the joint company was wound up. Toshiba took over the UK factory and continued to manufacture television sets alone.
Purchase in the 1980s
The Bush brand name disappeared from the British market during the 1980s. However, since the purchase of the brand by Alba Group in 1986 (now known as Harvard International), it once again became common, being used primarily on electronic goods produced in China and televisions made in Turkey.
Sale to Home Retail Group
In November 2008, the Bush brand name, along with Alba, was purchased by Home Retail Group, the parent company of Homebase and Argos, for £15.25 million. As a result, the former Alba Group has now been renamed Harvard International. Harvard International still owns the Bush brand in Oceania. In 2013 a 7-inch tablet called MyTablet was released under the Bush brand; it cost £99.99.
Purchase by Sainsbury's
In September 2016, the British supermarket chain Sainsbury's completed its acquisition of Home Retail Group, bringing Argos, along with the Alba and Bush brands, under its ownership.
In 2022 the Bush brand replaced the Alba brand, enabling Sainsbury's to have one main own brand electronics brand.
Product range
Current products
Bush now has products including: televisions, boomboxes, shelf stereos, set-top boxes, washing machines, radios, trimmers, headsets, headphones, ovens, cookers, fridges, computer mice, webcams, microphones, turntables, DVD players, Blu-ray players, home cinemas, MP3 players, MP4 players, dishwashers, vacuum cleaners, camcorders, smartphones and tablets.
At least some of the Bush TV sets and some white goods are made by Turkish company Vestel.
Reproductions of classic Bush Radio models from the 1950s and 1960s are also being sold today under the Bush brand. Some of these units also include DAB tuners.
Former products
A range known as Bush iD was used to brand items such as digital radios and set-top boxes under the Bush name since the 1980s. Since the purchase by Home Retail Group in 2008, the Bush iD branding is no longer used.
Marketing and branding
Bush has had a number of different logos over the years. It had a long-standing one between the 1990s and 2014, and a separate one for its former Bush iD range.
Gallery
References
Sources
Radio and television brands Alba and Bush sold to Argos owner Home Retail Group
Home Retail Group signs contract for Intellectual Property Rights to Bush and Alba Trademarks
Bush Freesat Website
External links
Bush at Argos.co.uk
Article on vintage Bush radios
1950 Bush television https://web.archive.org/web/20151004114901/http://www.sciencemuseum.org.uk/objects/radio_communication/1971-76.aspx
Traditional Bush Radios for sale
Bush Australia
Electronics companies of the United Kingdom
Electronics companies established in 1932
Consumer electronics brands
Audio equipment manufacturers of the United Kingdom
Headphones manufacturers
British brands
Companies based in Milton Keynes
Display technology companies
1932 establishments in England
Radio manufacturers
Sainsbury's | Bush (brand) | [
"Engineering"
] | 1,053 | [
"Radio electronics",
"Radio manufacturers"
] |
4,251,626 | https://en.wikipedia.org/wiki/Sodium%20ascorbate | Sodium ascorbate is one of a number of mineral salts of ascorbic acid (vitamin C). The molecular formula of this chemical compound is C6H7NaO6. As the sodium salt of ascorbic acid, it is known as a mineral ascorbate. It has not been demonstrated to be more bioavailable than any other form of vitamin C supplement.
Sodium ascorbate normally provides 131 mg of sodium per 1,000 mg of ascorbic acid (1,000 mg of sodium ascorbate contains 889 mg of ascorbic acid and 111 mg of sodium).
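These figures follow from the molar masses involved; the short sketch below reproduces the arithmetic using approximate standard molar masses (rounding accounts for any small differences from the quoted values).

#include <stdio.h>

int main(void)
{
    /* Approximate molar masses, g/mol */
    double m_ascorbic_acid    = 176.12;  /* C6H8O6 */
    double m_sodium_ascorbate = 198.11;  /* C6H7NaO6 */
    double m_sodium           = 22.99;

    /* mg of ascorbic acid equivalent per 1000 mg of sodium ascorbate */
    printf("ascorbic acid per 1000 mg salt: %.0f mg\n",
           1000.0 * m_ascorbic_acid / m_sodium_ascorbate);  /* ~889 mg */

    /* mg of sodium supplied per 1000 mg of ascorbic acid delivered */
    printf("sodium per 1000 mg ascorbic acid: %.0f mg\n",
           1000.0 * m_sodium / m_ascorbic_acid);            /* ~131 mg */
    return 0;
}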
As a food additive, it has the E number E301 and is used as an antioxidant and an acidity regulator. It is approved for use as a food additive in the EU, USA, Australia, and New Zealand.
In in vitro studies, sodium ascorbate has been found to produce cytotoxic effects in various malignant cell lines, which include melanoma cells that are particularly susceptible.
Production
Sodium ascorbate is produced by dissolving ascorbic acid in water and adding an equivalent amount of sodium hydroxide or sodium bicarbonate in water. After cessation of effervescence, the sodium ascorbate is precipitated by the addition of isopropanol.
References
External links
The Bioavailability of Different Forms of Vitamin C
Ascorbates
Food additives
Organic sodium salts
Vitamers
Vitamin C
E-number additives | Sodium ascorbate | [
"Chemistry"
] | 310 | [
"Organic sodium salts",
"Salts"
] |
4,251,643 | https://en.wikipedia.org/wiki/Potassium%20ascorbate | Potassium ascorbate is a compound with formula KC6H7O6. It is the potassium salt of ascorbic acid (vitamin C) and a mineral ascorbate. As a food additive, it has E number E303, INS number 303. Although it is not a permitted food additive in the UK, USA and the EU, it is approved for use in Australia and New Zealand. According to some studies, it has shown a strong antioxidant activity and antitumoral properties.
References
Ascorbates
Potassium compounds
Food additives
E-number additives | Potassium ascorbate | [
"Chemistry"
] | 121 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
4,251,950 | https://en.wikipedia.org/wiki/Disjunctive%20sequence | A disjunctive sequence is an infinite sequence of characters drawn from a finite alphabet, in which every finite string appears as a substring. For instance, the binary Champernowne sequence
formed by concatenating all binary strings in shortlex order, beginning 0 1 00 01 10 11 000 001 010 011 100 101 110 111 0000 ..., clearly contains all the binary strings and so is disjunctive. (The spaces are not significant and are present solely to make clear the boundaries between strings.) The complexity function of a disjunctive sequence S over an alphabet of size k is p_S(n) = k^n.
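A brute-force check of this property is sketched below: it builds a prefix of the binary Champernowne sequence by concatenating strings in shortlex order and verifies that every binary string up to a chosen length occurs as a substring. The cut-off lengths (10 and 8) are arbitrary choices for the demonstration.

#include <stdio.h>
#include <string.h>

#define MAXSEQ 20000   /* enough for all binary strings of length 1..10 */

int main(void)
{
    static char seq[MAXSEQ + 1];
    int pos = 0;

    /* Build a prefix of the binary Champernowne sequence: concatenate all
       binary strings of length 1..10 in shortlex order. */
    for (int len = 1; len <= 10; len++) {
        for (long v = 0; v < (1L << len); v++) {
            for (int i = len - 1; i >= 0; i--)
                seq[pos++] = ((v >> i) & 1) ? '1' : '0';
        }
    }
    seq[pos] = '\0';

    /* Check that every binary string of length 1..8 occurs as a substring. */
    for (int len = 1; len <= 8; len++) {
        for (long v = 0; v < (1L << len); v++) {
            char w[9];
            for (int i = 0; i < len; i++)
                w[i] = ((v >> (len - 1 - i)) & 1) ? '1' : '0';
            w[len] = '\0';
            if (strstr(seq, w) == NULL) {
                printf("missing: %s\n", w);
                return 1;
            }
        }
    }
    printf("all binary strings of length <= 8 occur in the first %d digits\n", pos);
    return 0;
}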
Any normal sequence (a sequence in which each string of equal length appears with equal frequency) is disjunctive, but the converse is not true. For example, letting 0^n denote the string of length n consisting of all 0s, consider the sequence
obtained by splicing exponentially long strings of 0s into the shortlex ordering of all binary strings. Most of this sequence consists of long runs of 0s, and so it is not normal, but it is still disjunctive.
A disjunctive sequence is recurrent but never uniformly recurrent/almost periodic.
Examples
The following result can be used to generate a variety of disjunctive sequences:
If a_1, a_2, a_3, ... is a strictly increasing infinite sequence of positive integers such that lim_{n→∞} (a_{n+1} / a_n) = 1,
then for any positive integer m and any integer base b ≥ 2, there is an a_n whose expression in base b starts with the expression of m in base b.
(Consequently, the infinite sequence obtained by concatenating the base-b expressions for a_1, a_2, a_3, ... is disjunctive over the alphabet {0, 1, ..., b-1}.)
Two simple cases illustrate this result:
a_n = n^k, where k is a fixed positive integer. (In this case, lim_{n→∞} (a_{n+1} / a_n) = lim_{n→∞} ((n+1)^k / n^k) = lim_{n→∞} (1 + 1/n)^k = 1.)
E.g., using base-ten expressions, the sequences
123456789101112... (k = 1, positive natural numbers),
1491625364964... (k = 2, squares),
182764125216343... (k = 3, cubes),
etc.,
are disjunctive on {0,1,2,3,4,5,6,7,8,9}.
a_n = p_n, where p_n is the nth prime number. (In this case, lim_{n→∞} (a_{n+1} / a_n) = 1 is a consequence of p_n ~ n ln n.)
E.g., the sequences
23571113171923... (using base ten),
10111011111011110110001 ... (using base two),
etc.,
are disjunctive on the respective digit sets.
Another result that provides a variety of disjunctive sequences is as follows:
If a_n = ⌊f(n)⌋ (the floor of f(n)), where f is any non-constant polynomial with real coefficients such that f(x) > 0 for all x > 0,
then the concatenation a_1 a_2 a_3 ... (with each a_n expressed in base b) is a normal sequence in base b, and is therefore disjunctive on {0, 1, ..., b-1}.
E.g., using base-ten expressions, the sequences
818429218031851879211521610... (with f(x) = 2x^3 − 5x^2 + 11x)
591215182124273034... (with f(x) = πx + e)
are disjunctive on {0,1,2,3,4,5,6,7,8,9}.
Rich numbers
A rich number or disjunctive number is a real number whose expansion with respect to some base b is a disjunctive sequence over the alphabet {0,...,b−1}. Every normal number in base b is disjunctive but not conversely. The real number x is rich in base b if and only if the set { x b^n mod 1 : n = 0, 1, 2, ... } is dense in the unit interval.
A number that is disjunctive to every base is called absolutely disjunctive or is said to be a lexicon. Every string in every alphabet occurs within a lexicon. A set is called "comeager" or "residual" if it contains the intersection of a countable family of open dense sets. The set of absolutely disjunctive reals is residual. It is conjectured that every real irrational algebraic number is absolutely disjunctive.
Notes
References
Sequences and series | Disjunctive sequence | [
"Mathematics"
] | 1,036 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Mathematical objects"
] |
4,252,320 | https://en.wikipedia.org/wiki/Overflow%20flag | In computer processors, the overflow flag (sometimes called the V flag) is usually a single bit in a system status register used to indicate when an arithmetic overflow has occurred in an operation, indicating that the signed two's-complement result would not fit in the number of bits used for the result. Some architectures may be configured to automatically generate an exception on an operation resulting in overflow.
As an example, suppose we add 127 and 127 using 8-bit registers. 127 + 127 is 254, but using 8-bit arithmetic the result would be 1111 1110 in binary, which is the two's complement encoding of −2, a negative number. A negative sum of positive operands (or vice versa) is an overflow. The overflow flag would then be set so the program can be aware of the problem and mitigate this or signal an error. The overflow flag is thus set when the most significant bit (here considered the sign bit) is changed by adding two numbers with the same sign (or subtracting two numbers with opposite signs). Overflow cannot occur when the signs of the two addition operands are different (or the signs of the two subtraction operands are the same).
When binary values are interpreted as unsigned numbers, the overflow flag is meaningless and normally ignored. One of the advantages of two's complement arithmetic is that the addition and subtraction operations do not need to distinguish between signed and unsigned operands. For this reason, most computer instruction sets do not distinguish between signed and unsigned operands, generating both (signed) overflow and (unsigned) carry flags on every operation, and leaving it to following instructions to pay attention to whichever one is of interest.
Internally, the overflow flag is usually generated by an exclusive or of the internal carry into and out of the sign bit.
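The sketch below mirrors this in software for 8-bit addition: it derives the overflow flag both from the operand and result sign bits and from the XOR of the carries into and out of bit 7, and reproduces the 127 + 127 example above. The function names are illustrative and do not come from any particular instruction-set manual.

#include <stdio.h>
#include <stdint.h>

/* Signed-overflow flag for 8-bit addition, derived from the sign bits:
   overflow occurs when both operands have the same sign but the result
   has the opposite sign. */
int overflow_from_signs(uint8_t a, uint8_t b)
{
    uint8_t r = (uint8_t)(a + b);
    return (~(a ^ b) & (a ^ r) & 0x80) != 0;
}

/* The same flag from the carries: XOR of the carry into bit 7 and the
   carry out of bit 7, as described above. */
int overflow_from_carries(uint8_t a, uint8_t b)
{
    unsigned carry_into_msb  = (((a & 0x7F) + (b & 0x7F)) >> 7) & 1;
    unsigned carry_out_of_msb = (((unsigned)a + (unsigned)b) >> 8) & 1;
    return (int)(carry_into_msb ^ carry_out_of_msb);
}

int main(void)
{
    uint8_t a = 127, b = 127;
    uint8_t r = (uint8_t)(a + b);
    printf("127 + 127 -> 0x%02X (%d as signed), V=%d, V=%d\n",
           r, (int8_t)r, overflow_from_signs(a, b), overflow_from_carries(a, b));
    return 0;
}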
Bitwise operations (and, or, xor, not, rotate) do not have a notion of signed overflow, so the defined value varies on different processor architectures. Some processors clear the bit unconditionally (which is useful because bitwise operations set the sign flag, and the clear overflow flag then indicates that the sign flag is valid), others leave it unchanged, and some set it to an undefined value. Shifts and multiplies do permit a well-defined value, but it is not consistently implemented. For example, the x86 instruction set only defines the overflow flag for multiplies and 1-bit shifts; multi-bit shifts leave it undefined.
References
Computer arithmetic | Overflow flag | [
"Mathematics"
] | 521 | [
"Computer arithmetic",
"Arithmetic"
] |
4,252,560 | https://en.wikipedia.org/wiki/Rotational%20temperature | The characteristic rotational temperature (θ_R or θ_rot) is commonly used in statistical thermodynamics to simplify the expression of the rotational partition function and the rotational contribution to molecular thermodynamic properties. It has units of temperature and is defined as
θ_R = hcB/k_B = ħ^2/(2 I k_B), where B is the rotational constant (in wavenumber units), I is a molecular moment of inertia, h is the Planck constant, c is the speed of light, ħ = h/(2π) is the reduced Planck constant, and k_B is the Boltzmann constant.
The physical meaning of θ_R is as an estimate of the temperature at which thermal energy (of the order of k_B T) is comparable to the spacing between rotational energy levels (of the order of hcB). At about this temperature the population of excited rotational levels becomes important. Some typical values are given in the table. In each case the value refers to the most common isotopic species.
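A small numerical sketch of this definition is given below; it converts a rotational constant quoted in cm^-1 into a characteristic rotational temperature. The value used for carbon monoxide (about 1.92 cm^-1) is an approximate literature figure included only for illustration.

#include <stdio.h>

int main(void)
{
    /* Physical constants (SI, with c in cm/s to match B in 1/cm) */
    double h  = 6.62607015e-34;  /* Planck constant, J s */
    double c  = 2.99792458e10;   /* speed of light, cm/s */
    double kB = 1.380649e-23;    /* Boltzmann constant, J/K */

    double B = 1.92;             /* rotational constant of CO, 1/cm (approximate) */

    double theta_R = h * c * B / kB;  /* characteristic rotational temperature, K */
    printf("theta_R for B = %.2f cm^-1: %.2f K\n", B, theta_R);
    return 0;
}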
References
See also
Rotational spectroscopy
Vibrational temperature
Vibrational spectroscopy
Infrared spectroscopy
Spectroscopy
Atomic physics
Molecular physics | Rotational temperature | [
"Physics",
"Chemistry"
] | 183 | [
"Thermodynamics stubs",
" and optical physics stubs",
"Molecular physics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Thermodynamics",
"Atomic",
"nan",
"Molecular physics stubs",
"Physical chemistry stubs",
" and optical physics"
] |
4,252,699 | https://en.wikipedia.org/wiki/Methoprene | Methoprene is a juvenile hormone (JH) analog which acts as a growth regulator when used as an insecticide (IRAC group 7A). It is an amber-colored liquid with a faint fruity odor.
Methoprene does not kill insects. Instead, it interferes with an insect’s life cycle and prevents it from reaching maturity or reproducing. Juvenile growth hormones must be absent for a pupa to molt to an adult, so methoprene-treated larvae will be unable to successfully change from pupae to adults. This breaks the biological life cycle of the insect, preventing recurring infestation.
Methoprene is considered a biological pesticide because rather than controlling target pests through direct toxicity, methoprene interferes with an insect’s lifecycle and prevents it from reaching maturity or reproducing.
Applications
Methoprene is used in the production of a number of foods, including meat, milk, mushrooms, peanuts, rice, and cereals. It also has several uses on domestic animals (pets) for controlling fleas.
It is used in drinking water cisterns to control mosquitoes which spread dengue fever and malaria. Methoprene is commonly used as a mosquito larvicide used to help stop the spread of the West Nile virus.
Methoprene is also used as a food additive in cattle feed to prevent fly breeding in the manure.
Health and safety
According to its materials safety data sheet (MSDS), methoprene is a material that may be irritating to the mucous membranes and upper respiratory tract; may be harmful by inhalation, ingestion, or skin absorption; may cause eye, skin, or respiratory system irritation; and is very toxic to aquatic life. The GHS signal word is "Warning", with notes such as "P273: Avoid release into the environment" and "P391: Collect spillage".
Methoprene is suspected to be highly toxic to lobsters. Like insects and mites, lobsters are arthropods.
References
External links
Methoprene Pesticide Fact Sheet - Environmental Protection Agency
Methoprene Pesticide Information Profile - Extension Toxicology Network
Insecticides
Carboxylate esters
Ethers
Dienes
Isopropyl esters | Methoprene | [
"Chemistry"
] | 479 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
4,252,726 | https://en.wikipedia.org/wiki/Gatifloxacin | Gatifloxacin (brand names Gatiflo, Tequin, and Zymar) is an antibiotic of the fourth-generation fluoroquinolone family, that like other members of that family, inhibits the bacterial enzymes DNA gyrase and topoisomerase IV.
It was patented in 1986 and approved for medical use in 1999.
Side effects
A Canadian study published in the New England Journal of Medicine in March 2006 claimed that Tequin can have significant side effects including dysglycemia. An editorial by Jerry Gurwitz in the same issue called for the Food and Drug Administration (FDA) to consider giving Tequin a black box warning. This editorial followed distribution of a letter dated February 15 by Bristol-Myers Squibb to health care providers indicating action taken with the FDA to strengthen warnings for the medication. Subsequently, Bristol-Myers Squibb reported it would stop manufacture of Tequin, end sales of the drug after existing stockpiles were exhausted, and return all rights to Kyorin.
By contrast, ophthalmic gatifloxacin is generally well tolerated. The observed systemic concentration of the drug following oral administration of 400 mg (0.01 ounces) gatifloxacin is approximately 800 times higher than that of the 0.5% gatifloxacin eye drop. Given as an eye drop, gatifloxacin has very low systemic exposure. Therefore, the systemic exposures resulting from the gatifloxacin ophthalmic solution are not likely to pose any risk for systemic toxicities.
Contraindications
Hypersensitivity
Society and culture
Availability
Gatifloxacin is currently available in the US and Canada only as an ophthalmic solution.
In 2011, the Union Health and Family Welfare Ministry of India banned the manufacture, sale, and distribution of gatifloxacin because of its adverse side effects.
In China, gatifloxacin is sold in tablet as well as in eye drop formulations.
Brand names
Bristol-Myers Squibb introduced gatifloxacin in 1999 under the proprietary name Tequin for the treatment of respiratory tract infections, having licensed the medication from Kyorin Pharmaceutical Company of Japan. Allergan produces it in eye-drop formulation under the names Zymar, Zymaxid and Zylopred. In many countries, gatifloxacin is also available as tablets and in various aqueous solutions for intravenous therapy.
References
Fluoroquinolone antibiotics
Withdrawn drugs
1,4-di-hydro-7-(1-piperazinyl)-4-oxo-3-quinolinecarboxylic acids
Drugs developed by AbbVie
Phenol ethers
Cyclopropyl compounds
Anti-tuberculosis drugs | Gatifloxacin | [
"Chemistry"
] | 582 | [
"Drug safety",
"Withdrawn drugs"
] |
4,252,869 | https://en.wikipedia.org/wiki/Q%20%28number%20format%29 | The Q notation is a way to specify the parameters of a binary fixed point number format. For example, in Q notation, the number format denoted by Q8.8 means that the fixed point numbers in this format have 8 bits for the integer part and 8 bits for the fraction part.
A number of other notations have been used for the same purpose.
Definition
Texas Instruments version
The Q notation, as defined by Texas Instruments, consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits.
By default, the notation describes a signed binary fixed-point format, with the unscaled integer being stored in two's complement format, used in most binary processors. The first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus, the total number w of bits used is 1 + m + n.
For example, the specification Q3.12 describes a signed binary fixed-point number with w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits that are the fraction. That is, a 16-bit signed (two's complement) integer that is implicitly multiplied by the scaling factor 2^−12.
In particular, when n is zero, the numbers are just integers. If m is zero, all bits except the sign bit are fraction bits; then the range of the stored number is from −1.0 (inclusive) to +1.0 (exclusive).
The m and the dot may be omitted, in which case they are inferred from the size of the variable or register where the value is stored. Thus, Q12 means a signed integer with any number of bits, that is implicitly multiplied by 2^−12.
The letter U can be prefixed to the Q to denote an unsigned binary fixed-point format. For example, UQ1.15 describes values represented as unsigned 16-bit integers with an implicit scaling factor of 2^−15, which range from 0.0 to (2^16 − 1)/2^15 = +1.999969482421875.
ARM version
A variant of the Q notation has been in use by ARM. In this variant, the m number includes the sign bit. For example, a 16-bit signed integer would be denoted Q15.0 in the TI variant, but Q16.0 in the ARM variant.
Characteristics
The resolution (difference between successive values) of a Qm.n or UQm.n format is always 2^−n. The range of representable values depends on the notation used: in the TI convention, signed Qm.n covers −2^m to 2^m − 2^−n and unsigned UQm.n covers 0 to 2^m − 2^−n, while in the ARM convention, where m includes the sign bit, signed Qm.n covers −2^(m−1) to 2^(m−1) − 2^−n.
For example, a Q15.1 format number requires 15+1 = 16 bits, has resolution 2^−1 = 0.5, and the representable values range from −2^14 = −16384.0 to +2^14 − 2^−1 = +16383.5. In hexadecimal, the negative values range from 0x8000 to 0xFFFF followed by the non-negative ones from 0x0000 to 0x7FFF.
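To make the resolution and range concrete, the sketch below converts a real number to and from a 16-bit fixed-point value with 8 fraction bits (Q7.8 in the TI convention, Q8.8 in the ARM convention). The helper names, the choice of 8 fraction bits, and the sample value are arbitrary illustrations.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC_BITS 8                  /* n: number of fraction bits */
#define SCALE     (1 << FRAC_BITS)   /* 2^n = 256 */

/* Convert a double to the nearest representable fixed-point value. */
int16_t to_fixed(double x)
{
    return (int16_t)lround(x * SCALE);
}

/* Convert back to a double. */
double to_double(int16_t q)
{
    return (double)q / SCALE;
}

int main(void)
{
    double x = 3.14159;
    int16_t q = to_fixed(x);

    printf("stored integer : %d\n", q);                  /* 804 */
    printf("recovered value: %.6f\n", to_double(q));     /* 3.140625 */
    printf("resolution     : %.6f\n", 1.0 / SCALE);      /* 0.003906 */
    printf("range          : %.6f .. %.6f\n",
           to_double(INT16_MIN), to_double(INT16_MAX));  /* -128.0 .. 127.996094 */
    return 0;
}

As expected, the recovered value differs from the input by less than the 2^−8 resolution.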
Math operations
Q numbers are a ratio of two integers: the numerator is kept in storage, the denominator is equal to 2^n.
Consider the following example:
The Q8 denominator equals 2^8 = 256
1.5 equals 384/256
384 is stored, 256 is inferred because it is a Q8 number.
If the Q number's base is to be maintained (n remains constant), the Q number math operations must keep the denominator constant. Writing two general Q numbers as N1/D and N2/D, where N1 and N2 are the stored integers and D = 2^n is the common implicit denominator (in the example above, the stored integer is 384 and D is 256), the operations are:
N1/D + N2/D = (N1 + N2)/D
N1/D − N2/D = (N1 − N2)/D
(N1/D) × (N2/D) = ((N1 × N2)/D)/D
(N1/D) ÷ (N2/D) = ((N1 × D)/N2)/D
Because the denominator is a power of two, the multiplication can be implemented as an arithmetic shift to the left and the division as an arithmetic shift to the right; on many processors shifts are faster than multiplication and division.
To maintain accuracy, the intermediate multiplication and division results must be double precision and care must be taken in rounding the intermediate result before converting back to the desired Q number.
Using C, the operations are (note that here, Q refers to the fractional part's number of bits):
Addition
int16_t q_add(int16_t a, int16_t b)
{
return a + b;
}
With saturation
int16_t q_add_sat(int16_t a, int16_t b)
{
int16_t result;
int32_t tmp;
tmp = (int32_t)a + (int32_t)b;
if (tmp > 0x7FFF)
tmp = 0x7FFF;
if (tmp < -1 * 0x8000)
tmp = -1 * 0x8000;
result = (int16_t)tmp;
return result;
}
Unlike floating point ±Inf, saturated results are not sticky and will unsaturate on adding a negative value to a positive saturated value (0x7FFF) and vice versa in the implementation shown. In assembly language, the Signed Overflow flag can be used to avoid the typecasts needed for that C implementation.
Subtraction
int16_t q_sub(int16_t a, int16_t b)
{
return a - b;
}
Multiplication
// precomputed value:
#define K (1 << (Q - 1))
// saturate to range of int16_t
int16_t sat16(int32_t x)
{
if (x > 0x7FFF) return 0x7FFF;
else if (x < -0x8000) return -0x8000;
else return (int16_t)x;
}
int16_t q_mul(int16_t a, int16_t b)
{
int16_t result;
int32_t temp;
temp = (int32_t)a * (int32_t)b; // result type is operand's type
// Rounding; mid values are rounded up
temp += K;
// Correct by dividing by base and saturate result
result = sat16(temp >> Q);
return result;
}
Division
int16_t q_div(int16_t a, int16_t b)
{
/* pre-multiply by the base (Upscale to Q16 so that the result will be in Q8 format) */
int32_t temp = (int32_t)a << Q;
/* Rounding: mid values are rounded up (down for negative values). */
/* OR compare most significant bits i.e. if (((temp >> 31) & 1) == ((b >> 15) & 1)) */
if ((temp >= 0 && b >= 0) || (temp < 0 && b < 0)) {
temp += b / 2; /* OR shift 1 bit i.e. temp += (b >> 1); */
} else {
temp -= b / 2; /* OR shift 1 bit i.e. temp -= (b >> 1); */
}
return (int16_t)(temp / b);
}
See also
Fixed-point arithmetic
Floating-point arithmetic
References
Further reading
(Note: the accuracy of the article is in dispute; see discussion.)
External links
Computer arithmetic | Q (number format) | [
"Mathematics"
] | 1,589 | [
"Computer arithmetic",
"Arithmetic"
] |
4,252,900 | https://en.wikipedia.org/wiki/Aluminium%20amalgam | Aluminium amalgam is a solution of aluminium in mercury. In practice the term refers to particles or pieces of aluminium with a surface coating of the amalgam. A gray solid, it is typically used for organic reductions. It is written as Al(Hg) in reactions.
Al(Hg) may be prepared by either grinding aluminium pellets or wire in mercury, or by allowing aluminium wire to react with a solution of mercury(II) chloride in water.
This amalgam is used as a chemical reagent to reduce compounds, such as of imines to amines. The aluminium is the ultimate electron donor, and the mercury serves to mediate the electron transfer and to remove passivating oxide.
The reaction and the waste from it contains mercury, so special safety precautions and disposal methods are needed. As an environmentally friendlier alternative, hydrides or other reducing agents can often be used to accomplish the same synthetic result. An alloy of aluminium and gallium was proposed as a method of hydrogen generation, as the gallium renders the aluminium more reactive by preventing it from forming an oxide layer. Mercury has this same effect on aluminium, but also serves additional functions related to electron transfer that make aluminium amalgams useful for some reactions that would not be possible with gallium.
Reactivity
Aluminium exposed to air is ordinarily protected by a molecule-thin layer of its own oxide. This aluminium oxide layer serves as a protective barrier to the underlying unoxidized aluminium and prevents amalgamation from occurring. No reaction takes place when oxidized aluminium is exposed to mercury. However, if any elemental aluminium is exposed (even by a recent scratch), the mercury may combine with it to form the amalgam. This amalgamation can continue well beyond the vulnerable aluminium that was exposed, potentially reacting with a large amount of the raw aluminium before it finally ends.
The net result is similar to the mercury electrodes often used in electrochemistry, however instead of providing electrons from an electrical supply, they are provided by the aluminium which becomes oxidized in the process. The reaction that occurs at the surface of the amalgam may actually be a hydrogenation rather than a reduction.
The presence of water in the solution is reportedly necessary; the electron rich amalgam will oxidize aluminium and generate hydrogen gas from water, creating aluminium hydroxide (Al(OH)3) and free mercury. The electrons from the aluminium reduce mercuric Hg2+ ion to metallic mercury. The metallic mercury can then form an amalgam with the exposed aluminium. The amalgamated aluminium then is oxidized by water, converting the aluminium to aluminium hydroxide and releasing free metallic mercury. The generated mercury then cycles through these last two steps until the aluminium supply is exhausted.
2Al + 3Hg^2+ + 6H2O -> 2Al(OH)3 + 3Hg + 6H+
Hg + Al -> Hg*Al
2 Hg*Al + 6 H2O -> 2 Al(OH)3 + 2 Hg + 3 H2
Due to the reactivity of aluminium amalgam, restrictions are placed on the use and handling of mercury in proximity with aluminium. In particular, large amounts of mercury are not allowed aboard aircraft under most circumstances because of the risk of it forming amalgam with exposed aluminium parts in the aircraft. Even the transportation and packaging of mercury-containing thermometers and barometers is severely restricted. Accidental mercury spills in aircraft do sometimes result in insurance write-offs.
See also
Amalgam
Aluminium-gallium
References
External links
Video of Mercury-Aluminium-Gallium amalgam attacking aluminium bar
Video ( closeup ) of Mercury-Aluminium-Gallium amalgam attacking aluminium bar
Aluminium alloys
Reducing agents
Amalgams | Aluminium amalgam | [
"Chemistry"
] | 774 | [
"Redox",
"Aluminium alloys",
"Reducing agents",
"Alloys",
"Amalgams"
] |
4,253,041 | https://en.wikipedia.org/wiki/MIL-STD-810 | MIL-STD-810, U.S. Department of Defense Test Method Standard, Environmental Engineering Considerations and Laboratory Tests, is a United States Military Standard that emphasizes tailoring an equipment's environmental design and test limits to the conditions that it will experience throughout its service life, and establishing chamber test methods that replicate the effects of environments on the equipment rather than imitating the environments themselves. Although prepared specifically for U.S. military applications, the standard is often applied for commercial products as well.
The standard's guidance and test methods are intended to:
define environmental stress sequences, durations, and levels of equipment life cycles;
be used to develop analysis and test criteria tailored to the equipment and its environmental life cycle;
evaluate equipment's performance when exposed to a life cycle of environmental stresses
identify deficiencies, shortcomings, and defects in equipment design, materials, manufacturing processes, packaging techniques, and maintenance methods; and
demonstrate compliance with contractual requirements.
MIL-STD-810G was replaced by MIL-STD-810H in 2019. In 2022, MIL-STD-810H Change Notice 1 was released. As of 2024, the latest version is MIL-STD-810H with Change Notice 1.
Cognizant agency
MIL-STD-810 is maintained by a Tri-Service partnership that includes the United States Air Force, Army, and Navy. The U.S. Army Test and Evaluation Command, or ATEC, serves as Lead Standardization Activity / Preparing Activity, and is chartered under the Defense Standardization Program (DSP) with maintaining the functional expertise and serving as the DoD-wide technical focal point for the standard. The Institute of Environmental Sciences and Technology is the Administrator for WG-DTE043: MIL-STD-810, the Working Group tasked with reviewing the current environmental testing guidance and recommending improvements to the DOD Tri-Service Working Group.
Scope and purpose
MIL-STD-810 addresses a broad range of environmental conditions that include: low pressure for altitude testing; exposure to high and low temperatures plus temperature shock (both operating and in storage); rain (including wind blown and freezing rain); humidity, fungus, salt fog for corrosion testing; sand and dust exposure; explosive atmosphere; leakage; acceleration; shock and transport shock; gunfire vibration; and random vibration. The standard describes environmental management and engineering processes that can be of enormous value to generate confidence in the environmental worthiness and overall durability of a system design. The standard contains military acquisition program planning and engineering direction to consider the influences that environmental stresses have on equipment throughout all phases of its service life. The document does not impose design or test specifications. Rather, it describes the environmental tailoring process that results in realistic materiel designs and test methods based on materiel system performance requirements.
Finally, there are limitations inherent in laboratory testing that make it imperative to use proper engineering judgment to extrapolate laboratory results to results that may be obtained under actual service conditions. In many cases, real-world environmental stresses (singularly or in combination) cannot be duplicated in test laboratories. Therefore, users should not assume that an item that passes laboratory testing also will pass field/fleet verification tests.
History and evolution
In 1945, the Army Air Force (AAF) released the first specification providing a formal methodology for testing equipment under simulated environmental conditions. That document, entitled AAF Specification 41065, Equipment: General Specification for Environmental Test of, is the direct ancestor of MIL-STD-810. In 1965, the USAF released a technical report with data and information on the origination and development of natural and induced environmental tests intended for aerospace and ground equipment. By using that document, the design engineer obtained a clearer understanding of the interpretation, application, and relationship of environmental testing to military equipment and materiel.
The Institute of Environmental Sciences and Technology (IEST), a non-profit technical society, released the publication History and Rationale of MIL-STD-810 to capture the thought process behind the evolution of MIL-STD-810. It also provides a development history of test methods, rationale for many procedural changes, tailoring guidance for many test procedures, and insight into the future direction of the standard.
The MIL-STD-810 test series originally addressed generic laboratory environmental testing. The first edition of MIL-STD-810 in 1962 included only a single sentence allowing users to modify tests to reflect environmental conditions. Subsequent editions contained essentially the same phrase, but did not elaborate on the subject until MIL-STD-810D was issued marking one of the more significant revisions of the standard with its focus more on shock and vibration tests that closely mirrored real-world operating environments. MIL-STD-810F further defined test methods while continuing the concept of creating test chambers that simulate conditions likely to be encountered during a product's useful life rather than simply replicating the actual environments. More recently, MIL-STD-810G implements Test Method 527 calling for the use of multiple vibration exciters to perform multi-axis shaking that simultaneously excites all test article resonances and simulates real-world vibrations. This approach replaces the legacy approach of three distinct tests, that is, shaking a load first in its x axis, then its y axis, and finally in its z axis.
A matrix of the tests and methods of MIL-STD-810 through Revision G is available on the web and is useful for comparing the changes among the various revisions.
The following table traces the specification's evolution in terms of environmental tailoring to meet a specific user's needs.
Part one – General program guidelines
Part One of MIL-STD-810 describes management, engineering, and technical roles in the environmental design and test tailoring process. It focuses on the process of tailoring design and test criteria to the specific environmental conditions an equipment item is likely to encounter during its service life. New appendices support the succinctly presented text of Part One. It describes the tailoring process (i.e., systematically considering detrimental effects that various environmental factors may have on a specific equipment throughout its service life) and applies this process throughout the equipment's life cycle to meet user and interoperability needs.
Part two – Laboratory test methods
Part Two of MIL-STD-810 contains the environmental laboratory test methods to be applied using the test tailoring guidelines described in Part One of the document. With the exception of Test Method 528, these methods are not mandatory, but rather the appropriate method is selected and tailored to generate the most relevant test data possible. Each test method in Part Two contains some environmental data and references, and it identifies particular tailoring opportunities. Each test method supports the test engineer by describing preferred laboratory test facilities and methodologies. These environmental management and engineering processes can be of enormous value to generate confidence in the environmental worthiness and overall durability of equipment and materiel. Still, the user must recognize that there are limitations inherent in laboratory testing that make it imperative to use engineering judgment when extrapolating from laboratory results to results that may be obtained under actual service conditions. In many cases, real-world environmental stresses (singularly or in combination) cannot be duplicated practically or reliably in test laboratories. Therefore, users should not assume that a system or component that passes laboratory tests of this standard also would pass field/fleet verification trials.
Updated Test Methods and Procedures in MIL-STD-810H are listed below:
Test Method 500.6 Low Pressure (Altitude)
Procedure I - Storage/Air Transport: Procedure I is appropriate if the materiel is to be transported or stored at high ground elevations or transported by air in its shipping/storage configuration.
Procedure II - Operation/Air Carriage: Use Procedure II to determine the performance of the materiel under low pressure conditions.
Procedure III - Rapid Decompression: Use Procedure III to determine if a rapid decrease in pressure of the surrounding environment will cause a materiel reaction that would endanger nearby personnel or the platform (ground vehicle or aircraft) in which it is being transported.
Test Method 501.7 High Temperature
Procedure I - Storage: Use Procedure I to investigate how high temperatures during storage affect the materiel (integrity of materials, and safety/performance of the materiel).
Procedure II - Operation: Use Procedure II to investigate how high ambient temperatures may affect materiel performance while it is operating.
Procedure III - Tactical-Standby to Operational: This procedure evaluates the materiel’s performance at the operating temperatures after being presoaked at non-operational temperatures.
Test Method 502.7 Low Temperature
Procedure I - Storage: Use Procedure I to investigate how low temperatures during storage affect materiel safety during and after storage, and performance after storage.
Procedure II - Operation: Use Procedure II to investigate how well the materiel operates in low temperature environments.
Procedure III - Manipulation: Use Procedure III to investigate the ease with which the materiel can be set up or assembled, operated, and disassembled by personnel wearing heavy, cold-weather clothing.
Test Method 503.7 Temperature Shock
Procedure I-A: One-way Shock(s) from Constant Extreme Temperature.
Procedure I-B: Single Cycle Shock from Constant Extreme Temperature.
Procedure I-C: Multi-Cycle Shocks from Constant Extreme Temperature.
Procedure I-D: Shocks To or From Controlled Ambient Temperature.
Test Method 504.3 Contamination by Fluids
Test Method 505.7 Solar Radiation (Sunshine)
Procedure I – Cycling: Use Procedure I to investigate response temperatures when materiel is exposed in the open in realistically hot climates, and is expected to perform without degradation during and after exposure.
Procedure II – Steady State: Use Procedure II to investigate the effects on materiel of long periods of exposure to sunshine.
Test Method 506.6 Rain
Procedure I - Rain and Blowing Rain: Procedure I is applicable for materiel that will be deployed out-of-doors and that will be unprotected from rain or blowing rain.
Procedure II - Exaggerated: Consider Procedure II when large (shelter-size) materiel is to be tested and a blowing-rain facility is not available or practical.
Procedure III - Drip: Procedure III is appropriate when materiel is normally protected from rain but may be exposed to falling water from condensation or leakage from upper surfaces.
Test Method 507.6 Humidity
Procedure I – Induced (Storage and Transit) and Natural Cycles: Once a cycle is selected, perform the storage and transit portion first, followed by the corresponding natural environment portion of the cycle.
Procedure II – Aggravated: Procedure II exposes the test item to more extreme temperature and humidity levels than those found in nature (without contributing degrading elements), but for shorter durations.
Test Method 508.8 Fungus
Test Method 509.7 Salt Fog
Test Method 510.7 Sand and Dust
Procedure I - Blowing Dust: Use Procedure I to investigate the susceptibility of materiel to concentrations of blowing dust (< 150 μm).
Procedure II - Blowing Sand: Use Procedure II to investigate the susceptibility of materiel to the effects of blowing sand (150 μm to 850 μm).
Test Method 511.7 Explosive Atmosphere
Procedure I - Explosive Atmosphere: This procedure is applicable to all types of sealed and unsealed materiel. This test evaluates the ability of the test item to be operated in a fuel vapor environment without igniting the environment.
Procedure II - Explosion Containment: This procedure is used to determine the ability of the test item's case or other enclosures to contain an explosion or flame that is a result of an internal materiel malfunction.
Test Method 512.6 Immersion
Test Method 513.8 Acceleration
Procedure I - Structural Test: Procedure I is used to demonstrate that materiel will structurally withstand the loads induced by in-service accelerations.
Procedure II - Operational Test: Procedure II is used to demonstrate that materiel will operate without degradation during and after being subjected to loads induced by in-service acceleration.
Procedure III - Crash Hazard Acceleration Test: Procedure III is used to disclose structural failures of materiel that may present a hazard to personnel during or after a crash.
Procedure IV – Strength Test: Procedure IV is a strength test primarily intended to generate specific loads in primary structures using sine burst testing.
Test Method 514.8 Vibration
Procedure I - General Vibration: Use Procedure I for materiel to be transported as secured cargo or deployed for use on a vehicle.
Procedure II - Loose Cargo Transportation: Use this procedure for materiel to be carried in/on trucks, trailers, or tracked vehicles and not secured to (tied down in) the carrying vehicle.
Procedure III - Large Assembly Transportation: This procedure is intended to replicate the vibration and shock environment incurred by large assemblies of materiel installed or transported by wheeled or tracked vehicles.
Procedure IV - Assembled Aircraft Store Captive Carriage and Free Flight: Apply Procedure IV to fixed wing aircraft carriage and free flight portions of the environmental life cycles of all aircraft stores, and to the free flight phases of ground or sea-launched missiles.
Test Method 515.8 Acoustic Noise
Procedure I-a - Uniform Intensity Acoustic Noise: Procedure Ia has a uniform intensity shaped spectrum of acoustic noise that impacts all the exposed materiel surfaces.
Procedure I-b - Direct Field Acoustic Noise (DFAN): Procedure Ib uses normal incident plane waves in a shaped spectrum of acoustic noise to impact directly on all exposed test article surfaces without external boundary reflections.
Procedure II - Grazing Incidence Acoustic Noise: Procedure II includes a high intensity, rapidly fluctuating acoustic noise with a shaped spectrum that impacts the materiel surfaces in a particular direction - generally along the long dimension of the materiel.
Procedure III - Cavity Resonance Acoustic Noise: In Procedure III, the intensity and, to a great extent, the frequency content of the acoustic noise spectrum is governed by the relationship between the geometrical configuration of the cavity and the materiel within the cavity.
Test Method 516.8 Shock
Procedure I - Functional Shock: Procedure I is intended to test materiel (including mechanical, electrical, hydraulic, and electronic) in its functional mode, and to assess the physical integrity, continuity, and functionality of the materiel to shock.
Procedure II - Transportation Shock: Procedure II is used to evaluate the response of an item or restraint system to transportation environments that create a repetitive shock load.
Procedure III - Fragility: Procedure III is used early in the item development program to determine the materiel's fragility level, in order that packaging, storage, or mounting configurations may be designed to protect the materiel's physical and functional integrity.
Procedure IV - Transit Drop: Procedure IV is a physical drop test, and is intended for materiel either outside of, or within its transit or combination case, or as prepared for field use (carried to a combat situation by man, truck, rail, etc.).
Procedure V - Crash Hazard Shock Test: Procedure V is for materiel mounted in air or ground vehicles that could break loose from its mounts, tiedowns, or containment configuration during a crash, and present a hazard to vehicle occupants and bystanders.
Procedure VI - Bench Handling: Procedure VI is intended for materiel that may typically experience bench handling, bench maintenance, or packaging.
Procedure VII – Pendulum Impact: Procedure VII is intended to test the ability of large shipping containers to resist horizontal impacts, and to determine the ability of the packaging and packing methods to provide protection to the contents when the container is impacted.
Procedure VIII - Catapult Launch/Arrested Landing: Procedure VIII is intended for materiel mounted in or on fixed-wing aircraft that is subject to catapult launches and arrested landings.
Test Method 517.3 Pyroshock
Procedure I - Near-field with Actual Configuration: Procedure I is intended to test materiel in its functional mode and actual configuration (materiel/pyrotechnic device physical configuration), and to ensure it can survive and function as required when tested using the actual pyrotechnic test device in its intended installed configuration.
Procedure II - Near-field with Simulated Configuration: Procedure II is intended to test materiel in its functional mode, but with a simulated structural configuration, and to ensure it can survive and function as required when in its actual materiel/pyrotechnic device physical configuration.
Procedure III - Mid-field with a Mechanical Test Device: Pyroshock can be applied using conventional high acceleration amplitude/frequency test input devices between 3,000 and 10,000 Hz.
Procedure IV - Far-field Using a Mechanical Test Device: Pyroshock can be applied using conventional high acceleration amplitude/frequency test input devices frequencies less than 3,000 Hz.
Procedure V - Far-field Using an Electrodynamic Shaker: On occasion, pyroshock response can be replicated using conventional electrodynamic shakers.
Test Method 518.2 Acidic Atmosphere
Test Method 519.8 Gunfire Shock
Procedure I. Measured Materiel Input/Response Time History Under TWR: Measured in-service gunfire shock environment for materiel is replicated under laboratory exciter waveform control (Method 525.2 TWR) to achieve a near exact reproduction of the measured in-service gunfire shock environment.
Procedure II. SRS Generated Shock Time History Pulse Sequence Under TWR: This procedure is based on the former practice of processing measured gunfire shock in terms of the SRS, applied either to individual gunfire pulses or to the overall gunfire pulse sequence.
Procedure III. Stochastically Generated Materiel Input From Preliminary Design Spectrum Under TWR: This procedure is ad hoc, used when the necessary field-measured time trace information is lacking, and is a last resort for providing guidelines for the design of materiel to resist a gunfire shock environment.
Test Method 520.5 Combined Environments
Test Method 521.4 Icing/Freezing Rain
Test Method 522.2 Ballistic Shock
Procedure I - BH&T: Ballistic shock is applied in its natural form using live fire testing.
Procedure II - LSBSS: LSBSS is a low cost option for producing the spectrum of ballistic shock without the expense of live fire testing.
Procedure III - LWSM: Ballistic shock is simulated using a hammer impact. This procedure is used to test shock mounted components up to 113.6 kg (250 lb), which are known to be insensitive to the higher frequency content of ballistic shock.
Procedure IV - Mechanical Shock Simulator: Ballistic shock is simulated using a metal-to-metal impact (gas driven projectile).
Procedure V - MWSM: Ballistic shock is simulated using a hammer impact. This procedure is used to test components up to 2273 kg (5000 lb) in weight which are known to be insensitive to the higher frequencies of ballistic shock.
Procedure VI - Drop Table: Ballistic shock is simulated by the impact resulting from a drop.
Test Method 523.4 Vibro-Acoustic/Temperature
Test Method 524.1 Freeze / Thaw
Procedure I – Diurnal Cycling Effects: To simulate the effects of diurnal cycling on materiel exposed to temperatures varying slightly above and below the freeze point that is typical of daytime warming and freezing at night when deposits of ice or condensation, or high relative humidity exist.
Procedure II – Fogging: For materiel transported directly from a cold to a warm environment such as from an unheated aircraft, missile or rocket, to a warm ground area, or from a cold environment to a warm enclosure, and resulting in free water or fogging.
Procedure III – Rapid Temperature Change: For materiel that is to be moved from a warm environment to a cold environment (freeze) and then back to the warm environment, inducing condensation (free water).
Test Method 525.2 Time Waveform Replication
Procedure I: The SESA replication of a field measured materiel time trace input/response.
Procedure II: The SESA replication of an analytically specified materiel time trace input/response.
Test Method 526.2 Rail Impact.
Test Method 527.2 Multi-Exciter Test
Procedure I – Time Domain Reference Criteria: This MET Procedure is an extension to the SESA TimeWaveform Replication (TWR) techniques addressed in Method 525.2.
Procedure II – Frequency Domain Reference Criteria: This MET Procedure is an extension to the SESA Spectral based vibration control techniques addressed in Method 514.8.
Test Method 528.1 Mechanical Vibrations of Shipboard Equipment (Type I – Environmental and Type II – Internally Excited)
Part three – World climatic regions
Part Three contains a compendium of climatic data and guidance assembled from several sources, including AR 70-38, Research, Development, Test and Evaluation of Materiel for Extreme Climatic Conditions (1979), a draft version of AR 70-38 (1990) that was developed using Air Land Battlefield Environment (ALBE) report information, Environmental Factors and Standards for Atmospheric Obscurants, Climate, and Terrain (1987), and MIL-HDBK-310, Global Climatic Data for Developing Military Products. It also provides planning guidance for realistic consideration (i.e., starting points) of climatic conditions in various regions throughout the world.
Applicability to "ruggedized" consumer products
U.S. MIL-STD-810 is a flexible standard that allows users to tailor test methods to fit the application. As a result, a vendor's claim of "...compliance to U.S. MIL-STD-810..." can be misleading: no commercial organization or agency certifies compliance, so commercial vendors can create test methods or approaches to fit their product. Suppliers can – and some do – take significant latitude with how they test their products and how they report the test results. Consumers who require rugged products should therefore ask vendors to specify: (i) against which test methods of the standard compliance is claimed; (ii) to which parameter limits the items were actually tested; and (iii) whether the testing was done internally or externally by an independent testing facility.
Related documents
Environmental Conditions for Airborne Equipment: The document DO-160G, Environmental Conditions and Test Procedures for Airborne Equipment, outlines a set of minimal standard environmental test conditions (categories) and corresponding test procedures for airborne equipment. It is published by RTCA, Inc., known as the Radio Technical Commission for Aeronautics until its re-incorporation in 1991 as a not-for-profit corporation that functions as a Federal Advisory Committee pursuant to the United States Federal Advisory Committee Act.
Environmental Test Methods for Defense Materiel: The Ministry of Defence (United Kingdom) provides requirements for environmental conditions experienced by defence materiel in service via the Defence Standard 00-35, Environmental Handbook for Defence Materiel (Part 3) Environmental Test Methods. The document contains environmental descriptions, a range of tests procedures and default test severities representing conditions that may be encountered during the equipment's life.
NATO Environmental Guidelines for Defence Equipment: The North Atlantic Treaty Organization (NATO) provides guidance to project managers, programme engineers, and environmental engineering specialists in the planning and implementation of environmental tasks via the Allied Environmental Conditions and Test Publication (AECTP) 100, Environmental Guidelines for Defence Materiel. The current document, AECTP-100 (Edition 3), was released January 2006.
Shock Testing Requirements for Naval Ships: The military specification entitled MIL-DTL-901E, Detail Specification, Shock Tests, H.I. (High-Impact) Shipboard Machinery, Equipment, and Systems, Requirements for (often mistakenly referred to as MIL-STD-901) covers shock testing requirements for ship board machinery, equipment, systems, and structures, excluding submarine pressure hull penetrations. Compliance to the document verifies the ability of shipboard installations to withstand shock loadings which may be incurred during wartime service due to the effects of nuclear or conventional weapons. The current specification was released 20 June 2017.
IEST Vibration and Shock Testing Recommended Practices: These are peer-reviewed documents that outline how to perform specific tests. They are published by the Institute of Environmental Sciences and Technology.
See also
IP Code
Rugged computer
Rugged smartphone
EN 62262
Industrial PC
References
External links
Military of the United States standards
Environmental testing | MIL-STD-810 | [
"Engineering"
] | 4,970 | [
"Environmental testing",
"Reliability engineering"
] |
4,253,054 | https://en.wikipedia.org/wiki/Ampere-hour | An ampere-hour or amp-hour (symbol: A⋅h or A h; often simplified as Ah) is a unit of electric charge, having dimensions of electric current multiplied by time, equal to the charge transferred by a steady current of one ampere flowing for one hour, or 3,600 coulombs.
The commonly seen milliampere-hour (symbol: mA⋅h, mA h, often simplified as mAh) is one-thousandth of an ampere-hour (3.6 coulombs).
Use
The ampere-hour is frequently used in measurements of electrochemical systems such as electroplating and for battery capacity where the commonly known nominal voltage is understood.
A milliampere second (mA⋅s) is a unit of measurement used in X-ray imaging, diagnostic imaging, and radiation therapy. It is equivalent to a millicoulomb. This quantity is proportional to the total X-ray energy produced by a given X-ray tube operated at a particular voltage. The same total dose can be delivered in different time periods depending on the X-ray tube current.
Expressing energy from a charge value in ampere-hours requires knowledge of the voltage: in a battery system, for example, accurate calculation of the energy delivered requires integration of the power delivered (the product of instantaneous voltage and instantaneous current) over the discharge interval. Generally, the battery voltage varies during discharge; an average value or nominal value may be used to approximate the integration of power.
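As a rough numerical illustration, here is a minimal sketch (the discharge samples are assumed, not taken from this article) of approximating the delivered energy by trapezoidal integration of the sampled power:

```python
# Assumed sample data: a constant 2 A discharge with a sagging terminal voltage.
time_h    = [0.0, 1.0, 2.0, 3.0, 4.0]       # hours
voltage_v = [12.6, 12.3, 12.0, 11.7, 11.0]  # volts
current_a = [2.0, 2.0, 2.0, 2.0, 2.0]       # amperes

energy_wh = 0.0
charge_ah = 0.0
for i in range(1, len(time_h)):
    dt = time_h[i] - time_h[i - 1]
    # Trapezoidal integration of power p = v * i, and of current for the charge.
    p_prev = voltage_v[i - 1] * current_a[i - 1]
    p_curr = voltage_v[i] * current_a[i]
    energy_wh += (p_prev + p_curr) / 2 * dt
    charge_ah += (current_a[i - 1] + current_a[i]) / 2 * dt

print(f"{charge_ah:.1f} Ah delivered, {energy_wh:.1f} Wh "
      f"(average voltage {energy_wh / charge_ah:.2f} V)")
```

The same 8 Ah of charge corresponds to a different energy figure for any other voltage profile, which is why the ampere-hour alone does not specify energy.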
When comparing the energy capacities of battery-based products that might have different internal cell chemistries or cell configurations, a simple ampere-hour rating is often insufficient. For example, consider a small UPS product with multiple DC outputs at different voltages that is listed with a single ampere-hour rating, e.g., 8800 mAh, quoted at the 3.2 V of its lithium iron phosphate (LiFePO4) cells. Its perceived energy capacity would be exaggerated by a factor of 3.75 compared with a sealed 12-volt lead-acid battery whose ampere-hour rating, e.g., 7 Ah, is based on the total output voltage rather than the internal cell voltage; the 12-volt output of the example UPS product can actually deliver only about a third of the energy of the example battery, not a quarter more. A direct replacement product for the example battery, in the same form factor and with comparable output voltage and energy capacity but built from lithium iron phosphate cells, might also be specified as 7 Ah, here based on output voltage rather than cell chemistry. For consumers without an engineering background, these difficulties would be avoided by a specification of the watt-hour rating instead (or additionally).
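A short worked check of the figures quoted above, assuming only the ratings given in the text and the standard capacity-times-voltage conversion to watt-hours:

```python
# Ratings as quoted in the comparison above.
ups_mah, ups_cell_v = 8800, 3.2   # UPS rated at the LiFePO4 cell voltage
sla_ah, sla_v = 7, 12             # sealed lead-acid rated at pack voltage

ups_wh = ups_mah / 1000 * ups_cell_v   # about 28.2 Wh
sla_wh = sla_ah * sla_v                # 84 Wh

print(f"UPS: {ups_wh:.1f} Wh, lead-acid battery: {sla_wh:.0f} Wh")
print(f"Naive Ah comparison overstates the UPS by {sla_v / ups_cell_v:.2f}x")
print(f"The UPS holds about {ups_wh / sla_wh:.0%} of the battery's energy")
```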
In other units of electric charge
One ampere-hour is equal to (up to 4 significant figures):
3,600 coulombs
2.247 × 10²² elementary charges
0.03731 faradays
1.079 × 10¹³ statcoulombs (CGS-ESU equivalent)
360 abcoulombs (CGS-EMU equivalent)
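These equivalences can be cross-checked by dividing 3,600 coulombs by the size of each unit in coulombs; a minimal sketch using standard constant values:

```python
AH = 3600.0            # coulombs in one ampere-hour
e = 1.602176634e-19    # elementary charge, C
F = 96485.332          # faraday (charge of one mole of electrons), C
statC = 3.335641e-10   # statcoulomb, C (CGS-ESU)
abC = 10.0             # abcoulomb, C (CGS-EMU)

print(f"{AH / e:.4g} elementary charges")   # 2.247e+22
print(f"{AH / F:.5f} faradays")             # 0.03731
print(f"{AH / statC:.4g} statcoulombs")     # 1.079e+13
print(f"{AH / abC:.0f} abcoulombs")         # 360
```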
Examples
An AA size dry cell has a capacity of about 2,000 to 3,000 milliampere-hours.
An average smartphone battery usually has between 2,500 and 4,000 milliampere-hours of electric capacity.
Automotive car batteries vary in capacity but a large automobile propelled by an internal combustion engine would have about a 50-ampere-hour battery capacity.
Since one ampere-hour can produce 0.336 grams of aluminium from molten aluminium chloride, producing a ton of aluminium requires the transfer of at least 2.98 million ampere-hours.
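A brief sketch reproducing these figures from the electrochemistry (three electrons per deposited aluminium atom; the molar mass and Faraday constant are standard values):

```python
MOLAR_MASS_AL = 26.98        # g/mol
FARADAY = 96485.332          # C per mole of electrons
ELECTRONS_PER_ATOM = 3       # Al(3+) + 3e- -> Al
COULOMBS_PER_AH = 3600

grams_per_ah = MOLAR_MASS_AL / (ELECTRONS_PER_ATOM * FARADAY) * COULOMBS_PER_AH
ah_per_tonne = 1_000_000 / grams_per_ah

print(f"{grams_per_ah:.3f} g of aluminium per ampere-hour")        # ~0.336
print(f"{ah_per_tonne / 1e6:.2f} million ampere-hours per tonne")  # ~2.98
```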
See also
Electrochemical equivalent
Kilowatt-hour (kW⋅h)
References
Units of electrical charge
Non-SI metric units | Ampere-hour | [
"Physics",
"Mathematics"
] | 779 | [
"Physical quantities",
"Electric charge",
"Non-SI metric units",
"Quantity",
"Units of electrical charge",
"Units of measurement"
] |
4,253,083 | https://en.wikipedia.org/wiki/Mobile%20web | The mobile web comprises mobile browser-based World Wide Web services accessed from handheld mobile devices, such as smartphones or feature phones, through a mobile or other wireless network.
History and development
Traditionally, the World Wide Web has been accessed via fixed-line services on laptops and desktop computers. However, the web is now more accessible by portable and wireless devices. An early 2010 ITU (International Telecommunication Union) report said that, at then-current growth rates, web access by people on the go via laptops and smart mobile devices was likely to exceed web access from desktop computers within the following five years. In January 2014, mobile internet use exceeded desktop use in the United States. The shift to mobile Web access has accelerated since 2007 with the rise of larger multitouch smartphones, and since 2010 with the rise of multitouch tablet computers. Both platforms provide better Internet access, screens, and mobile browsers, or application-based user Web experiences than previous generations of mobile devices. Web designers may work separately on such pages, or pages may be automatically converted, as in Mobile Wikipedia. Faster speeds, smaller, feature-rich devices, and a multitude of applications continue to drive explosive growth for mobile internet traffic. The 2017 Visual Networking Index (VNI) report produced by Cisco Systems forecasts that by 2021, there will be 5.5 billion global mobile users (up from 4.9 billion in 2016). Additionally, the same 2017 VNI report forecasts that average access speeds will increase by roughly three times, from 6.8 Mbit/s to 20 Mbit/s, in that same period, with video comprising the bulk of the traffic (78%).
According to BuzzCity, the mobile internet increased by 30% from Q1 to Q2 2011. In July 2012, approximately 10.5% of all web traffic occurred through mobile devices (up from 4% in December 2010).
The distinction between mobile web applications and native applications is anticipated to become increasingly blurred, as mobile browsers gain direct access to the hardware of mobile devices (including accelerometers and GPS chips), and the speed and abilities of browser-based applications improve. Persistent storage and access to sophisticated user interface graphics functions may further reduce the need for the development of platform-specific native applications.
The mobile web has also been called Web 3.0, drawing parallels to the changes users were experiencing as Web 2.0 websites proliferated.
The mobile web was first popularized by the Silicon Valley company, Unwired Planet. In 1997, Unwired Planet, Nokia, Ericsson, and Motorola started the WAP Forum to create and harmonize the standards to ease the transition to bandwidth networks and small display devices. The WAP standard was built on a three-layer, middleware architecture that fueled the early growth of the mobile web. It was made virtually irrelevant after the development and adoption of faster networks, larger displays, and advanced smartphones based on Apple's iOS and Google's Android software.
Mobile points of access
Mobile Internet refers to Internet access and, mainly, Internet usage via a cellular telephone service provider or mobile wireless network. This wireless access can easily switch to a different wireless Internet (radio) tower as the mobile device user moves across the service area. Cellular base stations that connect through the telephone system are more expensive to provide than wireless base stations that connect directly to the network of an internet service provider. A mobile broadband modem may "tether" the smartphone to one or more devices to provide access to the Internet via the protocols that cellular telephone service providers offer.
Mobile standards
The Mobile Web Initiative (MWI) was set up by the W3C to develop the best practices and technologies relevant to the mobile web. The goal of the initiative is to make browsing the web from mobile devices more reliable and accessible. The main aim is to evolve standards of data formats from Internet providers that are tailored to the specifications of particular mobile devices. The W3C has published guidelines for mobile content, and aimed to address the problem of device diversity by establishing a technology to support a repository of device descriptions.
W3C developed a validating scheme to assess the readiness of content for the mobile web through its mobileOK Scheme, which aims to help content developers determine if their content is web-ready. The W3C guidelines and mobileOK approach have faced criticism. mTLD, the registry for .mobi, released a free testing tool called the MobiReady Report (see mobiForge) to analyze the mobile readiness of websites.
Development
Access to the mobile web was first commercially offered in 1996, in Finland, on the Nokia 9000 Communicator phone via the Sonera and Radiolinja networks. The first commercial launch of a mobile-specific browser-based web service was in 1999 in Japan when i-mode was launched by NTT DoCoMo.
The mobile web primarily utilizes lightweight pages written in Extensible Hypertext Markup Language (XHTML) or Wireless Markup Language (WML) to deliver content to mobile devices. Many new mobile browsers are moving beyond these limits by supporting a wider range of Web formats, including variants of HTML commonly found on the desktop web.
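As a hedged illustration of one common historical approach, a server might inspect the User-Agent header and return a lightweight XHTML page to mobile browsers while desktop browsers receive the full page. The keyword list, port, and markup below are purely illustrative assumptions, not details from this article:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative keywords; real deployments used much longer device lists.
MOBILE_KEYWORDS = ("Mobile", "Android", "iPhone", "Opera Mini", "Nokia")

LIGHTWEIGHT_PAGE = b"""<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Mobile view</title></head>
  <body><p>Compact, image-light content for small screens.</p></body>
</html>"""

FULL_PAGE = b"<html><body><h1>Full desktop page</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        is_mobile = any(keyword in ua for keyword in MOBILE_KEYWORDS)
        body = LIGHTWEIGHT_PAGE if is_mobile else FULL_PAGE
        self.send_response(200)
        self.send_header("Content-Type",
                         "application/xhtml+xml" if is_mobile else "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8080; the port is an arbitrary choice for the sketch.
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```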
Growth
Articles in 2007-2008 claiming that half the world had mobile phones were slightly misleading: the real story at the time was that the number of mobile phone subscriptions had reached half the population of the world. In reality, many people have more than one subscription. For example, in Hong Kong, Italy and Ukraine, the mobile phone penetration rate had passed 140% by 2009. The number of unique mobile phone users reached half the population of the planet in 2009, when the ITU reported that mobile Internet data connections were following the growth of mobile phone connections, albeit at a lower rate. In 2009 Yankee Group reported that 29% of all mobile phone users globally were accessing browser-based internet content on their phones. According to the BBC, in 2020 there were over 5 billion mobile phone users in the world. According to Statista there were 1.57 billion smartphone owners in 2014 and 2.32 billion in 2017.
Many users in Europe and the United States are already users of the fixed internet when they first try the same experience on a mobile phone. Meanwhile, in other parts of the world, such as India, their first usage of the internet is on a mobile phone. Growth is fastest in parts of the world where the personal computer (PC) is not the first user experience of the internet. India, South Africa, Indonesia, and Saudi Arabia are seeing the fastest growth in mobile internet usage. To a great extent, this is due to the rapid adoption of mobile phones themselves. For example, Morgan Stanley reports that the highest mobile phone adoption growth in 2006 was in Pakistan and India. Mobile internet has also been adopted in West Africa, and China had 155 million mobile internet users as of June 2009.
Top-level domain
The .mobi sponsored top-level domain was launched specifically for the mobile Internet by a consortium of companies including Google, Microsoft, Nokia, Samsung, and Vodafone. By forcing sites to comply with mobile web standards, .mobi tries to ensure visitors a consistent and optimized experience on their mobile device. However, this domain has been criticized by several big names, including Tim Berners-Lee of the W3C, who said that providing different content to different devices "breaks the Web in a fundamental way".
Accelerated Mobile Pages
In the fall of 2015, Google announced it would be rolling out an open source initiative called "Accelerated Mobile Pages" or AMP. The goal of this project is to improve the speed and performance of content-rich pages which include video, animations, and graphics. Since the majority of the population now consumes the web through tablets and smartphones, having web pages that are optimized for these devices is the primary motivation behind AMP.
The three main types of AMP are AMP HTML, AMP JS, and Google AMP Cache.
As of February 2018, Google requires the canonical page content to match the content on accelerated mobile pages.
Limitations
Mobile web access may suffer from interoperability and usability problems. Interoperability issues stem from the platform fragmentation of mobile devices, mobile operating systems, and browsers. Usability problems are centered on the small physical size of mobile phone form factors, which limit display resolution and user input. Limitations vary depending on the device, and newer smartphones overcome some of these restrictions, but problems which may be encountered include:
Small screen size – This makes it difficult or impossible to see text and graphics dependent on the standard size of a desktop computer screen. To display more information, smartphone screen sizes have been getting bigger.
Lack of windows – On a desktop computer, the ability to open more than one window at a time allows for multi-tasking and easy revert to a previous page. Historically on mobile web, only one page could be displayed at a time, and pages could only be viewed in the sequence they were originally accessed. Opera Mini was among the first allowing multiple windows, and browser tabs have become commonplace but few mobile browsers allow overlapping windows on the screen.
Navigation – Navigation is a problem for websites not optimized for mobile devices as the content area is large, the screen size is small, and there is no scroll wheel or hover box feature.
Lack of JavaScript and cookies – Most devices do not support client-side scripting and storage of cookies (smartphones excluded), which are now widely used in most web sites to enhance the user experience, facilitating the validation of data entered by the page visitor, etc. This also results in web analytics tools being unable to uniquely identify visitors using mobile devices.
Types of pages accessible – Many sites that can be accessed on a desktop cannot on a mobile device. Many devices cannot access pages with a secured connection, Flash, or other similar software, PDFs, or video sites, although as of 2011, this has been changing.
Speed – On most mobile devices, the speed of service is slow, sometimes slower than dial-up Internet access.
Broken pages – On many devices, a single page as viewed on a desktop is broken into segments, each treated as a separate page. This further slows navigation.
Compressed pages – Many pages, in their conversion to mobile format, are squeezed into an order different from how they would customarily be viewed on a desktop computer.
Size of messages – Many devices have limits on the number of characters that can be sent in an email message.
Cost – The access and bandwidth charges levied by cellphone networks can be high if there is no flat fee per month.
Location of mobile user – If the user is abroad, the flat fee per month usually does not apply.
Access to device capabilities – The inability of mobile web applications to access the local capabilities on the mobile device can limit their ability to provide the same features as native applications.
See also
.mobi
Mobile app
Mobile application server
Mobile browser
Mobile dating
Mobile content
Mobile advertising
Responsive web design
Wireless Application Protocol
References
External links
W3C Mobile Web Best Practices
W3C Mobile Web Initiative (MWI)
Internet Standards | Mobile web | [
"Technology"
] | 2,270 | [
"Wireless networking",
"Mobile web"
] |
4,253,446 | https://en.wikipedia.org/wiki/Human-based%20computation | Human-based computation (HBC), human-assisted computation, ubiquitous human computing or distributed thinking (by analogy to distributed computing) is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human–computer interaction. For computationally difficult tasks such as image recognition, human-based computation plays a central role in training Deep Learning-based Artificial Intelligence systems. In this case, human-based computation has been referred to as human-aided artificial intelligence.
In traditional computation, a human employs a computer to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, then collects, interprets, and integrates their solutions. This turns hybrid networks of humans and computers into "large scale distributed computing networks", where code is partially executed in human brains and on silicon-based processors.
Early work
Human-based computation (apart from the historical meaning of "computer") research has its origins in the early work on interactive evolutionary computation (EC). The idea behind interactive evolutionary algorithms has been attributed to Richard Dawkins; in the Biomorphs software accompanying his book The Blind Watchmaker (Dawkins, 1986) the preference of a human experimenter is used to guide the evolution of two-dimensional sets of line segments. In essence, this program asks a human to be the fitness function of an evolutionary algorithm, so that the algorithm can use human visual perception and aesthetic judgment to do something that a normal evolutionary algorithm cannot do. However, it is difficult to get enough evaluations from a single human when evolving more complex shapes. Victor Johnston and Karl Sims extended this concept by harnessing the power of many people for fitness evaluation (Caldwell and Johnston, 1991; Sims, 1991). As a result, their programs could evolve beautiful faces and pieces of art appealing to the public. These programs effectively reversed the common interaction between computers and humans: the computer is no longer an agent of its user but instead a coordinator aggregating the efforts of many human evaluators. These and other similar research efforts became the topic of research in aesthetic selection or interactive evolutionary computation (Takagi, 2001); however, the scope of this research was limited to outsourcing evaluation and, as a result, it did not fully explore the potential of outsourcing.
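A minimal sketch of this aesthetic-selection idea, assuming a console prompt stands in for rendering each candidate and collecting a human rating; the genome size, population size, and mutation scheme are illustrative assumptions rather than details of the cited systems:

```python
import random

GENOME_LEN = 8     # e.g., parameters of a biomorph-like drawing
POP_SIZE = 6
GENERATIONS = 3

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.2):
    # Small Gaussian perturbations applied gene-by-gene.
    return [g + random.gauss(0, 0.3) if random.random() < rate else g for g in genome]

def human_fitness(genome):
    # In a real system the genome would be rendered (a face, a biomorph, ...)
    # and the user would rate what they see; here a console rating stands in.
    print("Candidate:", [round(g, 2) for g in genome])
    return float(input("Rate this candidate from 0 to 10: "))

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    print(f"-- Generation {generation + 1} --")
    ranked = sorted(population, key=human_fitness, reverse=True)
    parents = ranked[: POP_SIZE // 2]                  # human-driven selection
    offspring = [mutate(random.choice(parents))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring                   # computer-driven innovation

print("Best candidate found:", [round(g, 2) for g in population[0]])
```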
The concept of the automatic Turing test pioneered by Moni Naor (1996) is another precursor of human-based computation. In Naor's test, the machine can control the access of humans and computers to a service by challenging them with a natural language processing (NLP) or computer vision (CV) problem in order to identify the humans among them. The set of problems is chosen so that they currently have no algorithmic solution that is both effective and efficient; if such an algorithm existed, it could be easily performed by a computer, thus defeating the test. In fact, Naor was being modest in calling this an automated Turing test: the imitation game described by Alan Turing (1950) did not propose using CV problems and proposed only a specific NLP task, while the Naor test identifies and explores a large class of problems, not necessarily from the domain of NLP, that could be used for the same purpose in both automated and non-automated versions of the test.
Finally, Human-based genetic algorithm (HBGA) encourages human participation in multiple different roles. Humans are not limited to the role of evaluator or some other predefined role, but can choose to perform a more diverse set of tasks. In particular, they can contribute their innovative solutions into the evolutionary process, make incremental changes to existing solutions, and perform intelligent recombination. In short, HBGA allows humans to participate in all operations of a typical genetic algorithm. As a result of this, HBGA can process solutions for which there are no computational innovation operators available, for example, natural languages. Thus, HBGA obviated the need for a fixed representational scheme that was a limiting factor of both standard and interactive EC. These algorithms can also be viewed as novel forms of social organization coordinated by a computer, according to Alex Kosorukoff and David Goldberg.
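In contrast with interactive EC, an HBGA also outsources the innovation operators to people. A rough console-based sketch of such a coordinator follows; the prompts and data structures are assumptions for illustration, not Kosorukoff's implementation:

```python
import random

def ask(prompt):
    return input(prompt).strip()

# Human innovation: seed the population with free-form solutions (e.g. slogans).
solutions = [ask(f"Propose solution #{i + 1}: ") for i in range(3)]

for generation in range(2):
    print(f"\n-- Generation {generation + 1} --")
    for i, s in enumerate(solutions):
        print(f"[{i}] {s}")

    # Human selection: the weakest solution is removed from the pool.
    worst = int(ask("Index of the weakest solution to drop: "))
    solutions.pop(worst)

    # Human recombination/mutation: improve or combine two surviving solutions.
    if len(solutions) >= 2:
        a, b = random.sample(solutions, 2)
    else:
        a = b = solutions[0]
    print(f"Combine or improve:\n  A: {a}\n  B: {b}")
    solutions.append(ask("Your new or revised solution: "))

print("\nFinal pool:", solutions)
```

The computer here only sequences the workflow and stores the pool; every innovation and selection decision is made by people, which is the defining feature of the HH class described below.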
Classes of human-based computation
Human-based computation methods combine computers and humans in different roles. Kosorukoff (2000) proposed a way of describing the division of labor in computation that groups human-based methods into three classes. The following table uses the evolutionary computation model to describe four classes of computation, three of which rely on humans in some role. For each class, a representative example is shown. The classification is in terms of the roles (innovation or selection) performed in each case by humans and computational processes. This table is a slice of a three-dimensional table; the third dimension defines whether the organizational function is performed by humans or a computer. Here it is assumed to be performed by a computer.
Classes of human-based computation from this table can be referred to by two-letter abbreviations: HC, CH, HH. Here the first letter identifies the type of agents performing innovation, and the second letter specifies the type of selection agents. In some implementations (wiki is the most common example), human-based selection functionality might be limited; this can be shown with a lowercase h.
Methods of human-based computation
(HC) Darwin (Vyssotsky, Morris, McIlroy, 1961) and Core War (Jones, Dewdney, 1984): These are games where several programs written by people compete in a tournament (computational simulation) in which the fittest programs survive. Authors of the programs copy, modify, and recombine successful strategies to improve their chances of winning.
(CH) Interactive EC (Dawkins, 1986; Caldwell and Johnston, 1991; Sims, 1991) IEC enables the user to create an abstract drawing only by selecting his/her favorite images, so the human only performs fitness computation and software performs the innovative role. [Unemi 1998] Simulated breeding style introduces no explicit fitness, just selection, which is easier for humans.
(HH2) Wiki (Cunningham, 1995) enabled editing of web content by multiple users, i.e., it supported two types of human-based innovation (contributing a new page and making incremental edits to it). However, the selection mechanism was absent until 2002, when the wiki was augmented with a revision history allowing unhelpful changes to be reversed. This provided a means of selection among several versions of the same page and turned the wiki into a tool supporting collaborative content evolution (it would be classified as a human-based evolution strategy in EC terms).
(HH3) Human-based genetic algorithm (Kosorukoff, 1998) uses both human-based selection and three types of human-based innovation (contributing new content, mutation, and recombination). Thus, all operators of a typical genetic algorithm are outsourced to humans (hence the origin of human-based). This idea is extended to integrating crowds with genetic algorithm to study creativity in 2011.
(HH1) Social search applications accept contributions from users and attempt to use human evaluation to select the fittest contributions that get to the top of the list. These use one type of human-based innovation. Early work was done in the context of HBGA. Digg and Reddit are recently popular examples. See also Collaborative filtering.
(HC) Computerized tests. A computer generates a problem and presents it to evaluate a user. For example, CAPTCHA tells human users from computer programs by presenting a problem that is supposedly easy for a human and difficult for a computer. While CAPTCHAs are effective security measures for preventing automated abuse of online services, the human effort spent solving them is otherwise wasted. The reCAPTCHA system makes use of these human cycles to help digitize books by presenting words from scanned old books that optical character recognition cannot decipher.
(HC) Interactive online games: These are programs that extract knowledge from people in an entertaining way.
(HC) "Human Swarming" or "Social Swarming". The UNU platform for human swarming establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence.
(NHC) Natural Human Computation involves leveraging existing human behavior to extract computationally significant work without disturbing that behavior. NHC is distinguished from other forms of human-based computation in that rather than involving outsourcing computational work to human activity by asking humans to perform novel computational tasks, it involves taking advantage of previously unnoticed computational significance in existing behavior.
Incentives to participation
In different human-based computation projects people are motivated by one or more of the following.
Receiving a fair share of the result
Direct monetary compensation (e.g. in Amazon Mechanical Turk, ChaCha Search guide, Mahalo.com Answers members)
Opportunity to participate in the global information economy
Desire to diversify their activity (e.g. "people aren't asked in their daily lives to be creative")
Esthetic satisfaction
Curiosity, desire to test if it works
Volunteerism, desire to support a cause of the project
Reciprocity, exchange, mutual help
Desire to be entertained with the competitive or cooperative spirit of a game
Desire to communicate and share knowledge
Desire to share a user innovation to see if someone else can improve on it
Desire to game the system and influence the final result
Enjoyment
Increasing online reputation/recognition
Many projects had explored various combinations of these incentives. See more information about motivation of participants in these projects in Kosorukoff, and Von Hippel.
Human-based computation as a form of social organization
Viewed as a form of social organization, human-based computation often surprisingly turns out to be more robust and productive than traditional organizations. The latter depend on obligations to maintain their more or less fixed structure, be functional and stable. Each of them is similar to a carefully designed mechanism with humans as its parts. However, this limits the freedom of their human employees and subjects them to various kinds of stresses. Most people, unlike mechanical parts, find it difficult to adapt to some fixed roles that best fit the organization. Evolutionary human-computation projects offer a natural solution to this problem. They adapt organizational structure to human spontaneity, accommodate human mistakes and creativity, and utilize both in a constructive way. This leaves their participants free from obligations without endangering the functionality of the whole, making people happier. There are still some challenging research problems that need to be solved before we can realize the full potential of this idea.
The algorithmic outsourcing techniques used in human-based computation are much more scalable than the manual or automated techniques used to manage outsourcing traditionally. It is this scalability that allows the effort to be easily distributed among thousands (or more) of participants. It was suggested recently that this mass outsourcing is sufficiently different from traditional small-scale outsourcing to merit a new name: crowdsourcing. However, others have argued that crowdsourcing ought to be distinguished from true human-based computation. Crowdsourcing does indeed involve the distribution of computation tasks across a number of human agents, but Michelucci argues that this is not sufficient for it to be considered human computation. Human computation requires not just that a task be distributed across different agents, but also that the set of agents across which the task is distributed be mixed: some of them must be humans, but others must be traditional computers. It is this mixture of different types of agents in a computational system that gives human-based computation its distinctive character. Some instances of crowdsourcing do indeed meet this criterion, but not all of them do.
Human computation organizes workers through a task market with APIs, task prices, and software-as-a-service protocols that allow employers/requesters to receive data produced by workers directly into IT systems. As a result, many employers attempt to manage workers automatically through algorithms rather than responding to workers on a case-by-case basis or addressing their concerns. Responding to workers is difficult to scale to the employment levels enabled by human computation microwork platforms. Workers in the Mechanical Turk system, for example, have reported that human computation employers can be unresponsive to their concerns and needs.
Applications
Human assistance can be helpful in solving any AI-complete problem, which by definition is a task which is infeasible for computers to do but feasible for humans. Specific practical applications include:
Internet search, improving ranking of results by combining automated ranking with human editorial input.
Distributed Proofreaders
Analysis of astronomical images:
Galaxy Zoo
Stardust@home
General scientific computing platforms:
Zooniverse (citizen science project)
Berkeley Open System for Skill Aggregation, by analogy with the distributed computing project Berkeley Open Infrastructure for Network Computing
Criticism
Human-based computation has been criticized as exploitative and deceptive with the potential to undermine collective action.
In social philosophy it has been argued that human-based computation is an implicit form of online labour. The philosopher Rainer Mühlhoff distinguishes five different types of "machinic capture" of human microwork in "hybrid human-computer networks": (1) gamification, (2) "trapping and tracking" (e.g. CAPTCHAs or click-tracking in Google search), (3) social exploitation (e.g. tagging faces on Facebook), (4) information mining and (5) click-work (such as on Amazon Mechanical Turk). Mühlhoff argues that human-based computation often feeds into Deep Learning-based Artificial Intelligence systems, a phenomenon he analyzes as "human-aided artificial intelligence".
See also
Citizen science
Collaborative intelligence
Collaborative Innovation Networks
Collaborative human interpreter
Crowdsourcing
Game with a purpose (or GWAP)
Global brain
Human computer
Human Computer Information Retrieval
Social software
Social computing
Social organization
Symbiotic intelligence
References | Human-based computation | [
"Technology"
] | 2,893 | [
"Information systems",
"Human-based computation"
] |
4,253,757 | https://en.wikipedia.org/wiki/The%20Secret%20%282006%20film%29 | The Secret is a 2006 Australian-American spirituality documentary consisting of a series of interviews designed to demonstrate the New Thought "law of attraction", the belief that everything/one wants or needs can be satisfied by believing in an outcome, repeatedly thinking about it, and maintaining positive emotional states to "attract" the desired outcome.
The film and the subsequent publication of the book of the same name attracted interest from media figures such as Oprah Winfrey, Ellen DeGeneres and Larry King.
Synopsis
The Secret, described as a self-help film, uses a documentary format to present a concept titled "law of attraction". As described in the film, the "Law of Attraction" hypothesis posits that feelings and thoughts can attract events, feelings, and experiences, from the workings of the cosmos to interactions among individuals in their physical, emotional, and professional affairs. The film also suggests that there has been a strong tendency by those in positions of power to keep this central principle hidden from the public.
Foundations in New Thought ideas
The authors of The Secret cite the New Thought movement, which began in the 19th century, as the historical basis for their ideas.
The New Thought book The Science of Getting Rich by Wallace Wattles, the source Rhonda Byrne cites as inspiration for the film, was preceded by numerous other New Thought books, including the 1906 book Thought Vibration or the Law of Attraction in the Thought World by William Walker Atkinson, editor of New Thought magazine. Other New Thought books Byrne is purported to have read include Prentice Mulford's 19th-century Thoughts Are Things and Robert Collier's Secret of the Ages from 1926.
Carolyn Sackariason of the Aspen Times, when commenting about Byrne's intention to share The Secret with the world, identifies the Rosicrucians as keepers of The Secret.
Production
The Secret was created by Prime Time Productions of Melbourne, Australia with executive producer Rhonda Byrne, producer Paul Harrington, and director Drew Heriot. Gozer Media of Collingwood, a suburb of Melbourne, is the design house responsible for the visual style and feel of the film and its companion book. Byrne's company TS Production LLC, a Hungarian company, is responsible for marketing and distribution of the film and book. Byrne commented about the research she did prior to making the film:
Byrne's inspiration for creating The Secret came from reading the 1910 book The Science of Getting Rich by Wallace D. Wattles. The film was done as a project for Australia's 9Network. Nine put up less than 25% of the $3 million project with additional funding from mortgaging Byrne's home and from an investment by Bob Rainone, "a former Internet executive in Chicago". Rainone became the CEO of one of Byrne's companies, The Secret LLC, and is described by Byrne as "delivered to us from heaven".
The interviews were conducted and filmed throughout July and August 2005, with editing "effectively completed by Christmas time". About 55 teachers and authors were interviewed at locations including Chicago, Aspen, Alaska, and a Mexican Riviera cruise (interviewing Esther Hicks). The film uses 24 of these teachers in the extended version. The first edition featured a 25th teacher, Hicks, known "as the most prominent interpreter of the Law of Attraction". After the first DVD release, Hicks declined to continue with the project. Her 10% share of sales netted the Hickses $500,000. As a result, Hicks' scenes are instead narrated by Lisa Nichols and Marci Shimoff. No other "secret teachers" received compensation for their appearance in the film, as revealed by Bob Proctor in an interview on Nightline.
What the Bleep Do We Know!? producer, director and screenwriter Betsy Chasse interviewed Secret co-producer Paul Harrington, who gave this description of Byrne's production methods: "We used the law of attraction during the making of the program. We went very unconventional, in terms of scheduling and budgeting. We allowed things to come to us... We just had faith that things would come to us."
9Network, after viewing the completed film, declined to broadcast it. A new contract was negotiated with all DVD sales going to Byrne's companies (Prime Time and The Secret LLC). In hindsight, Len Downs of Channel Nine commented, "we looked at it and we didn't deem it as having broad, mass appeal". It was eventually broadcast on 3 February 2007 at 10:30 pm. Downs reported that "it didn't do all that well". The film was sold on DVD and also online through streaming media.
Marketing
Packaging
The film has been described as a "slick repackaging" of the Law of Attraction, a concept originating in the New Thought ideas of the late 19th century. In producing the film, the law was intentionally "packaged" with a focus on "wealth enhancement" — differing from the more spiritual orientation of the New Thought Movement. One of the film's backers stated, "we desired to hit the masses, and money is the number one thing on the masses' minds". A review in salon.com described the packaging of the products related to the film as having "a look... that conjures a 'Da Vinci Code' aesthetic, full of pretty faux parchment, quill-and-ink fonts and wax seals".
Choosing to package the film's theme as a "secret" has been called an important component of the film's popularity. Donavin Bennes, a buyer who specializes in metaphysics for Borders, stated "We all want to be in on a secret. But to present it as the secret, that was brilliant."
Marketing campaign
The movie was advertised on the Internet using "tease" advertising and viral marketing; techniques in which the specific details of The Secret were not revealed. Additionally, Prime Time Productions granted written permission to individuals or companies, via application at the official site, to provide free screenings of the film to public audiences. Optionally, the DVD could be sold at these screenings.
The book
A companion book by Rhonda Byrne was published called The Secret (Simon & Schuster, 2006). The Secret was featured on two episodes of Oprah — and as the film reached number one on the Amazon DVD chart in March 2007, the book version of The Secret reached number one on The New York Times bestseller list. For much of February through April 2007, both the book and the DVD versions were #1 or #2 at Amazon, Barnes & Noble, and Borders. Simon & Schuster released a second printing of 2 million copies of The Secret — "the biggest order for a second printing in its history," while Time reported brisk sales of the DVD through New Age bookstores, and New Thought churches, such as Unity and Agape International Spiritual Center. Like the movie, the book has also experienced a great deal of controversy and criticism for its claims, and has been parodied on several TV shows.
Reception
Gross
The estimated domestic DVD sales in the US in 2007 exceed $56 million, and eventually topped $65 million.
Critical response
The Secret has been described as a "self-help phenomenon", a "publishing phenomenon", and a "cultural phenomenon".
Several critics wrote about the Secret in relation to self-help in general. Julie Mason, of the Ottawa Citizen, wrote that word of mouth about the film spread through Pilates classes, "get-rich-quick websites" and personal-motivation blogs. Jane Lampman, of the Christian Science Monitor, described The Secret as a brand promoting Secret-related teachers, seminars and retreats. According to Jill Culora, of the New York Post, fans of The Secret have posted on a wide range of blogs and Web forums accounts of how shifting from negative to positive thoughts made big improvements in their lives.
Jerry Adler of Newsweek called it "breathless pizzazz" for a tired self-help genre; "emphatically cinematic" and "driven by images and emotions rather than logic"; a blend of Tony Robbins and The Da Vinci Code; and "the Unsolved Mysteries of infomercials".
In 2007, The Secret was reportedly being discussed in "e-mails, in chat rooms, around office cubicles, [and] on blind dates". It is recognized as having a broad and varied impact on culture.
American TV host Oprah Winfrey is a proponent of the film and later the book. On The Larry King Show she said that the message of The Secret is the message she had been trying to share with the world on her show for the past 21 years. Author Rhonda Byrne was later invited to her show along with people who swear by The Secret.
Some critics were bothered by the film's focus on questionable wealth enhancement, including promises that the universe will give you material goods "like having the universe as your catalog."
According to a March 2007 issue of Skeptical Inquirer, the central idea of the film "has [no] basis in scientific reality", despite invoking scientific concepts.
Within businesses using the DVD for employee-training and morale-building, author Barbara Ehrenreich called it "a gimmick" and "disturbing", like "being indoctrinated into a cult".
UFC former champion Conor McGregor claims The Secret played a role in his rise to fame. McGregor has said his first reaction on watching the DVD version was: “This is bullshit — but then something clicked for me.” He and girlfriend Dee Devlin, who manages his finances, started focusing on small things they wanted, such as a parking space closest to the doors of a local shopping centre. He said: “We would be driving to the shop and visualising the exact car park space. And then we’d be able to get it every time.”
They then began visualizing wealth, fame and championships.
Parodies
The concept was parodied on Parks and Recreation, The Chaser's War on Everything, It's Always Sunny in Philadelphia, The Simpsons, Boston Legal and Saturday Night Live.
Legal controversies
A Current Affair, an Australian newsmagazine airing on The Secret's co-funder 9Network, carried a 14 May 2007 segment titled "The Secret Stoush". Australian author Vanessa J. Bonnette is interviewed, and Bonnette—when referring to the book version of The Secret—asserts, "that is my work and Rhonda Byrne has stolen it". Bonnette and a reporter compare her book to Byrne's on the use of the "TV transmission" analogy. Bonnette's book, Empowered for the New Era was released in 2007 as a second edition. Bonnette, at her website, claims 100 instances of plagiarism. Byrne's marketing company, TS Production LLC, has responded with a lawsuit to restrain Bonnette. From the statement of claim:
David Schirmer, the "investment guru"—and only Australian—in the film, has his business activities under investigation by the Australian Securities Investment Commission (ASIC). This was reported on 1 June 2007 by A Current Affair in a segment titled "The Secret Con" with those words and The Secret logo appearing in the background behind the newscaster. The show initially confronted Schirmer in a segment titled "The Secret Exposed", aired on 28 May 2007, with complaints from people who say Schirmer owed them money.
On 12 February 2008, Bob Proctor's company, LifeSuccess Productions, L.L.C. successfully sued Schirmer, his wife Lorna, and their several companies (including LifeSuccess Pacific Rim PTY LTD, Schirmer Financial Management PTY LTD, LifeSuccess Productions PTY LTD, Excellence in Marketing PTY LTD, and Wealth By Choice PTY LTD) for "misleading or deceptive conduct".
In August 2008, The Australian reported that director Heriot and Internet consultant Dan Hollings were in a legal dispute with Byrne over pay from the project.
Footage featuring Esther Hicks was removed from the "Extended Edition" of The Secret after Byrne rescinded the original contract covering Hicks' participation.
Releases
Paul Harrington, the producer for the film, reported that broadcast TV—instead of the Internet—was initially planned as the medium for the first release:
Release dates
The Secret premiere was broadcast through the Internet on 23 March 2006 using Vividas technology. It is still available either on a pay-per-view basis via streaming media or on DVD at the official site for the film. A new extended edition of The Secret was released to the public on 1 October 2006. The Australian television premiere was on Nine Network on Saturday, 3 February 2007.
Future releases and spin-offs
Plans were announced in 2007 to produce a sequel to The Secret and a spin-off TV series. The drama film The Secret: Dare to Dream, starring Katie Holmes and Josh Lucas, was released on July 31, 2020.
See also
Affirmative prayer
As a Man Thinketh
Cosmic ordering
Just-world fallacy
Magical thinking
ONE: The Movie
Positive mental attitude
Pygmalion effect
Quantum mysticism
The Kybalion
Think and Grow Rich
Wishful thinking
References
Further reading
Doyle, Bob – Featured in the movie 'The Secret'. Author of Wealth Beyond Reason Program
External links
"The Secret Behind The Secret – What is Attracting Millions to the Law of Attraction?", from the Skeptic Magazine
2000s English-language films
2006 films
Films about spirituality
New Thought mass media
Pseudoscience documentary films
Quantum mysticism | The Secret (2006 film) | [
"Physics"
] | 2,760 | [
"Quantum mechanics",
"Quantum mysticism"
] |
4,253,861 | https://en.wikipedia.org/wiki/Double%20subscript%20notation | In engineering, double-subscript notation is a notation used to indicate some variable between two points (each point being represented by one of the subscripts). In electronics, the notation is usually used to indicate the direction of current or voltage, while in mechanical engineering it is sometimes used to describe the force or stress between two points, and sometimes even a component that spans between two points (like a beam on a bridge or truss). Although there are many cases where multiple subscripts are used, they are not necessarily called double subscript notation specifically.
Electronic usage
IEEE standard 255-1963, "Letter Symbols for Semiconductor Devices", defined eleven original quantity symbols expressed as abbreviations.
This is the basis for a convention to standardize the directions of double-subscript labels. The following uses transistors as an example, but shows how the direction is read generally. The convention works like this:
VCB represents the voltage from C to B. In this case, C would denote the collector end of a transistor, and B would denote the base end of the same transistor. This is the same as saying "the voltage drop from C to B", though this applies the standard definitions of the letters C and B. This convention is consistent with IEC 60050-121.
ICE would in turn represent the current from C to E. In this case, C would again denote the collector end of a transistor, and E would denote the emitter end of the transistor. This is the same as saying "the current in the direction going from C to E".
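To make the direction convention concrete, the following short Python sketch treats each double-subscript voltage as the difference of two single-node potentials, so that VCB = VC − VB and reversing the subscripts flips the sign. The node names and numeric values are hypothetical and chosen only for illustration; they are not taken from the IEEE standard.

# Illustrative sketch: double-subscript voltages as differences of node potentials.
# Node names and example values are hypothetical, chosen only to show the convention.
node_voltages = {"C": 5.0, "B": 0.7, "E": 0.0}  # volts, against a common reference

def v(a, b):
    """V_ab: voltage at node a measured with respect to node b."""
    return node_voltages[a] - node_voltages[b]

print("V_CB =", v("C", "B"))   # 4.3 V, the drop from collector to base
print("V_CE =", v("C", "E"))   # 5.0 V
print("V_BC =", v("B", "C"))   # -4.3 V; reversing the subscripts flips the sign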
Power supply pins on integrated circuits utilize the same letters for denoting what kind of voltage the pin would receive. For example, a power input labeled VCC would be a positive input that would presumably connect to the collector pin of a BJT transistor in the circuit, and likewise respectively with other subscripted letters. The format used is the same as for notations described above, though without the connotation of VCC meaning the voltage from a collector pin to collector pin; the repetition avoids confusion as such an expression would not exist.
IEEE 255-1963 defined only the original set of letters; others have found their way into use over time, such as S and D for the Source and Drain of a FET, respectively.
References
Notation
Electronic engineering | Double subscript notation | [
"Mathematics",
"Technology",
"Engineering"
] | 476 | [
"Computer engineering",
"Symbols",
"Electronic engineering",
"Notation",
"Electrical engineering"
] |
4,253,950 | https://en.wikipedia.org/wiki/List%20of%20elements%20by%20stability%20of%20isotopes | This is a list of chemical elements by the stability of their isotopes. Of the first 82 elements in the periodic table, 80 have isotopes considered to be stable. Overall, there are 251 known stable isotopes in total.
Background
Atomic nuclei consist of protons and neutrons, which attract each other through the nuclear force, while protons repel each other via the electric force due to their positive charge. These two forces compete, leading to some combinations of neutrons and protons being more stable than others. Neutrons stabilize the nucleus, because they attract protons, which helps offset the electrical repulsion between protons. As a result, as the number of protons increases, an increasing ratio of neutrons to protons is needed to form a stable nucleus; if too many or too few neutrons are present with regard to the optimum ratio, the nucleus becomes unstable and subject to certain types of nuclear decay. Unstable isotopes decay through various radioactive decay pathways, most commonly alpha decay, beta decay, or electron capture. Many rare types of decay, such as spontaneous fission or cluster decay, are known. (See Radioactive decay for details.)
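As a rough quantitative illustration of this competition between the forces, the following Python sketch evaluates the semi-empirical (Weizsäcker) mass formula for several fixed mass numbers and scans the proton number to find the most bound combination. The coefficient values (in MeV) are approximate textbook figures, and the model is only a smooth approximation that ignores shell effects, so the output is indicative rather than exact.

# Rough illustration: semi-empirical mass formula with approximate textbook coefficients (MeV).
a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(Z, A):
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = a_p / A**0.5        # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A**0.5       # odd-odd nuclei are penalized
    else:
        pairing = 0.0
    return (a_v * A - a_s * A**(2/3) - a_c * Z * (Z - 1) / A**(1/3)
            - a_a * (N - Z)**2 / A + pairing)

for A in (16, 56, 120, 208):
    best_Z = max(range(1, A), key=lambda Z: binding_energy(Z, A))
    print(f"A={A}: most bound at Z={best_Z}, N={A-best_Z}, N/Z={(A-best_Z)/best_Z:.2f}")

The printed neutron-to-proton ratio rises from about 1 for light nuclei toward about 1.5 near lead, which is the trend described above.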
Of the first 82 elements in the periodic table, 80 have isotopes considered to be stable. The 83rd element, bismuth, was traditionally regarded as having the heaviest stable isotope, bismuth-209, but in 2003 researchers in Orsay, France, measured the half-life of bismuth-209 to be about 1.9 × 10^19 years. Technetium and promethium (atomic numbers 43 and 61, respectively) and all the elements with an atomic number over 82 only have isotopes that are known to decompose through radioactive decay. No undiscovered elements are expected to be stable; therefore, lead is considered the heaviest stable element. However, it is possible that some isotopes that are now considered stable will be revealed to decay with extremely long half-lives (as with bismuth-209). This list depicts what is agreed upon by the consensus of the scientific community as of 2023.
For each of the 80 stable elements, the number of the stable isotopes is given. Only 90 isotopes are expected to be perfectly stable, and an additional 161 are energetically unstable, but have never been observed to decay. Thus, 251 isotopes (nuclides) are stable by definition (including tantalum-180m, for which no decay has yet been observed). Those that may in the future be found to be radioactive are expected to have half-lives longer than 10^22 years (for example, xenon-134).
In April 2019 it was announced that the half-life of xenon-124 had been measured to 1.8 × 10^22 years. This is the longest half-life directly measured for any unstable isotope; only the half-life of tellurium-128 is longer.
Of the chemical elements, only 1 element (tin) has 10 such stable isotopes, 5 have 7 stable isotopes, 7 have 6 stable isotopes, 11 have 5 stable isotopes, 9 have 4 stable isotopes, 5 have 3 stable isotopes, 16 have 2 stable isotopes, and 26 have 1 stable isotope.
Additionally, about 31 nuclides of the naturally occurring elements have unstable isotopes with a half-life larger than the age of the Solar System (~10^9 years or more). An additional four nuclides have half-lives longer than 100 million years, which is far less than the age of the Solar System, but long enough for some of them to have survived. These 35 radioactive naturally occurring nuclides comprise the radioactive primordial nuclides. The total number of primordial nuclides is then 251 (the stable nuclides) plus the 35 radioactive primordial nuclides, for a total of 286 primordial nuclides. This number is subject to change if new shorter-lived primordials are identified on Earth.
One of the primordial nuclides is tantalum-180m, which is predicted to have a half-life in excess of 10^15 years, but has never been observed to decay. The even-longer half-life of 2.2 × 10^24 years of tellurium-128 was measured by a unique method of detecting its radiogenic daughter xenon-128 and is the longest known experimentally measured half-life. Another notable example is the only naturally occurring isotope of bismuth, bismuth-209, which has been predicted to be unstable with a very long half-life, but has been observed to decay. Because of their long half-lives, such isotopes are still found on Earth in various quantities, and together with the stable isotopes they are called primordial isotopes. All the primordial isotopes are given in order of their decreasing abundance on Earth. For a list of primordial nuclides in order of half-life, see List of nuclides.
118 chemical elements are known to exist. All elements up to element 94 are found in nature, and the remainder of the discovered elements are artificially produced, with isotopes all known to be highly radioactive with relatively short half-lives (see below). The elements in this list are ordered according to the lifetime of their most stable isotope. Of these, three elements (bismuth, thorium, and uranium) are primordial because they have half-lives long enough to still be found on the Earth, while all the others are produced either by radioactive decay or are synthesized in laboratories and nuclear reactors. Only 13 of the 38 known-but-unstable elements have isotopes with a half-life of at least 100 years. Every known isotope of the remaining 25 elements is highly radioactive; these are used in academic research and sometimes in industry and medicine. Some of the heavier elements in the periodic table may be revealed to have yet-undiscovered isotopes with longer lifetimes than those listed here.
About 338 nuclides are found naturally on Earth. These comprise 251 stable isotopes, and with the addition of the 35 long-lived radioisotopes with half-lives longer than 100 million years, a total of 286 primordial nuclides, as noted above. The nuclides found naturally comprise not only the 286 primordials, but also include about 52 more short-lived isotopes (defined by a half-life less than 100 million years, too short to have survived from the formation of the Earth) that are daughters of primordial isotopes (such as radium from uranium); or else are made by energetic natural processes, such as carbon-14 made from atmospheric nitrogen by bombardment from cosmic rays.
Elements by number of primordial isotopes
An even number of protons or neutrons is more stable (higher binding energy) because of pairing effects, so even–even nuclides are much more stable than odd–odd. One effect is that there are few stable odd–odd nuclides: in fact only five are stable, with another four having half-lives longer than a billion years.
Another effect is to prevent beta decay of many even–even nuclides into another even–even nuclide of the same mass number but lower energy, because decay proceeding one step at a time would have to pass through an odd–odd nuclide of higher energy. (Double beta decay directly from even–even to even–even, skipping over an odd-odd nuclide, is only occasionally possible, and is a process so strongly hindered that it has a half-life greater than a billion times the age of the universe.) This makes for a larger number of stable even–even nuclides, up to three for some mass numbers, and up to seven for some atomic (proton) numbers and at least four for all stable even-Z elements beyond iron (except strontium and lead).
Since a nucleus with an odd number of protons is relatively less stable, odd-numbered elements tend to have fewer stable isotopes. Of the 26 "monoisotopic" elements that have only a single stable isotope, all but one have an odd atomic number—the single exception being beryllium. In addition, no odd-numbered element has more than two stable isotopes, while every even-numbered element with stable isotopes, except for helium, beryllium, and carbon, has at least three. Only a single odd-numbered element, potassium, has three primordial isotopes; none have more than three.
Tables
The following tables give the elements with primordial nuclides, which means that the element may still be identified on Earth from natural sources, having been present since the Earth was formed out of the solar nebula. Thus, none are shorter-lived daughters of longer-lived parental primordials. Two nuclides which have half-lives long enough to be primordial, but have not yet been conclusively observed as such (244Pu and 146Sm), have been excluded.
The tables of elements are sorted in order of decreasing number of nuclides associated with each element. (For a list sorted entirely in terms of half-lives of nuclides, with mixing of elements, see List of nuclides.) Stable and unstable (marked decays) nuclides are given, with symbols for unstable (radioactive) nuclides in italics. Note that the sorting does not quite give the elements purely in order of stable nuclides, since some elements have a larger number of long-lived unstable nuclides, which place them ahead of elements with a larger number of stable nuclides. By convention, nuclides are counted as "stable" if they have never been observed to decay by experiment or from observation of decay products (extremely long-lived nuclides unstable only in theory, such as tantalum-180m, are counted as stable).
The first table is for even-atomic numbered elements, which tend to have far more primordial nuclides, due to the stability conferred by proton-proton pairing. A second separate table is given for odd-atomic numbered elements, which tend to have far fewer stable and long-lived (primordial) unstable nuclides.
Elements with no primordial isotopes
See also
Island of stability
List of nuclides
List of radioactive nuclides by half-life
Primordial nuclide
Stable nuclide
Stable isotope ratio
Table of nuclides
Footnotes
References
Stability of isotopes | List of elements by stability of isotopes | [
"Chemistry"
] | 2,157 | [
"Lists of chemical elements"
] |
4,253,959 | https://en.wikipedia.org/wiki/Chinese%20Academy%20of%20Engineering | The Chinese Academy of Engineering (CAE, ) is the national academy of the People's Republic of China for engineering. It was established in 1994 and is an institution of the State Council of China. The CAE and the Chinese Academy of Sciences are often referred to together as the "Two Academies". Its current president is Li Xiaohong.
Since the establishment of CAE, entrusted by the relevant ministries and commissions, the academy has offered consultancy to the State on major programs, planning, guidelines, and policies. At the request of various ministries of the central government as well as local governments, the academy has organized its members to carry out surveys at the forefront of engineering science and technology and to put forward strategic opinions and proposals. These entrusted projects have played an important role in maximizing the participation of the members in the macro decision-making of the State. In the meantime, the members, drawing on the experience and perspectives they have accumulated over the long term and on international trends in the development of engineering science and technology, have regularly and actively put forward their opinions and suggestions.
List of presidents
Zhu Guangya (1994–1998)
Song Jian (1998–2002)
Xu Kuangdi (2002–2010)
Zhou Ji (2010–2018)
Li Xiaohong (2018–present)
Structure
The CAE is composed of elected members with the highest honor in the community of engineering and technological sciences of the nation. The General Assembly of the CAE is the highest decision-making body of the academy and is held during the first week of June bi-annually.
Membership
Membership of the Chinese Academy of Engineering is the highest academic title in engineering science and technology in China. It is a lifelong honor, and new members must be elected by existing members.
The academy consists of members, senior members and foreign members, who are distinguished and recognized for their respective field of engineering.
As of January 2020, the academy has 920 Chinese members, in addition to 93 foreign members. Deng Zhonghan was elected to the Chinese Academy of Engineering in 2009 at the age of 41, making him the youngest academician in the history of the CAE. The composition of its members by division is as follows:
Division of Mechanical and Vehicle Engineering: 130 members
Division of Information and Electronic Engineering: 131 members
Division of Chemical, Metallurgical and Materials Engineering: 115 members
Division of Energy and Mining Engineering: 125 members
Division of Civil and Hydraulic Engineering and Architecture: 110 members
Division of Light Industry and Environmental Engineering: 61 members
Division of Agriculture: 84 members
Division of Medicine and Health: 125 members
Division of Engineering Management: 39 members
Criteria and Qualifications
Senior engineers, professors and other scholars or specialists who hold Chinese citizenship (including those who reside in Taiwan, the Hong Kong Special Administrative Region, the Macao Special Administrative Region and overseas) and who have made significant and creative achievements and contributions in the fields of engineering and technological sciences are qualified for membership of the academy.
Elections of members
The election of new members (academicians) is conducted biennially. The total number of members to be elected in each election is decided by the governing body of the academy. Examination and election of the candidates are done in every Academy Division and the voting is anonymous. The results of the voting are then examined and validated by the governing board.
Publications
Engineering Sciences (journal)
ISSN Print 1009-1724
Engineering (journal)
ISSN Print 2095-8099
ISSN Online 2096-0026
Collaborations
The Chinese Academy of Engineering has collaborated with other major academies (in policy development, engineering research projects, etc.), such as those from the UK and USA:
Royal Academy of Engineering
National Academy of Engineering
See also
Chinese Academy of Sciences
Scientific publishing in China
References
External links
National academies of engineering
Research institutes in China
Science and technology in the People's Republic of China
1994 establishments in China | Chinese Academy of Engineering | [
"Engineering"
] | 764 | [
"National academies of engineering"
] |
4,254,315 | https://en.wikipedia.org/wiki/Atkinson%20friction%20factor | Atkinson friction factor is a measure of the resistance to airflow of a duct. It is widely used in the mine ventilation industry but is rarely referred to outside of it.
Atkinson friction factor is represented by the symbol k and has the same units as air density (kilograms per cubic metre in SI units, lbf·min^2/ft^4 in Imperial units). It is related to the more widespread Fanning friction factor f by
k = ρf / 2
in which ρ is the density of air in the shaft or roadway under consideration and f is the Fanning friction factor (dimensionless). It is related to the Darcy friction factor λ by
k = ρλ / 8
in which λ is the Darcy friction factor (dimensionless).
It was introduced by John J Atkinson in an early mathematical treatment of mine ventilation (1862) and has been known under his name ever since.
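A minimal Python sketch of how these relations might be used in practice is given below. The airway dimensions, air velocity and Darcy friction factor are made-up illustrative values, and the frictional pressure-drop form p = k (per · L / A) u² together with the Atkinson resistance R = k per L / A³ are the usual mine-ventilation expressions, stated here as assumptions rather than taken from this article.

# Illustrative values only: a rectangular mine airway.
rho = 1.2          # air density, kg/m^3
f_darcy = 0.036    # Darcy friction factor (assumed value for a rough airway)

k = rho * f_darcy / 8.0          # Atkinson friction factor, kg/m^3 (k = rho*lambda/8)

width, height, length = 4.0, 3.0, 500.0    # m
area = width * height                      # m^2
perimeter = 2 * (width + height)           # m
velocity = 3.0                             # m/s, mean air velocity

# Usual Atkinson form for the frictional pressure drop: p = k * (per * L / A) * u^2
pressure_drop = k * perimeter * length / area * velocity**2   # Pa

# Atkinson resistance R = k * per * L / A^3, so that p = R * Q^2 with Q = A * u
resistance = k * perimeter * length / area**3
print(f"k = {k:.4f} kg/m^3, pressure drop = {pressure_drop:.1f} Pa, R = {resistance:.4f} N s^2/m^8")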
See also
Atkinson resistance
References
NCB Mining Dept., Ventilation in coal mines: a handbook for colliery ventilation officers, National Coal Board 1979.
Further reading
1999 paper giving the derivation of
Atkinson, J J, Gases met with in Coal Mines, and the general principles of Ventilation Transactions of the Manchester Geological Society, Vol. III, p.218, 1862
Fluid dynamics
Mine ventilation | Atkinson friction factor | [
"Chemistry",
"Engineering"
] | 235 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
4,254,363 | https://en.wikipedia.org/wiki/Size%20consistency%20and%20size%20extensivity | In quantum chemistry, size consistency and size extensivity are concepts relating to how the behaviour of quantum-chemistry calculations changes with the system size. Size consistency (or strict separability) is a property that guarantees the consistency of the energy behaviour when interaction between the involved molecular subsystems is nullified (for example, by distance). Size extensivity, introduced by Bartlett, is a more mathematically formal characteristic which refers to the correct (linear) scaling of a method with the number of electrons.
Let A and B be two non-interacting systems. If a given theory for the evaluation of the energy is size-consistent, then the energy of the supersystem A + B, separated by a sufficiently large distance such that there is essentially no shared electron density, is equal to the sum of the energy of A plus the energy of B taken by themselves: E(A + B) = E(A) + E(B). This property of size consistency is of particular importance to obtain correctly behaving dissociation curves. Others have more recently argued that the entire potential energy surface should be well-defined.
Size consistency and size extensivity are sometimes used interchangeably in the literature. However, there are very important distinctions to be made between them. Hartree–Fock (HF), coupled cluster, many-body perturbation theory (to any order), and full configuration interaction (FCI) are size-extensive but not always size-consistent. For example, the restricted Hartree–Fock model is not able to correctly describe the dissociation curves of H2, and therefore all post-HF methods that employ HF as a starting point will fail in that matter (so-called single-reference methods). Sometimes numerical errors can cause a method that is formally size-consistent to behave in a non-size-consistent manner.
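A common way to probe this numerically is to compare the energy of two far-separated fragments with twice the energy of a single fragment. The sketch below does this for helium at the RHF and CISD levels; it assumes the PySCF package is available and uses what is, to the best of the author's knowledge, its conventional interface (gto.M, scf.RHF, ci.CISD), so the exact calls should be checked against the installed version. CISD, unlike coupled cluster or full CI, is expected to show a small but nonzero size-consistency error here, while RHF on two closed-shell fragments does not.

# Sketch of a size-consistency test (assumes PySCF is installed; verify the API against your version).
from pyscf import gto, scf, ci

def rhf_and_cisd(atom_spec):
    mol = gto.M(atom=atom_spec, basis="cc-pvdz", verbose=0)
    mf = scf.RHF(mol).run()
    myci = ci.CISD(mf).run()
    return mf.e_tot, myci.e_tot

# One helium atom, and two helium atoms separated by 100 Angstrom (essentially non-interacting).
e_hf_1, e_ci_1 = rhf_and_cisd("He 0 0 0")
e_hf_2, e_ci_2 = rhf_and_cisd("He 0 0 0; He 0 0 100.0")

print("RHF  error: %.6e Hartree" % (e_hf_2 - 2 * e_hf_1))   # ~0: RHF is size-consistent for this case
print("CISD error: %.6e Hartree" % (e_ci_2 - 2 * e_ci_1))   # nonzero: CISD is not size-consistent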
Core extensivity is yet another related property, which extends the requirement to the proper treatment of excited states.
References
Quantum chemistry | Size consistency and size extensivity | [
"Physics",
"Chemistry"
] | 395 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
4,254,483 | https://en.wikipedia.org/wiki/FO4 | In digital electronics, Fan-out of 4 is a measure of time used in digital CMOS technologies: the gate delay of a component with a fan-out of 4.
Fan out = Cload / Cin, where
Cload = total MOS gate capacitance driven by the logic gate under consideration
Cin = the MOS gate capacitance of the logic gate under consideration
As a delay metric, one FO4 is the delay of an inverter, driven by an inverter 4x smaller than itself, and driving an inverter 4x larger than itself. Both conditions are necessary since input signal rise/fall time affects the delay as well as output loading.
FO4 is generally used as a delay metric because such a load is generally seen in case of tapered buffers driving large loads, and approximately in any logic gate of a logic path sized for minimum delay. Also, for most technologies the optimum fanout for such buffers generally varies from 2.7 to 5.3.
A fan out of 4 is the answer to the canonical problem stated as follows:
Given a fixed size inverter, small in comparison to a fixed large load, minimize the delay in driving the large load. After some math, it can be shown that the minimum delay is achieved when the load is driven by a chain of N inverters, each successive inverter ~4x larger than the previous; N ~ log4(Cload/Cin) .
In the absence of parasitic capacitances (drain diffusion capacitance and wire capacitance), the result is "a fan out of e" (now N ~ ln(Cload/Cin)).
If the load itself is not large, then using a fan out of 4 scaling in successive logic stages does not make sense. In these cases, minimum sized transistors may be faster.
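As a worked example of the sizing rule described above, the Python sketch below picks the number of buffer stages for a given Cload/Cin ratio and compares the resulting per-stage fan-out and total delay. The delay model, N · (ratio + p_inv) in units of the technology τ with an assumed inverter parasitic p_inv ≈ 1 τ, is a simplified logical-effort-style estimate (which makes one FO4 stage cost 4 + 1 = 5 τ, consistent with the 5·τ = FO4 rule mentioned below), not a process-specific figure.

# Simplified logical-effort-style sizing sketch (delays in units of the technology tau).
# Assumptions: inverter logical effort g = 1, parasitic delay p_inv ~= 1 tau,
# so one FO4 stage costs 4 + 1 = 5 tau.
P_INV = 1.0  # assumed parasitic delay of an inverter, in tau

def chain_delay(F, N):
    """Total delay of an N-stage inverter chain driving an electrical effort F = Cload/Cin."""
    h = F ** (1.0 / N)          # per-stage fan-out (stage ratio)
    return N * (h + P_INV), h

F = 256.0                        # example Cload / Cin of the first inverter
best = min(range(1, 12), key=lambda N: chain_delay(F, N)[0])
for N in range(1, 8):
    d, h = chain_delay(F, N)
    mark = "  <-- minimum" if N == best else ""
    print(f"N={N}: stage ratio {h:5.2f}, total delay {d:6.2f} tau ({d/5:.2f} FO4){mark}")

For F = 256 the minimum falls at N = 4 stages with a stage ratio of exactly 4, i.e. a chain of FO4 stages, which is the behaviour the text describes.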
Because scaled technologies are inherently faster (in absolute terms), circuit performance can be more fairly compared using the fan out of 4 as a metric. For example, given two 64-bit adders, one implemented in a 0.5 μm technology and the other in 90 nm technology, it would be unfair to say the 90 nm adder is better from a circuits and architecture standpoint just because it has less latency. The 90 nm adder might be faster only due to its inherently faster devices. To compare the adder architecture and circuit design, it is more fair to normalize each adder's latency to the delay of one FO4 inverter.
The FO4 time for a technology is five times its RC time constant τ; therefore 5·τ = FO4.
Some examples of high-frequency CPUs with long pipelines and low stage delays: the IBM Power6 has a design with a cycle delay of 13 FO4; the clock period of Intel's Pentium 4 at 3.4 GHz is estimated as 16.3 FO4.
See also
Logical effort
Fan-in
References
External links
Logical Effort Revisited
Revisiting the FO4 Metric // RWT, Aug 15, 2002
David Harris, Slides on Logical Effort – with a succinct example of design using FO4 inverters (p. 19).
MS Hrishikesh, The Optimal Logic Depth Per Pipeline Stage is 6 to 8 FO4 Inverter Delays // ACM SIGARCH Computer Architecture News. Vol. 30. No. 2. IEEE Computer Society, 2002
Electronic design | FO4 | [
"Engineering"
] | 725 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
4,254,743 | https://en.wikipedia.org/wiki/Chorography | Chorography (from χῶρος khōros, "place" and γράφειν graphein, "to write") is the art of describing or mapping a region or district, and by extension such a description or map. This term derives from the writings of the ancient geographer Pomponius Mela and Ptolemy, where it meant the geographical description of regions. However, its resonances of meaning have varied at different times. Richard Helgerson states that "chorography defines itself by opposition to chronicle. It is the genre devoted to place, and chronicle is the genre devoted to time". Darrell Rohl prefers a broad definition of "the representation of space or place".
Ptolemy's definition
In his text of the Geographia (2nd century CE), Ptolemy defined geography as the study of the entire world, but chorography as the study of its smaller parts—provinces, regions, cities, or ports. Its goal was "an impression of a part, as when one makes an image of just an ear or an eye"; and it dealt with "the qualities rather than the quantities of the things that it sets down". Ptolemy implied that it was a graphic technique, comprising the making of views (not simply maps), since he claimed that it required the skills of a draftsman or landscape artist, rather than the more technical skills of recording "proportional placements". Ptolemy's most recent English translators, however, render the term as "regional cartography".
Renaissance revival
Ptolemy's text was rediscovered in the west at the beginning of the fifteenth century, and the term "chorography" was revived by humanist scholars. John Dee in 1570 regarded the practice as "an underling, and a twig of Geographie", by which the "plat" [plan or drawing] of a particular place would be exhibited to the eye.
The term also came to be used, however, for written descriptions of regions. These regions were extensively visited by the writer, who then combined local topographical description, summaries of the historical sources, and local knowledge and stories, into a text. The most influential example (at least in Britain) was probably William Camden's Britannia (first edition 1586), which described itself on its title page as a Chorographica descriptio. William Harrison in 1587 similarly described his own "Description of Britaine" as an exercise in chorography, distinguishing it from the historical/chronological text of Holinshed's Chronicles (to which the "Description" formed an introductory section). Peter Heylin in 1652 defined chorography as "the exact description of some Kingdom, Countrey, or particular Province of the same", and gave as examples Pausanias's Description of Greece (2nd century AD); Camden's Britannia (1586); Lodovico Guicciardini's Descrittione di tutti i Paesi Bassi (1567) (on the Low Countries); and Leandro Alberti's Descrizione d'Italia (1550).
Camden's Britannia was predominantly concerned with the history and antiquities of Britain, and, probably as a result, the term chorography in English came to be particularly associated with antiquarian texts. William Lambarde, John Stow, John Hooker, Michael Drayton, Tristram Risdon, John Aubrey and many others used it in this way, arising from a gentlemanly topophilia and a sense of service to one's county or city, until it was eventually often applied to the genre of county history. A late example was William Grey's Chorographia (1649), a survey of the antiquities of the city of Newcastle upon Tyne. Even before Camden's work appeared, Andrew Melville in 1574 had referred to chorography and chronology as the "twa lights" [two lights] of history.
However, the term also continued to be used for maps and map-making, particularly of sub-national or county areas. William Camden praised the county mapmakers Christopher Saxton and John Norden as "most skilfull (sic) Chorographers"; and Robert Plot in 1677 and Christopher Packe in 1743 both referred to their county maps as chorographies.
By the beginning of the eighteenth century the term had largely fallen out of use in all these contexts, being superseded for most purposes by either "topography" or "cartography". Samuel Johnson in his Dictionary (1755) made a distinction between geography, chorography and topography, arguing that geography dealt with large areas, topography with small areas, but chorography with intermediary areas, being "less in its object than geography, and greater than topography". In practice, however, the term is only rarely found in English by this date.
Modern usages
In more technical geographical literature, the term had been abandoned as city views and city maps became more and more sophisticated and demanded a set of skills that required not only skilled draftsmanship but also some knowledge of scientific surveying. However, its use was revived for a second time in the late nineteenth century by the geographer Ferdinand von Richthofen. He regarded chorography as a specialization within geography, comprising the description through field observation of the particular traits of a given area.
The term is also now widely used by historians and literary scholars to refer to the early modern genre of topographical and antiquarian literature.
See also
Local history
Antiquarianism
Cartography
Khôra
Chorology
English county histories
Regional geography
References
Bibliography
Area studies
Cartography
Fields of history
History of geography
Humanities
Regional geography
Surveying
Topography techniques | Chorography | [
"Engineering"
] | 1,172 | [
"Surveying",
"Civil engineering"
] |
4,254,806 | https://en.wikipedia.org/wiki/PfSense | pfSense is a firewall/router computer software distribution based on FreeBSD. The open source pfSense Community Edition (CE) and pfSense Plus is installed on a physical computer or a virtual machine to make a dedicated firewall/router for a network. It can be configured and upgraded through a web-based interface, and requires no knowledge of the underlying FreeBSD system to manage.
Overview
The pfSense project began in 2004 as a fork of the m0n0wall project by Chris Buechler and Scott Ullrich. Its first release was in October 2006. The name derives from the fact that the software uses the packet-filtering tool, PF.
Notable functions of pfSense include traffic shaping, VPNs using IPsec or PPTP, captive portal, stateful firewall, network address translation, 802.1q support for VLANs, and dynamic DNS. pfSense can be installed on hardware with an x86-64 processor architecture. It can also be installed on embedded hardware using Compact Flash or SD cards, or as a virtual machine.
OPNsense
In January 2015, the OPNsense project was started by forking the version of pfSense at that time.
In November 2017, a World Intellectual Property Organization panel found that Netgate, the copyright holder of pfSense, had utilized OPNsense's trademarks in bad faith to discredit OPNsense, and obligated Netgate to transfer ownership of a domain name to Deciso.
WireGuard protocol support
In February 2021, pfSense CE 2.5.0 and pfSense Plus 21.02 added support for a kernel WireGuard implementation. Support for WireGuard was temporarily removed in March 2021 after implementation issues were discovered by WireGuard founder Jason Donenfeld. The July 2021 release of pfSense CE 2.5.2 re-included WireGuard.
See also
Comparison of firewalls
List of router and firewall distributions
References
Further reading
Mastering pfSense, Second Edition Birmingham, UK: Packt Publishing, 2018. . By David Zientra.
Security: Manage Network Security With pfSense Firewall [Video] Birmingham, UK: Packt, 2018. . By Manuj Aggarwal.
External links
2004 software
BSD software
Firewall software
Free routing software
FreeBSD
Gateway/routing/firewall distribution
Operating system distributions bootable from read-only media
Products introduced in 2004
Routers (computing)
Wireless access points | PfSense | [
"Engineering"
] | 511 | [
"Computer networks engineering",
"Network operating systems"
] |
4,255,174 | https://en.wikipedia.org/wiki/Oval%20%28projective%20plane%29 | In projective geometry an oval is a point set in a plane that is defined by incidence properties. The standard examples are the nondegenerate conics. However, a conic is only defined in a pappian plane, whereas an oval may exist in any type of projective plane. In the literature, there are many criteria which imply that an oval is a conic, but there are many examples, both infinite and finite, of ovals in pappian planes which are not conics.
As mentioned, in projective geometry an oval is defined by incidence properties, but in other areas, ovals may be defined to satisfy other criteria, for instance, in differential geometry by differentiability conditions in the real plane.
The higher dimensional analog of an oval is an ovoid in a projective space.
A generalization of the oval concept is an abstract oval, which is a structure that is not necessarily embedded in a projective plane. Indeed, there exist abstract ovals which can not lie in any projective plane.
Definition of an oval
In a projective plane a set Ω of points is called an oval, if:
Any line l meets Ω in at most two points, and
For any point P ∈ Ω there exists exactly one tangent line t through P, i.e., t ∩ Ω = {P}.
When |l ∩ Ω| = 0 the line l is an exterior line (or passant), when |l ∩ Ω| = 1 it is a tangent line, and when |l ∩ Ω| = 2 it is a secant line.
For finite planes (i.e. the set of points is finite) we have a more convenient characterization:
For a finite projective plane of order n (i.e. any line contains n + 1 points) a set Ω of points is an oval if and only if |Ω| = n + 1 and no three points are collinear (on a common line).
A set of points in an affine plane satisfying the above definition is called an affine oval.
An affine oval is always a projective oval in the projective closure (adding a line at infinity) of the underlying affine plane.
An oval can also be considered as a special quadratic set.
Examples
Conic sections
In any pappian projective plane there exist nondegenerate projective conic sections
and any nondegenerate projective conic section is an oval. This statement can be verified by a straightforward calculation for any of the conics (such as the parabola or hyperbola).
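For a finite field this verification can even be done by brute force. The short Python sketch below builds the conic {(t, t^2, 1) : t in GF(p)} together with its point at infinity (0, 1, 0) for a small prime p and checks the finite oval criterion directly (p + 1 points, no three collinear). It only handles prime fields, which keeps the arithmetic to simple modular operations; the choice p = 11 is arbitrary.

# Brute-force check that a conic is an oval in PG(2, p) for a small prime p.
from itertools import combinations

p = 11  # a small odd prime, so GF(p) is just arithmetic mod p

# The conic y*z = x^2: affine points (t, t^2, 1) plus its single point at infinity (0, 1, 0).
points = [(t, (t * t) % p, 1) for t in range(p)] + [(0, 1, 0)]

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c, computed mod p."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0])) % p

assert len(points) == p + 1
# Three points of PG(2, p) are collinear exactly when their coordinate vectors are linearly
# dependent, i.e. the determinant vanishes; an oval must contain no such triple.
assert all(det3(a, b, c) != 0 for a, b, c in combinations(points, 3))
print(f"{p + 1} points, no three collinear: the conic is an oval in PG(2, {p})")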
Non-degenerate conics are ovals with special properties:
Pascal's Theorem and its various degenerations are valid.
There are many projectivities which leave a conic invariant.
Ovals, which are not conics
in the real plane
If one glues one half of a circle and a half of an ellipse smoothly together, one gets a non-conic oval.
If one takes the inhomogeneous representation of a conic oval as a parabola plus a point at infinity and replaces the expression x^2 by x^4, one gets an oval which is not a conic.
If one takes the inhomogeneous representation of a conic oval as a hyperbola plus two points at infinity and replaces the expression 1/x by 1/x^3, one gets an oval which is not a conic.
The implicit curve x^4 + y^4 = 1 is a non conic oval.
in a finite plane of even order
In a finite pappian plane of even order a nondegenerate conic has a nucleus (a single point through which every tangent passes), which can be exchanged with any point of the conic to obtain an oval which is not a conic.
For the field with elements let
For and and coprime, the set is an oval, which is not a conic.
Further finite examples can be found here:
Criteria for an oval to be a conic
For an oval to be a conic the oval and/or the plane has to fulfill additional conditions. Here are some results:
An oval in an arbitrary projective plane, which fulfills the incidence condition of Pascal's theorem or the 5-point degeneration of it, is a nondegenerate conic.
If Ω is an oval in a pappian projective plane and the group of projectivities which leave Ω invariant is 3-transitive, i.e. for any 2 triples A1, A2, A3 and B1, B2, B3 of points there exists a projectivity π with π(Ai) = Bi, i = 1, 2, 3, then Ω is a conic. In the finite case 2-transitive is sufficient.
An oval Ω in a pappian projective plane of characteristic ≠ 2 is a conic if and only if for any point P of a tangent there is an involutory perspectivity (symmetry) with center P which leaves Ω invariant.
If Ω is an oval in a finite Desarguesian (pappian) projective plane PG(2, q) of odd order q, then Ω is a conic by Segre's theorem. This implies that, after a possible change of coordinates, every oval of PG(2, q) with q odd has the parametrization {(t, t^2, 1) : t ∈ GF(q)} ∪ {(0, 1, 0)}.
For topological ovals the following simple criterion holds:
5. Any closed oval of the complex projective plane is a conic.
Further results on ovals in finite planes
An oval in a finite projective plane of order q is a (q + 1)-arc, in other words, a set of q + 1 points, no three collinear. Ovals in the Desarguesian (pappian) projective plane PG(2, q) for q odd are just the nonsingular conics. However, ovals in PG(2, q) for q even have not yet been classified.
In an arbitrary finite projective plane of odd order q, no sets with more points than q + 1, no three of which are collinear, exist, as first pointed out by Bose in a 1947 paper on applications of this sort of mathematics to the statistical design of experiments. Furthermore, by
Qvist's theorem, through any point not on an oval there pass either zero or two tangent lines of that oval.
When q is even, the situation is completely different.
In this case, sets of q + 2 points, no three of which are collinear, may exist in a finite projective plane of order q and they are called hyperovals; these are maximal arcs of degree 2.
Given an oval there is a unique tangent through each point, and if q is even then Qvist's theorem shows that all these tangents are concurrent in a point outside the oval. Adding this point (called the nucleus of the oval or sometimes the knot) to the oval gives a hyperoval. Conversely, removing any one point from a hyperoval immediately gives an oval.
As all ovals in the even order case are contained in hyperovals, a description of the (known) hyperovals implicitly gives all (known) ovals. The ovals obtained by removing a point from a hyperoval are projectively equivalent if and only if the removed points are in the same orbit of the automorphism group of the hyperoval. There are only three small examples (in the Desarguesian planes) where the automorphism group of the hyperoval is transitive on its points so, in general, there are different types of ovals contained in a single hyperoval.
Desarguesian Case: PG(2, 2^h)
This is the most studied case and so the most is known about these hyperovals.
Every nonsingular conic in the projective plane, together with its nucleus, forms a hyperoval. These may be called hyperconics, but the more traditional term is regular hyperovals. For each of these sets, there is a system of coordinates such that the set is {(t, t^2, 1) : t ∈ GF(q)} ∪ {(0, 1, 0), (1, 0, 0)}.
However, many other types of hyperovals of PG(2, q) can be found if q > 8. Hyperovals of PG(2, q) for q even have only been classified for q < 64 to date.
In PG(2, 2^h), h > 0, a hyperoval contains at least four points no three of which are collinear.
Thus, by the Fundamental Theorem of Projective Geometry we can always assume that the points with projective coordinates (1,0,0), (0,1,0), (0,0,1) and (1,1,1) are contained in any hyperoval. The remaining points of the hyperoval (when h > 1) will have the form (t, f(t), 1) where t ranges through the values of the finite field GF(2^h) and f is a function on that field which represents a permutation and can be uniquely expressed as a polynomial of degree at most 2^h − 2, i.e. it is a permutation polynomial. Notice that f(0) = 0 and f(1) = 1 are forced by the assumption concerning the inclusion of the specified points. Other restrictions on f are forced by the no three points collinear condition. An f which produces a hyperoval in this way is called an o-polynomial. The following table lists all the known hyperovals (as of 2011) of
PG(2, 2^h) by giving the o-polynomial and any restrictions on the value of h that are necessary for the displayed function to be an o-polynomial. Note that all exponents are to be taken
mod (2^h − 1).
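The o-polynomial condition can likewise be checked by machine for small fields. The Python sketch below builds GF(32) using the reduction η^5 = η^2 + 1 quoted elsewhere in this article, forms the point set {(t, f(t), 1)} ∪ {(0,1,0), (1,0,0)} for the Segre monomial f(t) = t^6 (listed below as one of the hyperoval classes of PG(2, 32)), and confirms that no three of the 34 points are collinear, i.e. that the set is a hyperoval. Field elements are represented as 5-bit integers; the representation is an implementation choice, not part of the theory.

# Check a monomial o-polynomial over GF(2^5), with elements represented as integers 0..31.
from itertools import combinations

H = 5
MOD = 0b100101  # x^5 + x^2 + 1, i.e. the relation eta^5 = eta^2 + 1 used in this article

def gf_mul(a, b):
    """Multiplication in GF(2^5): carry-less multiply with reduction after each shift."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << H):
            a ^= MOD
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def det3(P, Q, R):
    """3x3 determinant over GF(2^5); in characteristic 2, subtraction is XOR."""
    (a, b, c), (d, e, f), (g, h, i) = P, Q, R
    return (gf_mul(a, gf_mul(e, i) ^ gf_mul(f, h))
          ^ gf_mul(b, gf_mul(d, i) ^ gf_mul(f, g))
          ^ gf_mul(c, gf_mul(d, h) ^ gf_mul(e, g)))

# Segre hyperoval in PG(2, 32): f(t) = t^6, plus the two extra points (0,1,0) and (1,0,0).
points = [(t, gf_pow(t, 6), 1) for t in range(32)] + [(0, 1, 0), (1, 0, 0)]

assert len(points) == 32 + 2
assert all(det3(P, Q, R) != 0 for P, Q, R in combinations(points, 3))
print("f(t) = t^6 defines a hyperoval of PG(2, 32): 34 points, no three collinear")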
Known Hyperovals in PG(2, 2^h)
a) The Subiaco o-polynomial is given by:
whenever ,
where tr is the absolute trace function of GF(2h). This
o-polynomial gives rise to a unique hyperoval if and to two
inequivalent hyperovals if .
b) To describe the Adelaide hyperovals, we will start in a slightly more general setting. Let F = GF(q) and K = GF(q^2). Let b be an element of K of norm 1, different from 1, i.e. b^(q+1) = 1, b ≠ 1. Consider the polynomial, for an integer m,
f(t) = (tr(b))^(−1) tr(b^m)(t + 1) + (tr(b))^(−1) tr((bt + b^q)^m)(t + tr(b)t^(1/2) + 1)^(1−m) + t^(1/2),
where tr(x) = tr_(K/F)(x) = x + x^q.
When q = 2^h, with h even and m = ±(q − 1)/3, the above f(t) is an o-polynomial for the Adelaide hyperoval.
c) The Penttila-O'Keefe o-polynomial is given by:
f(t) = t^4 + t^16 + t^28 + η^11(t^6 + t^10 + t^14 + t^18 + t^22 + t^26) + η^20(t^8 + t^20) + η^6(t^12 + t^24),
where η is a primitive root of GF(32) satisfying η^5 = η^2 + 1.
Hyperovals in PG(2, q), q even, q ≤ 64
As the hyperovals in the Desarguesian planes of orders 2, 4 and 8 are all hyperconics we shall only examine the planes of orders 16, 32 and 64.
PG(2,16)
In the details of a computer search for
complete arcs in small order planes carried out at the suggestion of B. Segre are given. In PG(2,16) they found a number of hyperovals which were not hyperconics. In 1975, M. Hall Jr. showed, also with considerable aid from a computer, that there were only two classes of projectively inequivalent hyperovals in this plane, the hyperconics and the hyperovals found by Lunelli and Sce. Out of the 2040 o-polynomials which give the Lunelli-Sce hyperoval, we display only one:
f(x) = x^12 + x^10 + η^11 x^8 + x^6 + η^2 x^4 + η^9 x^2,
where η is a primitive element of GF(16) satisfying η^4 = η + 1.
In his 1975 paper Hall described a number of collineations of the plane which stabilized the Lunelli-Sce hyperoval, but did not show that they generated the full automorphism group of this hyperoval. using properties of a related generalized quadrangle, showed that the automorphism group could be no larger than the group given by Hall. independently gave a constructive proof of this result and also showed that in Desarguesian planes, the Lunelli-Sce hyperoval is the unique irregular hyperoval (non-hyperconic) admitting a transitive automorphism group (and that the only hyperconics admitting such a group are those of orders 2 and 4).
reproved Hall's classification result without the use of a computer. Their argument consists of finding an upper bound on the number of o-polynomials defined over GF(16) and then, by examining the possible automorphism groups of hyperovals in this plane, showing that if a hyperoval other than the known ones existed in this plane then the upper bound would be exceeded.
provides a group-theoretic construction of the Lunelli-Sce hyperoval as the union of orbits of the group generated by the elations of PGU(3,4) considered as a subgroup of PGL(3,16). Also included in this paper is a discussion of some remarkable
properties concerning the intersections of Lunelli-Sce hyperovals and hyperconics. In it is shown that the Lunelli-Sce hyperoval is the first non-trivial member of theSubiaco family In it is shown to be the first non-trivial member of the Adelaide family.
PG(2,32)
Since h = 5 is odd, a number of the known families have a representative here, but due to the small
size of the plane there are some spurious equivalences, in fact, each of the Glynn type hyperovals is
projectively equivalent to a translation hyperoval, and the Payne hyperoval is projectively equivalent to the Subiaco hyperoval (this does not occur in larger planes). Specifically, there are three classes of (monomial type) hyperovals, the hyperconics (f(t) = t^2), proper translation hyperovals (f(t) = t^4) and the Segre hyperovals (f(t) = t^6). There are also classes corresponding to the Payne hyperovals and the Cherowitzo hyperovals. In the collineation
groups stabilizing each of these hyperovals have been determined. Note that in the original determination of the collineation group for the Payne hyperovals the case of q = 32 had to be treated separately and relied heavily on computer results. In an alternative version of the proof is given which does not
depend on computer computations.
In 1991, O'Keefe and Penttila discovered a new hyperoval in this plane by means of a detailed
investigation of the divisibility properties of the orders of automorphism groups of hypothetical
hyperovals. One of its o-polynomials is given by:
f(x) = x^4 + x^16 + x^28 + η^11(x^6 + x^10 + x^14 + x^18 + x^22 + x^26) + η^20(x^8 + x^20) + η^6(x^12 + x^24),
where η is a primitive root of GF(32) satisfying η^5 = η^2 + 1. The full automorphism group of this hyperoval has order 3.
cleverly structured an exhaustive computer search for all hyperovals in this plane. The result was that the above listing is complete, there are just six classes of hyperovals in PG(2,32).
PG(2,64)
By extending the ideas in to PG(2,64), were able to search for hyperovals whose automorphism group admitted a collineation of order 5. They found two and showed that no other
hyperoval exists in this plane that has such an automorphism. This settled affirmatively a long open question of B. Segre who wanted to know if there were any hyperovals in this plane besides the hyperconics. The hyperovals are:
f(x) = x^8 + x^12 + x^20 + x^22 + x^42 + x^52 + η^21(x^4 + x^10 + x^14 + x^16 + x^30 + x^38 + x^44 + x^48 + x^54 + x^56 + x^58 + x^60 + x^62) + η^42(x^2 + x^6 + x^26 + x^28 + x^32 + x^36 + x^40),
which has an automorphism group of order 15, and
f(x) = x^24 + x^30 + x^62 + η^21(x^4 + x^8 + x^10 + x^14 + x^16 + x^34 + x^38 + x^40 + x^44 + x^46 + x^52 + x^54 + x^58 + x^60) + η^42(x^6 + x^12 + x^18 + x^20 + x^26 + x^32 + x^36 + x^42 + x^48 + x^50),
which has an automorphism group of order 60, where η is a primitive element of GF(64) satisfying η^6 = η + 1. In it is shown that these are Subiaco hyperovals. By refining the computer search program, extended the search to hyperovals admitting an automorphism of order 3, and found the hyperoval:
f(x) = x^4 + x^8 + x^14 + x^34 + x^42 + x^48 + x^62 + η^21(x^6 + x^16 + x^26 + x^28 + x^30 + x^32 + x^40 + x^58) + η^42(x^10 + x^18 + x^24 + x^36 + x^44 + x^50 + x^52 + x^60),
which has an automorphism group of order 12 (η is a primitive element of GF(64) as above). This hyperoval is the first distinct Adelaide hyperoval.
Penttila and Royle have shown that any other hyperoval in this plane would have to have a trivial automorphism group. This would mean that there would be many projectively equivalent copies of such a hyperoval, but general searches to date have found none, giving credence to the conjecture that there are no others in this plane.
Abstract ovals
Following (Bue1966), an abstract oval, also called a B-oval, of order is a pair where is a set of elements, called points, and is a set of involutions acting on in a sharply quasi 2-transitive way, that is, for any two with for , there exists exactly one with and .
Any oval embedded in a projective plane of order might be endowed with a structure of an abstract oval of the same order. The converse is, in general, not true for ; indeed, for there are two abstract ovals which may not be embedded in a projective plane, see (Fa1984).
When is even, a similar construction yields abstract hyperovals, see (Po1997): an abstract hyperoval of order is a pair where is a set of elements and is a set of fixed-point free involutions acting on such that for any set of four distinct elements
there is exactly one with .
See also
Ovoid (projective geometry)
Notes
References
External links
Bill Cherowitzo's Hyperoval Page
Projective geometry
Incidence geometry | Oval (projective plane) | [
"Mathematics"
] | 4,018 | [
"Incidence geometry",
"Combinatorics"
] |
4,255,513 | https://en.wikipedia.org/wiki/Distributed%20constraint%20optimization | Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized.
Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are described on some variables with predefined domains, and have to be assigned to the same values by the different agents.
Problems defined with this framework can be solved by any of the algorithms that are designed for it.
The framework was used under different names in the 1980s. The first known usage with the current name is in 1990.
Definitions
DCOP
The main ingredients of a DCOP problem are agents and variables. Importantly, each variable is owned by an agent; this is what makes the problem distributed. Formally, a DCOP is a tuple ⟨A, V, D, f, α, η⟩, where:
A is the set of agents, {a1, ..., a|A|}.
V is the set of variables, {v1, ..., v|V|}.
D is the set of variable-domains, where each Dj ∈ D is a finite set containing the possible values of variable vj.
If Dj contains only two values (e.g. 0 or 1), then vj is called a binary variable.
f is the cost function. It is a function that maps every possible partial assignment to a cost. Usually, only a few values of f are non-zero, and it is represented as a list of the tuples that are assigned a non-zero value. Each such tuple is called a constraint. Each constraint in this set is a function assigning a real value to each possible assignment of the variables. Some special kinds of constraints are:
Unary constraints - constraints on a single variable, i.e., fj : Dj → R for some vj ∈ V.
Binary constraints - constraints on two variables, i.e., fj,k : Dj × Dk → R for some vj, vk ∈ V.
α is the ownership function. It is a function α : V → A mapping each variable to its associated agent. α(vj) = ai means that variable vj "belongs" to agent ai. This implies that it is agent ai's responsibility to assign the value of variable vj. Note that α is not necessarily an injection, i.e., one agent may own more than one variable. It is also not necessarily a surjection, i.e., some agents may own no variables.
η is the objective function. It is an operator that aggregates all of the individual costs for all possible variable assignments. This is usually accomplished through summation: η(f) = Σs f(s), summing the costs of all constraints s.
The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize η(f) for a given assignment of the variables.
Assignments
A value assignment is a pair (vj, dj) where dj is an element of the domain Dj.
A partial assignment is a set of value-assignments where each variable vj appears at most once. It is also called a context. This can be thought of as a function mapping variables in the DCOP to their current values: t : V → (D ∪ {∅}).
Note that a context is essentially a partial solution and need not contain values for every variable in the problem; therefore, t(vj) = ∅ implies that the agent α(vj) has not yet assigned a value to variable vj. Given this representation, the "domain" (that is, the set of input values) of the function f can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the t function) as an input to the f function.
A full assignment is an assignment in which each variable appears exactly once, that is, all variables are assigned. It is also called a solution to the DCOP.
An optimal solution is a full assignment in which the objective function is optimized (i.e., maximized or minimized, depending on the type of problem).
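To make the preceding definitions concrete, here is a minimal Python sketch (not taken from the cited literature; all class, agent and variable names are invented) of a DCOP instance, with the objective evaluated by summing constraint costs and an exhaustive search used only for illustration:

```python
from itertools import product

# A DCOP instance: agents, variables, domains, constraints, and ownership.
# Constraints are stored as (scope, cost_function) pairs, where scope is a
# tuple of variable names and cost_function maps a tuple of values to a cost.
class DCOP:
    def __init__(self, agents, variables, domains, constraints, owner):
        self.agents = agents            # e.g. ["a1", "a2"]
        self.variables = variables      # e.g. ["v1", "v2"]
        self.domains = domains          # dict: variable -> list of possible values
        self.constraints = constraints  # list of (scope, cost_function)
        self.owner = owner              # dict: variable -> agent (the alpha function)

    def cost(self, assignment):
        """Objective eta: sum the cost of every constraint under a full assignment."""
        return sum(f(tuple(assignment[v] for v in scope))
                   for scope, f in self.constraints)

    def optimal(self):
        """Brute-force search over all full assignments (exponential; illustration only)."""
        best = None
        for values in product(*(self.domains[v] for v in self.variables)):
            assignment = dict(zip(self.variables, values))
            c = self.cost(assignment)
            if best is None or c < best[1]:
                best = (assignment, c)
        return best

# Toy instance: two agents, each owning one binary variable; mismatching values cost 1.
dcop = DCOP(
    agents=["a1", "a2"],
    variables=["v1", "v2"],
    domains={"v1": [0, 1], "v2": [0, 1]},
    constraints=[(("v1", "v2"), lambda vals: 0 if vals[0] == vals[1] else 1)],
    owner={"v1": "a1", "v2": "a2"},
)
print(dcop.optimal())  # e.g. ({'v1': 0, 'v2': 0}, 0)
```

Real DCOP algorithms distribute this search among the agents instead of enumerating assignments centrally; the sketch only illustrates the data involved.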
Example problems
Various problems from different domains can be presented as DCOPs.
Distributed graph coloring
The graph coloring problem is as follows: given a graph G = ⟨N, E⟩ and a set of colors C, assign each vertex n ∈ N a color cn ∈ C such that the number of adjacent vertices with the same color is minimized.
As a DCOP, there is one agent per vertex that is assigned to decide the associated color. Each agent has a single variable whose associated domain is of cardinality |C| (there is one domain value for each possible color). For each vertex n ∈ N, there is a variable vn ∈ V with domain Dn = C. For each pair of adjacent vertices ⟨ni, nj⟩ ∈ E, there is a constraint of cost 1 if both of the associated variables are assigned the same color. The objective, then, is to minimize the total cost η(f), i.e. the number of monochromatic edges.
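A small, self-contained sketch of this encoding follows; the vertex names, colours and brute-force search are purely illustrative and assume the cost-1-per-monochromatic-edge constraints described above:

```python
from itertools import product

def graph_coloring_dcop(edges, colors):
    """Encode graph colouring as a DCOP: one agent/variable per vertex,
    domain = colors, and a cost-1 constraint per edge when both ends match."""
    vertices = sorted({v for e in edges for v in e})
    domains = {v: list(colors) for v in vertices}
    constraints = [((u, w), lambda vals: 1 if vals[0] == vals[1] else 0)
                   for u, w in edges]
    return vertices, domains, constraints

def total_cost(assignment, constraints):
    return sum(f((assignment[u], assignment[w])) for (u, w), f in constraints)

# Brute-force the optimal colouring of a triangle plus one pendant vertex.
vertices, domains, constraints = graph_coloring_dcop(
    edges=[("n1", "n2"), ("n2", "n3"), ("n1", "n3"), ("n3", "n4")],
    colors=["red", "green"])
best = min((dict(zip(vertices, vals))
            for vals in product(*(domains[v] for v in vertices))),
           key=lambda a: total_cost(a, constraints))
print(best, total_cost(best, constraints))  # two colours cannot avoid one clash in a triangle
```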
Distributed multiple knapsack problem
The distributed multiple-knapsack variant of the knapsack problem is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the amount of overflow is minimized. Let I be the set of items, K be the set of knapsacks, s be a function mapping items to their volumes, and c be a function mapping knapsacks to their capacities.
To encode this problem as a DCOP, for each item i ∈ I create one variable vi ∈ V with associated domain Di = K. Then, for every possible context, the total cost is the sum over all knapsacks of the overflow max(0, w(k) − c(k)), where w(k) represents the total volume assigned by the context to knapsack k.
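The overflow cost just described can be sketched as follows (illustrative names only; this is not drawn from a specific reference):

```python
def overflow_cost(context, volume, capacity):
    """Cost of a (possibly partial) assignment of items to knapsacks:
    sum over knapsacks of max(0, load - capacity)."""
    load = {k: 0 for k in capacity}               # total volume assigned to each knapsack
    for item, knapsack in context.items():        # context: item -> knapsack
        load[knapsack] += volume[item]
    return sum(max(0, load[k] - capacity[k]) for k in capacity)

# Two knapsacks, three items; assigning everything to k1 overflows it by 4.
volume = {"i1": 4, "i2": 5, "i3": 3}
capacity = {"k1": 8, "k2": 6}
print(overflow_cost({"i1": "k1", "i2": "k1", "i3": "k1"}, volume, capacity))  # 4
print(overflow_cost({"i1": "k1", "i2": "k2", "i3": "k1"}, volume, capacity))  # 0
```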
Distributed item allocation problem
The item allocation problem is as follows. There are several items that have to be divided among several agents. Each agent has a different valuation for the items. The goal is to optimize some global goal, such as maximizing the sum of utilities or minimizing the envy. The item allocation problem can be formulated as a DCOP as follows (a small illustrative sketch appears after the list).
Add a binary variable vij for each agent i and item j. The variable value is "1" if the agent gets the item, and "0" otherwise. The variable is owned by agent i.
To express the constraint that each item is given to at most one agent, add binary constraints for each two different variables related to the same item, with an infinite cost if the two variables are simultaneously "1", and a zero cost otherwise.
To express the constraint that all items must be allocated, add an n-ary constraint for each item (where n is the number of agents), with an infinite cost if no variable related to this item is "1".
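A minimal sketch of the binary-variable encoding just described, with the agent and item names invented for the example:

```python
INF = float("inf")

def allocation_cost(x, agents, items):
    """x[(agent, item)] is 1 if the agent receives the item, else 0.
    Returns infinity if an item is given to two agents or to nobody,
    otherwise 0 (valuation-based objective terms could be added on top)."""
    cost = 0
    for j in items:
        holders = sum(x[(i, j)] for i in agents)
        if holders > 1:        # binary constraints: at most one agent per item
            return INF
        if holders == 0:       # n-ary constraint: every item must be allocated
            return INF
    return cost

agents, items = ["a1", "a2"], ["g1", "g2"]
x = {("a1", "g1"): 1, ("a2", "g1"): 0, ("a1", "g2"): 0, ("a2", "g2"): 1}
print(allocation_cost(x, agents, items))  # 0: a feasible allocation
```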
Other applications
DCOP was applied to other problems, such as:
coordinating mobile sensors;
meeting and task scheduling.
Algorithms
DCOP algorithms can be classified in several ways:
Completeness - complete search algorithms finding the optimal solution, vs. local search algorithms finding a local optimum.
Search strategy - best-first search or depth-first branch-and-bound search;
Synchronization among agents - synchronous or asynchronous;
Communication among agents - point-to-point with neighbors in the constraint graph, or broadcast;
Communication topology - chain or tree.
ADOPT, for example, uses best-first search, asynchronous operation, point-to-point communication between neighboring agents in the constraint graph, and a constraint tree as its main communication topology.
Hybrids of these DCOP algorithms also exist. BnB-Adopt, for example, changes the search strategy of Adopt from best-first search to depth-first branch-and-bound search.
Asymmetric DCOP
An asymmetric DCOP is an extension of DCOP in which the cost of each constraint may be different for different agents. Some example applications are:
Event scheduling: agents who attend the same event might derive different values from it.
Smart grid: the increase in the price of electricity in loaded hours may affect different agents differently.
One way to represent an ADCOP is to represent each constraint as a function that returns, instead of a single cost, a vector of costs - one for each agent involved in the constraint.
The vector of costs is of length k if each of the k variables in the constraint belongs to a different agent; if two or more variables belong to the same agent, then the vector of costs is shorter - there is a single cost for each involved agent, not for each variable.
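As an illustration (the scenario and names are invented, not drawn from a particular ADCOP paper), an asymmetric binary constraint can be written as a function returning one cost per involved agent; the global objective then sums all per-agent costs:

```python
def meeting_constraint(slot_a, slot_b):
    """Asymmetric constraint between two agents choosing meeting slots.
    Returns a vector of costs: agent 1 dislikes morning slots, agent 2 dislikes
    mismatched slots, so the same assignment costs the two agents different amounts."""
    cost_agent1 = 2 if slot_a == "morning" else 0
    cost_agent2 = 3 if slot_a != slot_b else 0
    return (cost_agent1, cost_agent2)

costs = meeting_constraint("morning", "afternoon")
print(costs, sum(costs))  # (2, 3) 5 -> the global objective aggregates the per-agent costs
```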
Approaches to solving an ADCOP
A simple way of solving an ADCOP is to replace each constraint with a single constraint that equals the sum of the cost functions of all the agents involved in it. However, this solution requires the agents to reveal their cost functions. Often, this is not desired due to privacy considerations.
Another approach is called Private Events as Variables (PEAV). In this approach, each agent owns, in addition to its own variables, also "mirror variables" of all the variables owned by its neighbors in the constraint network. There are additional constraints (with a cost of infinity) that guarantee that the mirror variables equal the original variables. The disadvantage of this method is that the number of variables and constraints is much larger than in the original problem, which leads to a higher run-time.
A third approach is to adapt existing algorithms, developed for DCOPs, to the ADCOP framework. This has been done for both complete-search algorithms and local-search algorithms.
Comparison with strategic games
The structure of an ADCOP problem is similar to the game-theoretic concept of a simultaneous game. In both cases, there are agents who control variables (in game theory, the variables are the agents' possible actions or strategies). In both cases, each choice of variables by the different agents results in a different payoff to each agent. However, there is a fundamental difference:
In a simultaneous game, the agents are selfish - each of them wants to maximize his/her own utility (or minimize his/her own cost). Therefore, the best outcome that can be sought in such a setting is an equilibrium - a situation in which no agent can unilaterally increase his/her own gain.
In an ADCOP, the agents are considered cooperative: they act according to the protocol even if it decreases their own utility. Therefore, the goal is more challenging: we would like to maximize the sum of utilities (or minimize the sum of costs). A Nash equilibrium roughly corresponds to a local optimum of this problem, while we are looking for a global optimum.
Partial cooperation
There are some intermediate models in which the agents are partially-cooperative: they are willing to decrease their utility to help the global goal, but only if their own cost is not too high. An example of partially-cooperative agents are employees in a firm. On one hand, each employee wants to maximize their own utility; on the other hand, they also want to contribute to the success of the firm. Therefore, they are willing to help others or do some other time-consuming tasks that help the firm, as long as it is not too burdensome on them. Some models for partially-cooperative agents are:
Guaranteed personal benefit: the agents agree to act for the global good if their own utility is at least as high as in the non-cooperative setting (i.e., the final outcome must be a Pareto improvement of the original state).
Lambda-cooperation: there is a parameter λ ∈ [0, 1]. The agents agree to act for the global good if their own utility is at least as high as λ times their non-cooperative utility.
Solving such partial-cooperation ADCOPs requires adaptations of ADCOP algorithms.
See also
Constraint satisfaction problem
Distributed algorithm
Distributed algorithmic mechanism design
Notes and references
Books and surveys
A chapter in an edited book.
See Chapters 1 and 2; downloadable free online.
Mathematical optimization
Constraint programming | Distributed constraint optimization | [
"Mathematics"
] | 2,300 | [
"Mathematical optimization",
"Mathematical analysis"
] |
4,255,637 | https://en.wikipedia.org/wiki/Games%20People%20Play%20%28book%29 | Games People Play: The Psychology of Human Relationships is a 1964 book by psychiatrist Eric Berne. The book was a bestseller at the time of its publication, despite drawing academic criticism for some of the psychoanalytic theories it presented. It popularized Berne's model of transactional analysis among a wide audience, and has been considered one of the first pop psychology books.
Background
The author Eric Berne was a psychiatrist specializing in psychotherapy who began developing alternate theories of interpersonal relationship dynamics in the 1950s. He sought to explain recurring patterns of interpersonal conflicts that he observed, which eventually became the basis of transactional analysis. After being rejected by a local psychoanalytic institute, he focused on writing about his own theories. In 1961, he published Transactional Analysis in Psychotherapy. That book was followed by Games People Play, in 1964. Berne did not intend for Games People Play to explore all aspects of transactional analysis, viewing it instead as an introduction to some of the concepts and patterns he identified. He borrowed money from friends and used his own savings to publish the book.
Summary
In the first half of the book, Berne introduces his theory of transactional analysis as a way of interpreting social interactions. He proposes that individuals encompass three roles or ego states, known as the Parent, the Adult, and the Child, which they switch between. He postulates that while Adult to Adult interactions are largely healthy, dysfunctional interactions can arise when people take on mismatched roles such as Parent and Child or Child and Adult.
The second half of the book catalogues a series of "mind games" identified by Berne, in which people interact through a patterned and predictable series of "transactions" based on these mismatched roles. He states that although these interactions may seem plausible, they are actually a way to conceal hidden motivations under scripted interactions with a predefined outcome. The book uses casual, often humorous phrases such as "See What You Made Me Do," "Why Don't You — Yes But," and "Ain't It Awful" as a way of briefly describing each game. Berne describes the "winner" of these mind games as the person that returns to the Adult ego-state first.
Reception and influence
Commercial performance
The book was a commercial success, and reached fifth place on The New York Times Best Seller list in March 1966. It has been described as one of the first "pop psychology" books. As of 1965, there were eight additional printings after the initial run of 3,000, and a total of 83,000 copies had been published. A Time magazine article titled "The Names of the Games" speculated that the book's popularity was due to its applications for both self-help and "cocktail party talk." Carol M. Taylor, in the Florida Communication Journal, noted that many concepts and terms from transactional analysis had made their way into everyday speech.
The book was republished as an audiobook in 2012.
Critical reception
Despite its popularity among lay readership, Berne's model of interpersonal relationships received criticism from academics. A 1974 article by Roger W. Hite in Speech Teacher noted that although its theoretical basis had inspired numerous subsequent publications, there was little research or scientific support for it. Ben L. Glancy in a review for Quarterly Journal of Speech described Berne's work as "parlor psychiatry and party-time psychoanalysis." He wrote that the book oversimplified interpersonal relationships and was "antithetical" to contemporary psychological research. Some scholars, including proponents of transactional analysis, have expressed concern over the popularization of oversimplified psychological concepts as self-help methods. Peter Hartley's Interpersonal Communication noted the relative lack of academic review and interest in popular mental healthcare as opposed to physical healthcare in his overview of transactional analysis.
See also
I'm OK – You're OK
References
External links
Official website
Popular psychology books
Transactional analysis
Self-help books
1964 non-fiction books
Books about game theory
Books about games
Play (activity)
1964 quotations
Grove Press books | Games People Play (book) | [
"Biology"
] | 831 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
4,255,729 | https://en.wikipedia.org/wiki/Kendall%20Houk | Kendall Newcomb Houk is a Distinguished Research Professor in Organic Chemistry at the University of California, Los Angeles. His research group studies organic, organometallic, and biological reactions using the tools of computational chemistry. This work involves quantum mechanical calculations, often with density functional theory, and molecular dynamics, either quantum dynamics for small systems or force fields such as AMBER, for solution and protein simulations.
Early life and education
K. N. Houk was born in Nashville, Tennessee, in 1943. He received his A.B. (1964), M.S. (1966), and Ph.D. (1968) degrees at Harvard, working with R. A. Olofson as an undergraduate and R. B. Woodward as a graduate student in the area of experimental tests of orbital symmetry selection rules. In 1968, he joined the faculty at Louisiana State University, becoming Professor in 1976.
In 1980, he moved to the University of Pittsburgh, and in 1986, he moved to UCLA. From 1988 to 1990, he was Director of the Chemistry Division of the National Science Foundation. He was Chairman of the UCLA Department of Chemistry and Biochemistry from 1991 to 1994.
Awards and achievements
Houk received the Akron American Chemical Society (ACS) Section Award in 1984. He was awarded the Arthur C. Cope Scholar Award of the ACS in 1988, the James Flack Norris Award in Physical Organic Chemistry of the ACS in 1991, the Schrödinger Medal of the World Association of Theoretically Oriented Chemists (WATOC) in 1998, the Tolman Medal of the Southern California Section of the ACS in 1998, the ACS Award for Computers in Chemical and Pharmaceutical Sciences in 2003, the Arthur C. Cope Award of the ACS in 2010, the Robert Robinson Award of the Royal Society of Chemistry in 2012, and UCLA's Glenn T. Seaborg Award in 2013. He received the 2021 Roger Adams Award of the ACS, the highest award in organic chemistry by the ACS, and the 2021 Foresight Institute Feynman Prize for Theory in Nanotechnology. He and his collaborators won the Royal Society of Chemistry 2021 Horizon Prize for the discovery of pericyclases.
His achievements have been recognized by a variety of U.S. and international fellowships. He was a Camille and Henry Dreyfus Teacher Scholar, a Fellow of the Alfred P. Sloan Foundation, the von Humboldt Foundation U.S. Senior Scientist in 1981, an Erskine Fellow in New Zealand in 1993, the Lady Davis Fellow at the Technion in Haifa, Israel in 2000, and a JSPS Fellow in Japan in 2001. He was elected to the American Academy of Arts and Sciences in 2002 and the International Academy of Quantum Molecular Sciences in 2003. He is a Fellow of the AAAS, the ACS, the WATOC, and the Royal Society of Chemistry. He was the Saul Winstein Chair in Organic Chemistry at UCLA from 2009 to 2021 and is now a Distinguished Research Professor. He was elected a member of the National Academy of Sciences in 2010. He was also elected a foreign member of the Chinese Academy of Sciences (CAS) in 2021.
Houk received the L.S.U. Distinguished Research Master Award in 1968, was named the Faculty Research Lecturer at UCLA for 1998, received the Bruylants Chair from the University of Louvain-la-Neuve in Belgium in 1998, and was awarded an honorary doctorate (Dr. rer. nat. h. c.) from the University of Essen in Germany in 1999. He is an Honorary Professor at the University of Queensland, Brisbane, Australia.
He is a 2002-2012 ISI Highly Cited Researcher.
Service
Houk has served on the Advisory Boards of the Chemistry Division of the National Science Foundation, the ACS Petroleum Research Fund, and a variety of journals, including Accounts of Chemical Research, the Journal of the American Chemical Society, the Journal of Organic Chemistry, Chemical and Engineering News, the Journal of Computational Chemistry, the Journal of Chemical Theory and Computation, Chemistry - A European Journal, Topics in Current Chemistry, the Chinese Journal of Chemistry, and the Israel Journal of Chemistry. From 2018 to 2021, he was the North American Co-chair of Chemistry – A European Journal.
He has been a member of the NIH Medicinal Chemistry Study Section and the NRC Board of Chemical Sciences and Technology. He was Chair of the Chemistry Section of the AAAS in 2000-2003 and served as Chair of the NIH Synthesis and Biological Chemistry-A Study Section in 2008.
He co-chaired the NIH-DOE-NSF Workshop on Building Strong Academic Chemistry Departments Through Gender Equity in 2006. He was a Senior Editor of Accounts of Chemical Research from 2005 to 2015.
He was Director of the UCLA Chemistry-Biology Interface Training Program, an NIH-supported training grant from 2002 to 2012 and is a member of the UCLA Molecular Biology Institute and the California NanoSystems Institute.
References
External links
Professor Houk's Website
His International Academy of Quantum Molecular Science page
ISI Author Profile
A Video interview of Professor Houk
UCLA chemists just broke a 100-year-old rule and say it’s time to rewrite the textbooks
1943 births
21st-century American chemists
Living people
Louisiana State University faculty
University of Pittsburgh faculty
Members of the International Academy of Quantum Molecular Science
Harvard University alumni
University of California, Los Angeles faculty
Theoretical chemists
Schrödinger Medal recipients
Computational chemists
Foreign members of the Chinese Academy of Sciences | Kendall Houk | [
"Chemistry"
] | 1,109 | [
"Theoretical chemists",
"American theoretical chemists"
] |
4,255,990 | https://en.wikipedia.org/wiki/Iberian%20ribbed%20newt | The Iberian ribbed newt, gallipato or Spanish ribbed newt (Pleurodeles waltl) is a newt endemic to the central and southern Iberian Peninsula and Morocco. It is the largest European newt species and it is also known for its sharp ribs which can puncture through its sides, and as such is also called the sharp-ribbed newt.
This species should not be confused with the different species with similar common name, the Iberian newt (Lissotriton boscai).
Description
The Iberian ribbed newt has tubercles running down each side. Through these, its sharp ribs can puncture. The ribs act as a defense mechanism, causing little harm to the newt. This mechanism could be considered as a primitive and rudimentary system of envenomation, but is completely harmless to humans. At the same time as pushing its ribs out the newt begins to secrete poison from special glands on its body. The poison coated ribs create a highly effective stinging mechanism, injecting toxins through the thin skin in predator's mouths. The newt's effective immune system and collagen coated ribs mean the pierced skin quickly regrows without infection.
In the wild, this amphibian grows up to , but rarely more than in captivity. Its color is dark gray dorsally, and lighter gray on its ventral side, with rust-colored small spots where its ribs can protrude. This newt has a flat, spade-shaped head and a long tail, which is about half its body length. Males are more slender and usually smaller than females. The larvae have bushy external gills and usually paler color patterns than the adults.
Pleurodeles waltl is more aquatic-dwelling than many other European tailed amphibians. Though they are quite able to walk on land, most rarely leave the water, living usually in ponds, cisterns, and ancient village wells that were common in Portugal and Spain in the past. They prefer cool, quiet, and deep waters, where they feed on insects, aquatic molluscs, worms, and tadpoles.
Sex determination
Sex determination is regulated by sex chromosomes, but can be overridden by temperature. Females have both sex chromosomes (Z and W), while males have two copies of the Z chromosome (ZZ). However, when ZW larvae are reared at 32 °C (90 °F) during particular stages of development (stage 42 to stage 54), they differentiate into functional neomales. Hormones play an important role during the sex determination process, and the newts can be manipulated to change sex by adding hormones or hormone-inhibitors to the water in which they are reared.
Aromatase, an estrogen-synthesizing enzyme which acts as a steroid hormone, plays a key role in sex determination in many non-mammalian vertebrates, including the Iberian ribbed newt. It is found in higher levels in the gonad–mesonephros complexes in ZW larvae than in their ZZ counterparts, although not in heat-treated ZW larvae. The increase occurs near the final stages of which their sex can be determined by temperature (stage 52).
Conservation
The IUCN has listed the Iberian ribbed newt as Near Threatened since its 2006 Red List. It received this listing because its wild populations appear to be in significant decline due to widespread habitat loss and the effects of invasive species, thus making the species close to qualifying for Vulnerable. Previously, in 2004, the species had been listed as Least Concern, the lowest ranking. This species is generally threatened through loss of aquatic habitats through drainage, agrochemical pollution, the impacts of livestock (in North African dayas), eutrophication, domestic and industrial contamination, golf courses, and infrastructure development. It has largely disappeared from coastal areas in Iberia and Morocco close to concentrations of tourism and highly populated areas such as Madrid's outskirts. Introduced fish such as the largemouth bass and crayfish (Procambarus clarkii) are known to prey on the eggs and larvae of this species, and are implicated in its decline. Mortality on roads has been reported to be a serious threat to some populations.
Space experiments
Pleurodeles waltl has been studied in space on at least six missions. The first Iberian ribbed newts were sent to space in 1985 on board Bion 7. The ten newts shared their journey with two rhesus macaques and ten rats, in an otherwise crewless Soviet Kosmos satellite. In 1992, Bion 10 also carried the newts on board, as did Bion 11 in 1996.
Pleurodeles waltl research was continued later in 1996 by French-led experiments on the Mir space station (Mir Cassiopée expedition), with follow-up studies in 1998 (Mir Pégase expedition) and 1999 (Mir Perseus expedition). Foton-M2 also carried the Iberian ribbed newt in 2005.
The newts were chosen because they are a good model organism for the study of microgravity. They are a good model organism because of the female's ability to retain live sperm in her cloaca for up to five months, allowing her to be inseminated on Earth, and later (in space) have fertilisation induced through hormonal stimulation. Another advantage to this species is their development is slow, so all the key stages of ontogenesis can be observed, from the oocyte to swimming tailbud embryos or larvae.
Studies looked at the newts' ability to regenerate (which was faster in space overall, and up to two times as fast in early stages) as well as the stages of development and reproduction in space.
On the ground, studies of hypergravity (up to 3g) on P. waltl fertilisation have also been conducted, as well as on the fertility of the space-born newts once they arrived back on Earth (they were fertile, and without problems).
Similar microgravity experiments have also been conducted for other species, namely the frog species Hyla japonica, and no effects on long term health are similarly observed.
Regeneration
Pleurodeles waltl is a model system for the study of adult regeneration. Similar to other salamanders, P. waltl are animals that can regenerate lost limbs, injured heart tissue, lesioned brain cells in addition to other body parts such as the eye lens and the spinal cord. The 20 Gb genome of P. waltl has been sequenced to facilitate research into the genetic basis of this extraordinary regenerative ability.
See also
List of Mir Expeditions
Animals in space
References
External links
Spanish ribbed newt - Pleurodeles waltl, BioFresh Cabinet of Freshwater Curiosities.
Caudata Culture: Pleurodeles waltl
Livingworld.org: Pleurodeles waltl
Bizarre newt uses ribs as weapons, BBC Earth News.
Animal models
Newts
Amphibians of North Africa
Amphibians of Europe
Amphibians described in 1830 | Iberian ribbed newt | [
"Biology"
] | 1,434 | [
"Model organisms",
"Animal models"
] |
4,256,069 | https://en.wikipedia.org/wiki/Comparison%20of%20application%20virtualization%20software | Application virtualization software refers to both application virtual machines and software responsible for implementing them. Application virtual machines are typically used to allow application bytecode to run portably on many different computer architectures and operating systems. The application is usually run on the computer using an interpreter or just-in-time compilation (JIT). There are often several implementations of a given virtual machine, each covering a different set of functions.
Comparison of virtual machines
JavaScript machines not included. See List of ECMAScript engines to find them.
The table here summarizes elements for which the virtual machine designs are intended to be efficient, not the list of abilities present in any implementation.
Virtual machine instructions process data in local variables using a main model of computation, typically that of a stack machine, register machine, or random access machine often called the memory machine. Use of these three methods is motivated by different tradeoffs in virtual machines vs physical machines, such as ease of interpreting, compiling, and verifying for security.
Memory management in these portable virtual machines is addressed at a higher level of abstraction than in physical machines. Some virtual machines, such as the popular Java virtual machines (JVM), are involved with addresses in such a way as to require safe automatic memory management by allowing the virtual machine to trace pointer references, and disallow machine instructions from manually constructing pointers to memory. Other virtual machines, such as LLVM, are more like traditional physical machines, allowing direct use and manipulation of pointers. Common Intermediate Language (CIL) offers a hybrid in between, allowing both controlled use of memory (like the JVM, which allows safe automatic memory management), while also allowing an 'unsafe' mode that allows direct pointer manipulation in ways that can violate type boundaries and permission.
Code security generally refers to the ability of the portable virtual machine to run code while offering it only a prescribed set of abilities. For example, the virtual machine might only allow the code access to a certain set of functions or data. The same controls over pointers which make automatic memory management possible and allow the virtual machine to ensure typesafe data access are used to assure that a code fragment is only allowed to certain elements of memory and cannot bypass the virtual machine itself. Other security mechanisms are then layered on top as code verifiers, stack verifiers, and other methods.
An interpreter allows programs made of virtual instructions to be loaded and run immediately without a potentially costly compile into native machine instructions. Any virtual machine which can be run can be interpreted, so the column designation here refers to whether the design includes provisions for efficient interpreting (for common usage).
Just-in-time compilation (JIT), refers to a method of compiling to native instructions at the latest possible time, usually immediately before or during the running of the program. The challenge of JIT is more one of implementation than of virtual machine design, however, modern designs have begun to make considerations to help efficiency. The simplest JIT methods simply compile to a code fragment similar to an offline compiler. However, more complex methods are often employed, which specialize compiled code fragments to parameters known only at runtime (see Adaptive optimization).
Ahead-of-time compilation (AOT) refers to the more classic method of using a precompiler to generate a set of native instructions which do not change during the runtime of the program. Because aggressive compiling and optimizing can take time, a precompiled program may launch faster than one which relies on JIT alone for execution. JVM implementations have mitigated this startup cost by initial interpreting to speed launch times, until native code fragments can be generated by JIT.
Shared libraries are a facility to reuse segments of native code across multiple running programs. In modern operating systems, this generally means using virtual memory to share the memory pages containing a shared library across different processes which are protected from each other via memory protection. It is interesting that aggressive JIT methods such as adaptive optimization often produce code fragments unsuitable for sharing across processes or successive runs of the program, requiring a tradeoff be made between the efficiencies of precompiled and shared code and the advantages of adaptively specialized code. For example, several design provisions of CIL are present to allow for efficient shared libraries, possibly at the cost of more specialized JIT code. The JVM implementation on OS X uses a Java Shared Archive to provide some of the benefits of shared libraries.
Comparison of application virtual machine implementations
In addition to the portable virtual machines described above, virtual machines are often used as an execution model for individual scripting languages, usually by an interpreter. This table lists specific virtual machine implementations, both of the above portable virtual machines, and of scripting language virtual machines.
See also
Application virtualization
Language binding
Foreign function interface
Calling convention
Name mangling
Application programming interface (API)
Application binary interface (ABI)
Comparison of platform virtualization software
List of ECMAScript engines
WebAssembly
References
application virtualization software
"Technology"
] | 1,041 | [
"Software comparisons",
"Computing comparisons"
] |
4,256,071 | https://en.wikipedia.org/wiki/Ammunition%20technician | An ammunition technician (AT) is a British Army soldier, formerly of the Royal Army Ordnance Corps but since 1993 of the Royal Logistic Corps, trained to inspect, repair, test, store, and modify all ammunition, guided missiles, and explosives used by the British Army. These technicians are also trained to use demolition to safely dispose of individual items of ammunition and explosives (EODs) or to conduct logistics disposal of bulk stocks of multi items. After gaining sufficient experience, those who show the appropriate qualities are given extra training to render safe improvised explosive devices (IEDs) by a process called improvised explosive device disposal. Experienced ATs may be called to give evidence as expert witnesses in criminal or coroner's courts in relation to ammunition or explosives or to EOD and IEDD duties.
History
Within the Royal Army Ordnance Corps, the receipt into service, storage, examination and issue of ammunition was possibly the oldest and most important function of the Corps. War could not be waged without ammunition, and to be waged successfully the ammunition had to be in every respect serviceable and dependable. The trade were previously called Ammunition Examiners (AE) and it was in the safeguarding of ammunition stockpiles during the wars that the Ammunition Examiner proved his worth. Promotion however was limited up to Warrant Officer Class 2 and at this stage the AE had to re-muster in the trade of RAOC Clerk in order to obtain higher rank. In 1948, the increased responsibility of the ammunition organization in Ordnance Services and in order to use the experience of these highly skilled tradesmen both as Warrant Officers and as Officers, the RAOC decided that promotion to WO1 would be introduced. RAOC Instruction No 466 introduced a new type of Quartermaster commission into the Royal Army Ordnance Corps to permit the Warrant Officer Ammunition Examiner being commissioned within the sphere of his normal employment on ammunition duties. These commissioned WOs would be called Assistant Inspecting Ordnance Officers (AIOOs).
Training
Training was initially undertaken at Bramley in Hampshire at the School of Ammunition. However the school moved to Kineton in 1974. To qualify to attend the Ammunition Technician Class 2 course, a soldier must first pass a pre-select course, during which time they will be assessed for suitability for role. The pre-selection includes psychometric testing, leadership skills, problem solving, resource planning and numeracy tests.
The basic AT course is 9 months in duration, the first part of which is spent at The Royal Military College of Science. The instruction within the Defence College of Management and Technology forms the first phase of the 9-month course. The aim of the first part is to provide the scientific and technical basis for further training in ammunition and explosives. The syllabus is an integrated study of mathematics, ballistics, explosives and general chemistry, physics, metallurgy, electronics and the design of armoured vehicles, artillery and infantry weapons. Time is also spent on nuclear, biological and chemical weapons design and the related protection systems. The remainder of the course covers conventional land munitions, explosive demolitions, conventional munitions disposal, guided weapons and explosive theory and safety. The majority of the course takes place at the Defence EOD Munitions Search Training Regiment (DEMS Trg Regt). Training previously took place at the Defence EOD Munitions Search School Kineton, DEMSS Kineton, and before that the Army School of Ammunition.
After 3 years gaining experience in trade, these technicians will be selected to return to Kineton to attend their Class 2 to Class 1 Upgrading Course, a 3-month course to broaden their technical knowledge and ability in munitions incident investigations, large scale demolitions and the disposal of chemical and biological munitions.
The Royal Logistic Corps Ammunition Technicians trained at Kineton are regarded throughout the world as the subject matter experts in the management of munitions and in Improvised Explosive Device (IED) disposal as a result of their combined experience in Palestine, Cyprus, Hong Kong, Northern Ireland, Iraq, Afghanistan, Aden, Malaya and other conflicts.
Commissioned officers are known as Ammunition Technical Officers and for the Sandhurst entrant, they complete a 17-month technical course in the rank of captain. ATs that become commissioned later in their service are also referred to as ATOs and will be granted the ato qualification by a testing board based on their experience, knowledge and competence.
Scope of Work
ATs are employed within the Royal Logistic Corps of the British Army and are the technical experts in storing and processing ammunition in base depots or field storage sites at home or on operations where safety in storage is paramount to overall force protection. Being an Ammunition Technician calls for intelligence, clear thinking and analytical skills, a calm outlook coupled with excellent attention to detail, discipline and courage. ATs develop specialist skills to look after the MoDs global stockpiles of ammunition by carrying out surveillance tasks, testing, inspecting, maintaining and disposing of all sorts of ammunition, from bullet clips, anti-aircraft guided weapon systems, mines, mortars, tank rounds and aircraft bombs. The Ammunition Technician profession is not exclusive to the UK MoD but similar technical personnel also exist in the Canadian, Australian RAAOC, and New Zealand RNZALR. Ammunition Technicians trained at the Defence EOD Munitions Search School, Kineton also work on loan service engagements in a number of African, Far Eastern and Middle Eastern armed forces.
In the United Kingdom, bomb disposal is carried out by two of three services (Royal Navy, and the Royal Logistic Corps and Royal Engineers of the British Army). The majority of counter terrorist bomb disposal and conventional munitions disposal activity in the UK is carried out by the Ammunition Technicians of the Royal Logistic Corps, the Royal Navy Clearance Divers deal with items below the high water mark and underwater tasks. The Royal Engineers deal with minefields, conventional, and German WWII aircraft bombs that occasionally turn up.
Operational Honours
The trade of Ammunition Technician is one of the most highly decorated professions in the British Army. The trade has been awarded 231 British gallantry awards as follows:
George Cross - 9
George Medal - 80
Conspicuous Gallantry Cross - 1
Military Cross - 3
Queen's Gallantry Medal - 100
MBE for Gallantry - 14
BEM for Gallantry - 23
In addition, Ammunition Technicians and Ammunition Technical Officers have also received almost 200 Mention in Dispatches, King's or Queen's Commendations for Bravery.
A further 100 awards of the MBE and BEM have been made to Ammunition Technicians for distinguished service within their trade.
These decorations have been awarded since 1940 and in places such as Aden, Afghanistan, Albania, Burma, Cyprus, Egypt, France, Germany, Gibraltar, Great Britain, Greece, Hong Kong, Iraq, Italy, Kuwait, Malaya, Malta, Northern Ireland, Pacific, Sicily and Yugoslavia.
George Cross
Staff Sergeant Sydney Rogerson GC. Royal Army Ordnance Corps. 11 October 1946.
Warrant Officer Class 1 Barry Johnson GC Royal Army Ordnance Corps. 6 November 1990
Staff Sergeant Olaf Sean Schmid GC Royal Logistic Corps 19 March 2010
Staff Sergeant Kim Spencer Hughes GC Royal Logistic Corps 19 March 2010
George Medal
Sergeant FW Pearce GM Royal Army Ordnance Corps 1944.
Sergeant AT Taylor GM Royal Army Ordnance Corps 8 March 1957.
Warrant Officer Class 2 BJC Reid GM Royal Army Ordnance Corps 1966.
Sergeant AE Dedman GM Royal Army Ordnance Corps 1972.
Warrant Officer Class 1 PES Gurney GM Royal Army Ordnance Corps 1973. Peter Gurney was later awarded a bar to his GM as a civilian.
Sergeant JA Anderson GM Royal Army Ordnance Corps 1980.
Warrant Officer Class 1 JRT Balding GM Royal Logistic Corps 1993, first GM awarded to member of the newly formed Royal Logistic Corps.
Warrant Officer Class 1 NB Thomsen GM Royal Logistic Corps 1995.
Warrant Officer Class 2 A Islam GM QGM Royal Logistic Corps 1997.
Warrant Officer Class 2 G O'Donnell Royal Logistic Corps 2006 and 2009. Posthumously awarded a second GM in March 2009 for "repeated and sustained acts of immense bravery" in Afghanistan.
Warrant Officer Class 2 K Ley GM Royal Logistic Corps 24 September 2010
Conspicuous Gallantry Cross
Staff Sergeant James Anthony Wadsworth CGC Royal Logistic Corps. 7 March 2008
Military Cross
Staff Sergeant Gareth Wood MC Royal Logistic Corps 24 September 2010
Queen's Gallantry Medal
Warrant Officer Class 1 Richard Gill QGM Royal Army Ordnance Corps 7 October 1974
Staff Sergeant Arthur Burns QGM Royal Army Ordnance Corps 6 January 1975
Warrant Officer Class 1 Thomas Edward Robinson QGM Royal Army Ordnance Corps 16 July 1975
Warrant Officer Class 2 Kevin Callaghan GM QGM Royal Army Ordnance Corps 20 October 1980
Warrant Officer Class 1 Ernest Lenard Bienkowski QGM Royal Army Ordnance Corps 14 April 1987
Warrant Officer Class 1 Robert John McLelland QGM, Royal Logistic Corps. 21 November 1994
WO1 Eamon Conrad Heakin QGM and Bar, BSM, Royal Logistic Corps. 7 September 2004. Eamon Heakin was later awarded a bar to his QGM in 2008 along with the Bronze Star Medal.
Warrant Officer Class 2 Colin Robert George Grant QGM, Royal Logistic Corps 11 September 2009
Warrant Officer Class 2 Ian Trevor Grey QGM Royal Army Ordnance Corps 14 April 1980
MBE for Gallantry
WO2 Henry Albert Vaughan MBE RAOC 16 February 1968.
WO1 Stanley Gordon Woods MBE RAOC 10 May 1968.
WO1 Frederick William Wood MBE RAOC 12 November 1968.
BEM for Gallantry
Sergeant Gordon Epps BEM RAOC 31 December 1946
WO2 Donald Frederick Tildesley BEM RAOC 4 November 1949.
Sergeant Donald Lawrence Birch BEM RAOC 10 May 1968.
Staff Sergeant David Greenaway BEM RAOC 18 Mar 1974.
RAOC/RLC EOD Memorial
Although a highly decorated trade, the price of recognition for Ammunition Technicians and Ammunition Technical Officers has been high. The Ammunition Technician trade has lost a number of their colleagues killed in action whilst undertaking operational Explosive Ordnance Disposal tasks worldwide. Ammunition Technicians proudly have their own memorial at Marlborough Barracks, Temple Herdewyke in Warwickshire, the home of the trade.
The idea of a memorial was initiated by the senior Warrant Officers of the trade and supported by the Director of Land Service Ammunition and his staff. A RAOC EOD Memorial Working Party was set up and reported progress to the Director General of Ordnance Services. The memorial was funded by RAOC central funds, donations from industry and from private donations from individual technicians within the trade. There were also some significant donations in kind, all the bricks for the enclosure and surrounding wall were gifted by a local brickworks and the shrubbery was donated and planted by a local nursery. The memorial was designed by the Fine Arts Department of Coventry Polytechnic and sculpted from local sandstone. The memorial represents a single bomb disposal operator, dressed in the bomb suit and holding his protective helmet. This scene is one that every EOD operator will recognise as being the last few moments before donning the helmet and becoming totally shut off from the team and ready to make the longest walk into danger towards an explosive device. The memorial is enclosed behind double wrought iron gates bearing the trade badges of the ATO and AT. The gates lead into a walled garden with 2 stone benches. The walls bear grey slate tablets, each engraved with the name of those killed, the date and location of the incident. A small brass plaque records the award of posthumous gallantry medals or decorations.
The memorial was formally opened during a dedication service on 23 June 1991. The service of dedication was led by the Chaplain General to the Forces, The Reverend James Harkness OBE QHC MA with readings by WO1 (Staff Sergeant Major) B Johnson GC and Major General PWE Istead CB OBE GM, Representative Colonel Commandant, RAOC. Amongst the guests at the service where the widows and families of many of those whose names appear on the memorial. A parade and the annual service of remembrance by members of the units based at Kineton is held at the EOD Memorial on Remembrance Sunday in November each year.
The EOD memorial is dedicated to the fallen ATO's and AT's of The Royal Army Ordnance Corps and The Royal Logistic Corps who through their selfless commitment, have singularly taken the "Longest Walk" in the service of their country but sadly, have not returned. Members of the ammunition trade have been killed in Cyprus, Hong Kong, Northern Ireland, England, Iraq and Afghanistan, "Sua Tela Tonanti / We Sustain"
In Memoriam
SSgt JA Culkin
SSgt R Kirby
Sgt CC Workman
Capt DA Stewardson
WO2 CJL Davies
SSgt CR Cracknell
Sgt AS Butcher
Maj BC Calladene
Capt JH Young
WO 2 WJ Clark
Sgt RE Hills
Capt BS Gritten
SSgt RF Beckett
Capt R Wilkinson
SSgt AN (Allan) Brammagh. 18 February, 1974. N.Ireland.
SSgt VI Rose
WO2 JA Maddocks
SSgt JC Crawshaw
WO2 E Garside
Cpl CW Brown
Sgt ME Walsh
WO2 M O'Neill
WO2 JR Howard
SSgt CD Muir
WO2 GJ O'Donnell GM+
Capt DM Shepherd GM
SSgt OSG Schmid GC
Capt D Read
SSgt BG Linley GM
Capt LJ Head
References
See also
Ammunition Technical Officer (ATO)
William DG Hunt
Bomb disposal
RAAOC - Royal Australian Ordnance Corps
British Army specialisms
Bomb disposal
Technicians
Royal Logistic Corps | Ammunition technician | [
"Chemistry"
] | 2,697 | [
"Explosion protection",
"Bomb disposal"
] |
4,256,076 | https://en.wikipedia.org/wiki/Queued%20Telecommunications%20Access%20Method | Queued Telecommunications Access Method (QTAM) is an IBM System/360 communications access method incorporating built-in queuing. QTAM was an alternative to the lower level Basic Telecommunications Access Method (BTAM).
History
QTAM was announced by IBM in 1965 as part of OS/360 and DOS/360 aimed at inquiry and data collection. As announced it also supported remote job entry (RJE) applications, called job processing, which was dropped by 1968. Originally QTAM supported the IBM 1030 Data Collection System, IBM 1050 Data Communications System, the IBM 1060 Data Communications System, the IBM 2671 Paper Tape Reader, AT&T 83B2 Selective Calling Stations, Western Union Plan 115A Outstations, and AT&T Teletype Model 33 or 35 Teletypewriters. By 1968 terminal support had expanded to include the IBM 2260 display complex, and the IBM 2740 communications terminal.
QTAM devices were attached to a System/360 multiplexor channel through an IBM 2701 Data Adapter or IBM 2702 Transmission Control. By 1968 support for the IBM 2703 Transmission Control Unit had been added.
QTAM was succeeded by TCAM which provided roughly similar facilities, but was not supported under DOS.
Structure
QTAM consists of a Message Control Program (MCP) and zero or more Message Processing Programs (MPP). The MCP handles communications with the terminals, identifies input messages and starts MPPs to process them as required. This is similar in concept to the much later internet service daemon (inetd) in unix and other systems.
The MCP is assembled by the user installation from a set of macros supplied by IBM. These macros define the lines and terminals comprising the system, the datasets required, and the procedures used to process received and transmitted messages.
The MPPs, incorporating logic to process the various messages, are supplied by the installation, and use standard OS/360 or DOS/360 data management macros OPEN, CLOSE, GET, and PUT. PL/I includes the TRANSIENT file declaration attribute to allow MPPs to be written in a high-level language.
References
Other sources
IBM mainframe operating systems | Queued Telecommunications Access Method | [
"Technology"
] | 443 | [
"Computing stubs",
"Computer network stubs"
] |
4,256,110 | https://en.wikipedia.org/wiki/Metre%20per%20hour | Metre per hour (American spelling: meter per hour) is a metric unit of both speed (scalar) and velocity (Vector (geometry)). Its symbol is m/h or m·h−1 (not to be confused with the imperial unit symbol mph). By definition, an object travelling at a speed of 1 m/h for an hour would move 1 metre.
The term is rarely used, however, as the units of metres per second and kilometres per hour are considered sufficient for the majority of circumstances. Metres per hour can nevertheless be convenient for documenting extremely slow-moving objects. A garden snail, for instance, typically moves at a speed of up to 47 metres per hour.
Conversions
3,600 m/h ≡ 1 m·s−1, the SI derived unit of speed, metre per second
1 m/h ≈ 0.00027778 m/s
1 m/h ≈ 0.00062137 mph ≈ 0.00091134 feet per second
How to convert
To convert from kilometers per hour to meters per hour, multiply the figure by 1,000 (hence the prefix kilo- from the ancient Greek language word for thousand).
To convert from meters per second to meters per hour, multiply the figure by 3,600 (that is 60 × 60, i.e. 60 seconds for each of the 60 minutes in an hour).
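A short sketch of these two conversions (the function names are illustrative):

```python
def kmh_to_mh(kmh):
    """Kilometres per hour to metres per hour: 1 km = 1,000 m."""
    return kmh * 1000

def ms_to_mh(ms):
    """Metres per second to metres per hour: 3,600 seconds in an hour."""
    return ms * 3600

print(kmh_to_mh(5))      # 5 km/h   -> 5000 m/h
print(ms_to_mh(0.013))   # 0.013 m/s -> 46.8 m/h, roughly a garden snail
```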
See also
Orders of magnitude (speed)
References
Units of velocity | Metre per hour | [
"Mathematics"
] | 289 | [
"Quantity",
"Units of velocity",
"Units of measurement"
] |
3,118,805 | https://en.wikipedia.org/wiki/Waterline%20length | A vessel's length at the waterline (abbreviated to L.W.L) is the length of a ship or boat at the level where it sits in the water (the waterline). The LWL will be shorter than the length of the boat overall (length overall or LOA) as most boats have bows and stern protrusions that make the LOA greater than the LWL. As a ship becomes more loaded, it will sit lower in the water and its ambient waterline length may change; but the registered L.W.L is measured from a default load condition.
Measurement
This measure is significant in determining several of a vessel's properties, such as how much water it displaces, where the bow and stern waves occur, hull speed, amount of bottom-paint needed, etc. Traditionally, a stripe called the "boot top" is painted around the hull just above the waterline.
In sailing boats, longer waterline length will usually enable a greater maximum speed, because it allows greater sail area, without increasing beam or draft. Greater beam and draft produce a larger wetted surface, thereby causing higher hull drag. In particular, any "displacement" or non-planing boat requires much greater power to accelerate beyond its hull speed, which is determined by the length of the waterline, and can be calculated using the formula Vmax (in knots) = 1.34 × √(LWL in feet). The hull speed is the speed at which the wavelength of the bow wave stretches out to the length of the waterline, thus dropping the boat into a hollow between the two waves. While small boats like canoes can overcome this effect fairly easily, heavier sailboats cannot.
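A brief worked example of the hull-speed rule of thumb quoted above, with illustrative waterline lengths:

```python
import math

def hull_speed_knots(lwl_feet):
    """Approximate maximum displacement hull speed: 1.34 x sqrt(LWL in feet)."""
    return 1.34 * math.sqrt(lwl_feet)

# A boat with a 25 ft waterline tops out near 6.7 knots;
# lengthening the effective waterline to 36 ft raises that to about 8 knots.
print(round(hull_speed_knots(25), 2))  # 6.7
print(round(hull_speed_knots(36), 2))  # 8.04
```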
Usage history
Since waterline length provides a practical limit for the speed of a typical sailboat, traditional rules for racing sailboats often classed boats using waterline length as a principal measure. To get around this rule, designers in the early 20th century began building racing sailboats with long overhangs fore and aft. This resulted in a nominally shorter waterline, but when the boats were sailed they heeled over, pulling the sides of the overhangs into the water as well and creating a much longer effective waterline, and thereby achieving much greater speed. The first recorded use of a line (documented by New Jersey marine museum) is by the small and rather unknown naval fleet of Thomas Jefferson.
See also
Length between perpendiculars
Notes
References
Nautical terminology
Shipbuilding
Ship measurements | Waterline length | [
"Engineering"
] | 507 | [
"Shipbuilding",
"Marine engineering"
] |
3,118,823 | https://en.wikipedia.org/wiki/Disk%20mirroring | In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.
In a disaster recovery context, mirroring data over long distance is referred to as storage replication. Depending on the technologies used, replication can be performed synchronously, asynchronously, semi-synchronously, or point-in-time. Replication is enabled via microcode on the disk array controller or via server software. It is typically a proprietary solution, not compatible between various data storage device vendors.
Mirroring is typically only synchronous. Synchronous writing typically achieves a recovery point objective (RPO) of zero lost data. Asynchronous replication can achieve an RPO of just a few seconds while the remaining methodologies provide an RPO of a few minutes to perhaps several hours.
Disk mirroring differs from file shadowing that operates on the file level, and disk snapshots where data images are never re-synced with their origins.
Overview
Typically, mirroring is provided in either hardware solutions such as disk arrays, or in software within the operating system (such as Linux mdadm and device mapper). Additionally, file systems like Btrfs or ZFS provide integrated data mirroring. There are additional benefits from Btrfs and ZFS, which maintain both data and metadata integrity checksums, making themselves capable of detecting bad copies of blocks, and using mirrored data to pull up data from correct blocks.
There are several scenarios for what happens when a disk fails. In a hot swap system, in the event of a disk failure, the system itself typically diagnoses a disk failure and signals a failure. Sophisticated systems may automatically activate a hot standby disk and use the remaining active disk to copy live data onto this disk. Alternatively, a new disk is installed and the data is copied to it. In less sophisticated systems, the system is operated on the remaining disk until a spare disk can be installed.
The copying of data from one side of a mirror pair to another is called rebuilding or, less commonly, resilvering.
Mirroring can be performed site to site by rapid data links, for example fibre optic links, which over distances of 500 m or so can maintain adequate performance to support real-time mirroring. Over longer distances or slower links, mirrors are maintained using an asynchronous copying system. For remote disaster recovery systems, this mirroring may not be done by integrated systems but simply by additional applications on primary and secondary machines.
Additional benefits
In addition to providing an additional copy of the data for the purpose of redundancy in case of hardware failure, disk mirroring can allow each disk to be accessed separately for reading purposes. Under certain circumstances, this can significantly improve performance as the system can choose for each read which disk can seek most quickly to the required data. This is especially significant where there are several tasks competing for data on the same disk, and thrashing (where the switching between tasks takes up more time than the task itself) can be reduced. This is an important consideration in hardware configurations that frequently access the data on the disk.
See also
Disk cloning
Distributed Replicated Block Device (DRBD)
Mirror site
Stable storage
References
Fault-tolerant computer systems
RAID | Disk mirroring | [
"Technology",
"Engineering"
] | 693 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
3,119,260 | https://en.wikipedia.org/wiki/Mentalism%20%28psychology%29 | In psychology, mentalism refers to those branches of study that concentrate on perception and thought processes, for example: mental imagery, consciousness and cognition, as in cognitive psychology. The term mentalism has been used primarily by behaviorists who believe that scientific psychology should focus on the structure of causal relationships to reflexes and operant responses or on the functions of behavior.
Neither mentalism nor behaviorism are mutually exclusive fields; elements of one can be seen in the other, perhaps more so in modern times compared to the advent of psychology over a century ago.
Classical mentalism
Psychologist Allan Paivio used the term classical mentalism to refer to the introspective psychologies of Edward Titchener and William James. Despite Titchener being concerned with structure and James with function, both agreed that consciousness was the subject matter of psychology, making psychology an inherently subjective field.
The rise of behaviorism
Concurrently thriving alongside mentalism since the inception of psychology was the functional perspective of behaviorism. However, it was not until 1913, when psychologist John B. Watson published his article "Psychology as the Behaviorist Views It" that behaviorism began to have a dominant influence. Watson's ideas sparked what some have called a paradigm shift in American psychology, emphasizing the objective and experimental study of human behavior, rather than subjective, introspective study of human consciousness. Behaviorists considered that the study of consciousness was impossible to do, or unnecessary, and that the focus on it to that point had only been a hindrance to the field reaching its full potential. For a time, behaviorism would go on to be a dominant force driving psychological research, advanced by the work of scholars including Ivan Pavlov, Edward Thorndike, Watson, and especially B.F. Skinner.
The new mentalism
Critical to the successful revival of the mind or consciousness as a primary focus of study in psychology (and in related fields such as cognitive neuroscience) were technological and methodological advances, which eventually allowed for brain mapping, among other new techniques. These advances provided an experimental way to begin to study perception and consciousness.
However, the cognitive revolution did not kill behaviorism as a research program; in fact, research on operant conditioning actually grew at a rapid pace during the cognitive revolution. In 1994, scholar Terry L. Smith surveyed the history of radical behaviorism and concluded that "even though radical behaviorism may have been a failure, the operant program of research has been a success. Furthermore, operant psychology and cognitive psychology complement one another, each having its own domain within which it contributes something valuable to, but beyond the reach of, the other."
See also
Cartesianism
Cognitivism (psychology)
Dualism (philosophy of mind)
Property dualism
References
Further reading
See also the six responses to Burgos in volume 44 of Behavior & Philosophy.
Cognitive psychology
Philosophy of psychology
Psychological theories | Mentalism (psychology) | [
"Biology"
] | 581 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
3,119,345 | https://en.wikipedia.org/wiki/Folch%20solution | A Folch solution is a solution containing chloroform and methanol, usually in a 2:1 (vol/vol) ratio. One of its uses is in separating polar from nonpolar compounds, for example separating nonpolar lipids from polar proteins and carbohydrates in blood serum.
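As a quick arithmetic aid (a hypothetical Python helper illustrating only the 2:1 vol/vol ratio; it is not a laboratory protocol, and the 300 mL total is an arbitrary example), the component volumes for a given total volume can be computed as follows:

# Split a total volume into chloroform and methanol for a 2:1 (vol/vol) Folch solution.
def folch_volumes(total_ml, ratio=(2, 1)):
    parts = sum(ratio)
    chloroform = total_ml * ratio[0] / parts
    methanol = total_ml * ratio[1] / parts
    return chloroform, methanol

print(folch_volumes(300))  # (200.0, 100.0) -> 200 mL chloroform, 100 mL methanol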
References
Reagents for organic chemistry
Solutions | Folch solution | [
"Chemistry"
] | 77 | [
"Homogeneous chemical mixtures",
"Organic compounds",
"Reagents for organic chemistry",
"Solutions",
"Organic compound stubs",
"Organic chemistry stubs"
] |
3,119,517 | https://en.wikipedia.org/wiki/The%20Journal%20of%20Physical%20Chemistry%20A | The Journal of Physical Chemistry A is a scientific journal which reports research on the chemistry of molecules - including their dynamics, spectroscopy, kinetics, structure, bonding, and quantum chemistry. It is published weekly by the American Chemical Society.
Before 1997 the title was simply Journal of Physical Chemistry. Owing to the ever-growing amount of research in the area, in 1997 the journal was split into Journal of Physical Chemistry A (molecular theoretical and experimental physical chemistry) and The Journal of Physical Chemistry B (solid state, soft matter, liquids, etc.). Beginning in 2007, the latter underwent a further split, with The Journal of Physical Chemistry C now being dedicated to nanotechnology, molecular electronics, and related subjects.
According to the Journal Citation Reports, the journal has an impact factor of 2.7 for 2023.
Editors-in-chief
1896–1932 Wilder Dwight Bancroft, Joseph E. Trevor
1933–1951 S. C. Lind
1952–1964 William A. Noyes
1965–1969 F. T. Wall
1970–1980 Bryce Crawford
1980–2004 Mostafa El-Sayed
2005–2019 George C. Schatz
2020–present Joan-Emma Shea
Popular culture
Sheldon Cooper, a fictional physicist from the television series The Big Bang Theory, appeared on the cover of a fictional issue of the journal.
See also
The Journal of Physical Chemistry B
The Journal of Physical Chemistry C
The Journal of Physical Chemistry Letters
Russian Journal of Physical Chemistry A
Russian Journal of Physical Chemistry B
External links
References
American Chemical Society academic journals
English-language journals
Physical chemistry journals
Weekly journals
Academic journals established in 1997 | The Journal of Physical Chemistry A | [
"Chemistry"
] | 321 | [
"Physical chemistry journals",
"Physical chemistry stubs"
] |
3,119,646 | https://en.wikipedia.org/wiki/The%20Journal%20of%20Physical%20Chemistry%20B | The Journal of Physical Chemistry B is a peer-reviewed scientific journal that covers research on several fields of material chemistry (macromolecules, soft matter, and surfactants) as well as statistical mechanics, thermodynamics, and biophysical chemistry. It has been published weekly since 1997 by the American Chemical Society. According to the Journal Citation Reports, the journal had an impact factor of 3.5 for 2023.
Due to the growing amount of research in the fields it covers, the journal was split into two at the beginning of 2007, with The Journal of Physical Chemistry C specializing in nanostructures, the structures and properties of surfaces and interfaces, electronics, and related topics.
List of editors-in-chief
The following persons have been editor-in-chief:
1997–2005 Mostafa El-Sayed
2005–2019 George C. Schatz
2020–Present Joan-Emma Shea
See also
The Journal of Physical Chemistry A
The Journal of Physical Chemistry C
The Journal of Physical Chemistry Letters
External links
References
American Chemical Society academic journals
Weekly journals
English-language journals
Academic journals established in 1997
Physical chemistry journals | The Journal of Physical Chemistry B | [
"Chemistry"
] | 231 | [
"Physical chemistry journals",
"Physical chemistry stubs"
] |
3,119,811 | https://en.wikipedia.org/wiki/Wolf%E2%80%93Rayet%20nebula | A Wolf-Rayet nebula is a type of nebula created from stellar winds expelled by Wolf-Rayet stars. Wolf-Rayet stars are very hot, highly luminous, and rapidly evolving massive stars that are fusing helium or heavier elements in their cores.
Characteristics
The strong, dense stellar winds from Wolf-Rayet stars consist of streams of charged particles traveling at speeds of thousands of kilometers per second. These winds slam into the surrounding interstellar medium, generating shock waves that heat and ionize the gas and dust, causing it to glow and emit radiation in visible and other wavelengths.
This process creates an enveloping nebula around the Wolf-Rayet star with a distinctive multi-ring structure. The nebula contains shells and cavities carved out by the stellar winds, surrounded by dense swept-up material. These structures become visible due to fluorescent emission from ionized gas as well as scattered starlight.
Notable examples
Some well-studied Wolf-Rayet nebulae include:
NGC 6888 (Crescent Nebula)
NGC 2359 (Thor's Helmet Nebula)
M1-67 (Luminous Blue Variable Nebula)
NGC 3199 (Nebula around WR 18)
These nebulae exhibit intricate structures revealed in visible light as well as infrared, X-ray, and other wavelengths, providing insight into the powerful stellar winds and evolutionary processes around Wolf-Rayet stars.
Formation and evolution
Wolf-Rayet stars represent a brief late stage in the evolution of some very massive stars. They have shed their outer hydrogen envelopes and their stellar winds now consist of heavier elements like helium, carbon, nitrogen, and oxygen.
As a Wolf-Rayet star evolves and loses mass, its winds shape the surrounding gas and dust into bubble-like nebular structures. The inner region forms from the current stellar wind, while outer shells are remnants of previous mass-loss episodes.
Eventually the star will shed more matter, ending its life in a spectacular supernova explosion that will dramatically alter the Wolf-Rayet nebula's structure and composition.
References
Nebulae | Wolf–Rayet nebula | [
"Astronomy"
] | 411 | [
"Nebulae",
"Astronomical objects"
] |
3,119,918 | https://en.wikipedia.org/wiki/Zoopharmacognosy | Zoopharmacognosy is a behaviour in which non-human animals self-medicate by selecting and ingesting or topically applying plants, soils and insects with medicinal properties, to prevent or reduce the harmful effects of pathogens, toxins, and even other animals. The term derives from Greek roots zoo ("animal"), pharmacon ("drug, medicine"), and gnosy ("knowing").
An example of zoopharmacognosy occurs when dogs eat grass to induce vomiting. However, the behaviour is more diverse than this. Animals ingest or apply non-foods such as clay, charcoal and even toxic plants and invertebrates, apparently to prevent parasitic infestation or poisoning.
Whether animals truly self-medicate remains a somewhat controversial subject because early evidence is mostly circumstantial or anecdotal. However, more recent examinations have adopted an experimental, hypothesis-driven approach.
The methods by which animals self-medicate vary, but can be classified according to function as prophylactic (preventative, before infection or poisoning) or therapeutic (after infection, to combat the pathogen or poisoning). The behaviour is believed to have widespread adaptive significance.
History and etymology
In 1978, Janzen suggested that vertebrate herbivores might benefit medicinally from the secondary metabolites in their plant food.
In 1993, the term "zoopharmacognosy" was coined, derived from the Greek roots zoo ("animal"), pharma ("drug"), and gnosy ("knowing"). The term gained popularity from academic works and in a book by Cindy Engel entitled Wild Health: How Animals Keep Themselves Well and What We Can Learn from Them.
Mechanisms
The anti-parasitic effect of zoopharmacognosy, whether achieved by deglutition or by ingestion, could occur by at least two mechanisms. First, ingested material may have pharmacological antiparasitic properties, such as phytochemicals that decrease the ability of worms to attach to the mucosal lining of the intestines, or chemotaxis that attracts worms into the folds of leaves. Additionally, many plants have trichomes, often presented as hooked and spiky hairs, that can attach to parasites and dislodge them from the intestines. Another possible mode of action is that the ingested material may initiate a purging response of the gastrointestinal tract by rapidly inducing diarrhoea. This substantially decreases gut transit time, causing worm expulsion and interrupting the life cycle of parasites. This, or a similar, mechanism could explain undigested materials in the faeces of various animals such as birds, carnivores and primates.
The topical application of materials is often used by animals to treat wounds or repel insects. When plant leaves are chewed and then directly rubbed onto fur, compounds from said leaves are released for use. These compounds can often be analgesic or antiparasitic in nature. In regards to an insect repellant, the secondary metabolites traditionally used by plants to deter herbivores and insects from eating them can be used by animals as a protective measure. By interfering with neuroreceptors, these secondary metabolites can specifically act as olfactory cues for insects to avoid a certain source.
Methods of self-medication
The three reported methods of self-medication are deglutition, ingestion, and topical application. When using one of these methods while appearing well, an animal may be using self-medication as a prophylactic measure. When it is unwell, the animal could be using self-medication as a curative measure.
Deglutition
Some examples of zoopharmacognosy are demonstrated when animals, namely apes, swallow materials whole instead of chewing and ingesting them.
Chimpanzees
Wild chimpanzees sometimes seek leaves of the Aspilia plant. These contain thiarubrine-A, a chemical active against intestinal nematode parasites. Because this compound is quickly broken down by the stomach, chimpanzees will pick up the Aspilia leaves and, rather than chewing them, they roll them around in their mouths, sometimes for as long as 25 seconds. They then swallow the capsule-like leaves whole. Afterwards, the trichomes of the leaves can attach to any intestinal parasites, namely the nodular worm (Oesophagostomum stephanostomum) and tapeworm (Bertiella studeri), and allow the chimpanzee to physically expel the parasites. As many as 15 to 35 Aspilia leaves may be used in each bout of this behaviour, particularly in the rainy season when there is an abundance of many parasitic larvae that can cause an increased risk of infection.
Chimpanzees sometimes eat the leaves of the herbaceous Desmodium gangeticum. Undigested, non-chewed leaves were recovered in 4% of faecal samples of wild chimpanzees and clumps of sharp-edged grass leaves in 2%. The leaves have a rough surface or sharp-edges and the fact they were not chewed and excreted whole indicates they were not ingested for nutritional purposes. Furthermore, this leaf-swallowing was restricted to the rainy season when parasite re-infections are more common, and parasitic worms (Oesophagostomum stephanostomum) were found together with the leaves.
Bonobos sometimes swallow non-chewed stem-strips of Manniophyton fulvum. Despite the plant being abundantly available all year, M. fulvum is ingested only at specific times, in small amounts, and by a small proportion of bonobos in each group, demonstrating that it is indeed only utilized when the bonobos are unwell.
Monkeys
Tamarins were observed swallowing the large seeds of the fruit they regularly ingest. Although they are consumed along with the rest of the fruit, these seeds have no nutritional value for the monkeys. Since tamarins are routinely infected by trematodes, cestodes, nematodes, and acanthocephalans, there is speculation that the deliberate swallowing of these large seeds can help dislodge the parasites from the monkey's body.
Bears
Similar to the wild chimpanzees, Alaskan brown bears will swallow whole Carex leaves in the springtime to ensure the complete expulsion of parasites during their hibernation. Specifically, as tapeworms thrive off previously digested nutrients in the gut, the rough Carex leaves will lacerate their scolices, facilitating the defecation process. The proactive swallowing of these leaves will ensure low levels of active parasites within a hibernating bear.
Ingestion
Many examples of zoopharmacognosy involve an animal ingesting a substance with (potential) medicinal properties.
Birds
Many parrot species in the Americas, Africa, and Papua New Guinea consume kaolin or clay, which both releases minerals and absorbs toxic compounds from the gut.
Great bustards eat blister beetles of the genus Meloe, possibly to decrease parasite load in the digestive system and possibly to increase the sexual arousal of males; cantharidin, the toxic compound in blister beetles, can kill a great bustard if too many beetles are ingested. Some plants selected in the mating season showed in-vitro activity against laboratory models of parasites and pathogens.
Invertebrates
Woolly bear caterpillars (Grammia incorrupta) are sometimes lethally endoparasitised by tachinid flies. The caterpillars ingest plant toxins called pyrrolizidine alkaloids, which improve survival by conferring resistance against the flies. Crucially, parasitised caterpillars are more likely than non-parasitised caterpillars to specifically ingest large amounts of pyrrolizidine alkaloids, and excessive ingestion of these toxins reduces the survival of non-parasitised caterpillars. These three findings are all consistent with the adaptive plasticity theory.
The tobacco hornworm ingests nicotine which reduces colony growth and toxicity of Bacillus thuringiensis, leading to increased survival of the hornworm.
Ants
Ants infected with Beauveria bassiana, a fungus, selectively consume harmful substances (reactive oxygen species, ROS) upon exposure to a fungal pathogen, yet avoid these in the absence of infection.
Mammals
Great apes often consume plants that have no nutritional values but which have beneficial effects on gut acidity or combat intestinal parasitic infection.
Chimpanzees sometimes select bitter leaves for chewing. Parasite infection drops noticeably after chimpanzees chew the bitter pith of Vernonia amygdalina, which contains sesquiterpene lactones and steroid glucosides that are particularly effective against Schistosoma, Plasmodium and Leishmania. Specifically, these compounds can induce paralysis in the parasites and impair their ability to absorb nutrients, move, and reproduce. Chimpanzees do not consume this bitter material on a regular basis, but when they do, it is often in small amounts by individuals that appear ill. Jane Goodall witnessed chimpanzees eating particular bushes, apparently to make themselves vomit.
Chimpanzees, bonobos, and gorillas eat the fruits of Aframomum angustifolium. Laboratory assays of homogenized fruit and seed extracts show significant anti-microbial activity.
Illustrating the medicinal knowledge of some species, apes have been observed selecting a particular part of a medicinal plant by taking off leaves and breaking the stem to suck out the juice.
Anubis baboons (Papio anubis) and hamadryas baboons (Papio hamadryas) in Ethiopia use fruits and leaves of Balanites aegyptiaca to control schistosomiasis. Its fruits contain diosgenin, a hormone precursor that presumably hinders the development of schistosomes.
African elephants (Loxodonta africana) apparently self-medicate to induce labour by chewing on the leaves of a particular tree from the family Boraginaceae; Kenyan women brew a tea from this tree for the same purpose.
White-nosed coatis (Nasua narica) in Panama take the menthol-scented resin from freshly scraped bark of Trattinnickia aspera (Burseraceae) and vigorously rub it into their own fur or that of other coatis, possibly to kill ectoparasites such as fleas, ticks, and lice, as well as biting insects such as mosquitoes; the resin contains triterpenes α- and β-amyrin, the eudesmane derivative β-selinene, and the sesquiterpene lactone 8β-hydroxyasterolide.
Domestic cats and dogs often select and ingest plant material either to induce vomiting or for anti-parasitic purposes.
Indian wild boars selectively dig up and eat the roots of pigweed which humans use as an anthelmintic. Mexican folklore indicates that pigs eat pomegranate roots because they contain an alkaloid that is toxic to tapeworms.
A study on domestic sheep (Ovis aries) has provided clear experimental proof of self-medication via individual learning. Lambs in a treatment group were allowed to consume foods and toxins (grain, tannins, oxalic acid) that lead to malaise (negative internal states) and then allowed to eat a substance known to alleviate each malaise (sodium bentonite, polyethylene glycol and dicalcium phosphate, respectively). Control lambs ate the same foods and medicines, but this was disassociated temporally so they did not recuperate from the illness. After the conditioning, lambs were fed grain or food with tannins or oxalates and then allowed to choose the three medicines. The treatment animals preferred to eat the specific compound known to rectify the state of malaise induced by the food previously ingested. However, control animals did not change their pattern of use of the medicines, irrespective of the food consumed before the choice. Other ruminants learn to self-medicate against gastrointestinal parasites by increasing consumption of plant secondary compounds with antiparasitic actions.
Standard laboratory cages prevent mice from performing several natural behaviours for which they are highly motivated. As a consequence, laboratory mice sometimes develop abnormal behaviours indicative of emotional disorders such as depression and anxiety. To improve welfare, these cages are sometimes enriched with items such as nesting material, shelters and running wheels. Sherwin and Olsson tested whether such enrichment influenced the consumption of Midazolam, a drug widely used to treat anxiety in humans. Mice in standard cages, standard cages but with unpredictable husbandry, or enriched cages, were given a choice of drinking either non-drugged water or a solution of the Midazolam. Mice in the standard and unpredictable cages drank a greater proportion of the anxiolytic solution than mice from enriched cages, presumably because they had been experiencing greater anxiety. Early studies indicated that autoimmune (MRL/lpr) mice readily consume solutions with cyclophosphamide, an immunosuppressive drug that prevents inflammatory damage to internal organs. However, further studies provided contradictory evidence.
During the cold and rainy seasons, the crested porcupines (Hystrix cristata) in Central Italy tend to become infected by seven different species of ectoparasites and seven different species of endoparasites. During this time, it is observed that these porcupine populations actively sought out a rather large variety of medicinal plants, mostly with antiparasitic properties, to consume. When ingested, these plants appeared to be relieving the symptoms of the infections, such as inflammation.
Geophagy
Many animals eat soil or clay, a behaviour known as geophagy. Clay is the primary ingredient of kaolin. It has been proposed that for primates, there are four hypotheses relating to geophagy in alleviating gastrointestinal disorders or upsets:
soils adsorb toxins such as phenolics and secondary metabolites
soil ingestion has an antacid action and adjusts the gut pH
soils act as an antidiarrhoeal agent
soils counteract the effects of endoparasites.
Furthermore, two hypotheses pertain to geophagy as a means of supplementing minerals and elements.
Tapirs, forest elephants, colobus monkeys, mountain gorillas and chimpanzees seek out and eat clay, which absorbs intestinal bacteria and their toxins and alleviates stomach upset and diarrhoea. Cattle eat clay-rich termite mound soil, which deactivates ingested pathogens or fruit toxins.
Topical application
Some animals apply substances with medicinal properties to their skin. Again, this can be prophylactic or curative. In some cases, this is known as self-anointing.
Mammals
A female capuchin monkey in captivity was observed using tools covered in a sugar-based syrup to groom her wounds and those of her infant.
North American brown bears (Ursus arctos) make a paste of Osha roots (Ligusticum porteri) and saliva and rub it through their fur to repel insects or soothe bites. This plant, locally known as "bear root", contains 105 active compounds, such as coumarins that may repel insects when topically applied. Navajo Indians are said to have learned to use this root medicinally from the bear for treating stomach aches and infections.
A range of primates rub millipedes onto their fur and skin; millipedes contain benzoquinones, compounds known to be potently repellent to insects. As the millipede secretions are also psychoactive, the behavior may also be a form of recreational drug use in animals.
Tufted capuchins (Cebus apella) rub various parts of their body with carpenter ants (Camponotus rufipes) or allow the ants to crawl over them, in a behaviour called anting. The capuchins often combine anting with urinating into their hands and mixing the ants with the urine.
Callicebus oenanthe titi monkeys have been observed rubbing leaves of Piper aduncum on their fur and abdominal areas. Since these leaves contain insecticides such as dillapiole and other phenylpropanoids, it is speculated that this fur-rubbing is a preventative measure to ward off insects. Additionally, another species of titi monkey, Plecturocebus cupreus, has been seen rubbing its fur with the leaves of Psychotria, whose compounds have antiviral, antifungal, and analgesic properties.
A male Sumatran orangutan known to researchers as Rakus "appeared to have used the plant intentionally" when he chewed up leaves of the "antibacterial, anti-inflammatory, anti-fungal, antioxidant, pain-killing and anticarcinogenic" vine Fibraurea tinctoria and applied the masticated plant material to an open wound on his face. According to primatologists who had been observing Rakus at a nature preserve, "Five days later the facial wound was closed, while within a few weeks it had healed, leaving only a small scar".
Birds
More than 200 species of song birds rub insects, usually ants, on their feathers and skin, a behaviour known as anting. Birds either grasp ants in their bill and wipe them vigorously along the spine of each feather down to the base, or sometimes roll in ant hills twisting and turning so the ants crawl through their feathers. Birds most commonly use ants that spray formic acid. In laboratory tests, this acid is harmful to feather lice. Its vapour alone can kill them.
Some birds select nesting material rich in anti-microbial agents that may protect themselves and their young from harmful infestations or infections. European starlings (Sturnus vulgaris) preferentially select and line their nests with wild carrot (Daucus carota); chicks from nests lined with this have greater levels of haemoglobin compared to those from nests which are not, although there is no difference in the weight or feather development of the chicks. Laboratory studies show that wild carrot substantially reduces the emergence of the instars of mites. House sparrows (Passer domesticus) have been observed to line their nests with materials from the neem tree (Azadirachta indica) but change to quinine-rich leaves of the Krishnachua tree (Caesalpinia pulcherrima) during an outbreak of malaria; quinine controls the symptoms of malaria.
Social zoopharmacognosy
Zoopharmacognosy is not always exhibited in a way that benefits the individual. Sometimes the target of the medication is the group or the colony.
Wood ants (Formica paralugubris) often incorporate large quantities of solidified conifer resin into their nests. Laboratory studies have shown this resin inhibits the growth of bacteria and fungi in a context mimicking natural conditions. The ants show a strong preference for resin over twigs and stones, which are building materials commonly available in their environment. There is seasonal variation in the foraging of ants: the preference for resin over twigs is more pronounced in spring than in summer, whereas in autumn the ants collect twigs and resin at equal rates. The relative collection rate of resin versus stones does not depend on infection with the entomopathogenic fungus Metarhizium anisopliae in laboratory conditions, indicating the resin collection is prophylactic rather than therapeutic.
Honey bees also incorporate plant-produced resins into their nest architecture, which can reduce chronic elevation of an individual bee's immune response. When colonies of honey bees are challenged with the fungal parasite (Ascophaera apis), the bees increase their resin foraging. Additionally, colonies experimentally enriched with resin have decreased infection intensities of the fungus.
Transgenerational zoopharmacognosy
Zoopharmacognosy can be classified depending on the target of the medication. Some animals lay their eggs in such a way that their offspring are the target of the medication.
Adult monarch butterflies preferentially lay their eggs on toxic plants such as milkweed which reduce parasite growth and disease in their offspring caterpillars. This has been termed transgenerational therapeutic medication.
When detecting endoparasitoid wasps, fruit flies (Drosophila melanogaster) lay their eggs in leaves with high ethanol content as a means of protection for their offspring. These wasps, especially those of the Leptopilina genus, will inject their eggs in approximately 80% of fruit fly larvae. As these wasp eggs develop, the emerging wasp larvae feed extensively on the fly larvae from within. To combat this, the fruit fly larvae consume a large amount of ethanol from the food source to medicate themselves after wasp infection. Specifically, as the wasps consume more of the fly larvae, they unknowingly ingest more ethanol, which promptly leads to their deaths. This has been termed transgenerational prophylaxis.
Value to humans
In an interview with Neil Campbell, Eloy Rodriguez describes the importance of biodiversity to medicine:
Media
2002 British documentary television series Weird Nature episode 6 "Peculiar Potions" documents variety of animals engaging in intoxication or zoopharmacognosy.
2014 documentary Dolphins - Spy in the Pod shows dolphins getting intoxicated on pufferfish.
See also
Biomimicry
List of abnormal behaviours in animals
Mineral lick
Pica (disorder)
Wound licking
References
Further reading
Samorini, Giorgio (2002) Animals and Psychedelics: The Natural World And The Instinct To Alter Consciousness
Ethology
Clinical pharmacology | Zoopharmacognosy | [
"Chemistry",
"Biology"
] | 4,543 | [
"Pharmacology",
"Behavior",
"Clinical pharmacology",
"Behavioural sciences",
"Ethology"
] |
3,120,256 | https://en.wikipedia.org/wiki/Pi%20Tau%20Sigma | Pi Tau Sigma () is an international honor society in the field of mechanical engineering, with most chapters established in the United States. It honors mechanical engineering students who have exemplified the "principles of scholarship, character and service..." in the mechanical engineering profession.
History
Pi Tau Sigma came into being on March 16, 1915, at the University of Illinois. A similar organization was formed November 15, 1915, at the University of Wisconsin. The two schools then met to join their societies, doing so in Chicago on March 12, 1916. To date, 167 chapters have been inducted into the organization throughout the United States, with 157 still active.
Membership
Both undergraduate and graduate students are eligible to join Pi Tau Sigma based on academic achievement. Juniors in the top 25% of their class and seniors in the top 35% of their class, based on grades, are invited to join. Membership fees are due at initiation, and membership lasts a lifetime.
Pi Tau Sigma members are chosen on a basis of sound engineering ability, scholarship, personality, and probable future success in their chosen field of mechanical engineering. There are three grades of membership: Honorary, Graduate, and Active. Honorary members are technical graduates actively engaged in engineering work, or members of mechanical engineering faculties. Graduate membership is conferred upon persons who would have been eligible had Pi Tau Sigma been established earlier in schools not having chapters, or upon those continuing graduate study. Active members are selected from the junior and senior mechanical engineering classes at their respective schools, whose mechanical engineering curriculum must be accredited by ABET.
Insignia
The colors of the society are murrey and azure. The flower is the white rose.
References
Association of College Honor Societies
Honor societies
Engineering honor societies
Student organizations established in 1915
1915 establishments in Illinois | Pi Tau Sigma | [
"Engineering"
] | 357 | [
"Engineering societies",
"Engineering honor societies"
] |
3,120,531 | https://en.wikipedia.org/wiki/Sulbactam | Sulbactam is a β-lactamase inhibitor. This drug is given in combination with β-lactam antibiotics to inhibit β-lactamase, an enzyme produced by bacteria that destroys the antibiotics.
It was patented in 1977 and approved for medical use in 1986.
Medical uses
The combination ampicillin/sulbactam (Unasyn) is available in the United States.
The combination cefoperazone/sulbactam (Sulperazon) is available in many countries but not in the United States.
The co-packaged combination sulbactam/durlobactam was approved for medical use in the United States in May 2023.
Mechanism
Sulbactam is primarily used as a suicide inhibitor of β-lactamase, shielding more potent beta-lactams such as ampicillin. Sulbactam itself contains a beta-lactam ring, and has weak antibacterial activity by inhibiting penicillin binding proteins (PBP) 1 and 3, but not 2.
References
Further reading
Beta-lactamase inhibitors
Lactams
Sulfones
Carboxylic acids | Sulbactam | [
"Chemistry"
] | 236 | [
"Sulfones",
"Carboxylic acids",
"Functional groups"
] |
3,120,688 | https://en.wikipedia.org/wiki/Sultamicillin | Sultamicillin, sold under the brand name Unasyn among others, is an oral form of the penicillin antibiotic combination ampicillin/sulbactam. It is used for the treatment of bacterial infections of the upper and lower respiratory tract, the kidneys and urinary tract, skin and soft tissues, among other organs. It contains esterified ampicillin and sulbactam.
Sultamicillin is better absorbed from the gut than ampicillin/sulbactam, decreasing the chances of diarrhea and dysentery. The inclusion of sulbactam extends ampicillin's spectrum of action to beta-lactamase producing strains of bacteria. Oral sulbactam with the intravenous form provides a regimen of continuous sulbactam therapy throughout the treatment, resulting in better clinical results.
It was patented in 1979 and approved for medical use in 1987.
Medical uses
Medical uses for sultamicillin include:
Skin and soft tissue infections - furuncles, carbuncles, cellulitis, paronychia, impetigo contagiosa, diabetic foot ulcers and abscesses caused by Staphylococcus aureus and Streptococcus pyogenes.
Upper respiratory tract infections - pharyngitis and tonsillitis caused by S. pyogenes and S. aureus. Acute and chronic sinusitis caused by S. aureus, S. pneumoniae, H. influenzae and S. pyogenes. Otitis media, particularly suppurative otitis media, with or without involvement of the mastoid antrum.
Lower respiratory tract infections - bacterial pneumonias, bronchitis, bronchiectasis caused by S. pneumoniae, H. influenzae, Staphylococcus aureus and S. pyogenes. Acute exacerbations of COPD.
Urinary tract infections - pyelonephritis, cystitis caused by Escherichia coli, Proteus mirabilis, Klebsiella and Staphylococcus aureus.
Surgical infections - prophylaxis and treatment of surgical site infections, peri-operative prophylaxis in orthopaedic and cardiovascular surgery.
Gynecological infections - Caused by beta-lactamase producing strains of E. coli and Bacteroides sp. (including B. fragilis).
Infections of the gastrointestinal tract - Bacterial esophagitis, treatment of H. pylori infections as a part of MDT in ulcer management.
Contraindications
Sultamicillin is contraindicated in people with penicillin allergy and those with mononucleosis, as these have an increased risk of developing severe rashes.
Adverse effects
The most common side effect, as with many other antibiotics, is diarrhoea and soft stool. In Japanese clinical trials, these occurred with a frequency of 3.7% and 1.1%, respectively; however, in studies outside Japan, diarrhoea was much more common at 10% to over 50% in patients taking sultamicillin. Other adverse effects occurring in the range of 1 to 10% of people include nausea, vomiting, stomach ache, headache, rashes, and infections with Candida albicans. Haemorrhagic colitis caused by Clostridioides difficile infections is a rare complication.
Interactions
Interactions with other drugs are similar to other penicillins: allopurinol increases the risk for patients to develop rashes. Penicillins slow down the elimination of methotrexate, potentially increasing its adverse effects. Conversely, the elimination of sultamicillin's active constituents (ampicillin and sulbactam) is reduced by probenecid and probably by the nonsteroidal anti-inflammatory drugs (NSAIDs) aspirin, indometacin and phenylbutazone.
Pharmacology
Pharmacokinetics
Sultamicillin is a codrug (or mutual prodrug) of ampicillin and sulbactam. After oral intake, it is absorbed and hydrolytically cleaved to ampicillin and sulbactam by enzymes in the gut wall. These two substances are then released into the system in a 1:1 molar ratio. Their pharmacokinetic behaviour is similar (and practically independent of food intake): they reach peak concentrations after about one hour; their plasma protein binding is 26% (ampicillin) and 29% (sulbactam); and their elimination half-lives are 45–80 minutes and 40–70 minutes, respectively. Both drugs are mainly eliminated via the kidneys: within eight hours after intake, 46 to 80% of the ampicillin and 41 to 66% of the sulbactam are found in the urine.
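To illustrate what an elimination half-life in this range implies (a back-of-the-envelope Python sketch of simple first-order elimination; the 60-minute half-life, chosen from within the 40–80 minute ranges above, and the 1 mg/L starting concentration are assumed example values, not data from the text):

# Fraction of drug remaining under first-order elimination: C(t) = C0 * 0.5 ** (t / t_half)
def remaining(c0_mg_per_l, t_half_min, t_min):
    return c0_mg_per_l * 0.5 ** (t_min / t_half_min)

c0, t_half = 1.0, 60.0  # assumed peak concentration (mg/L) and half-life (minutes)
for t in (0, 60, 120, 240, 480):
    print(f"t = {t:3d} min: {remaining(c0, t_half, t):.3f} mg/L")

After eight hours (480 minutes) only about 0.4% of the peak concentration would remain under this simple model, consistent with most of the dose having been eliminated within the eight-hour urine-collection window mentioned above.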
Mechanism of action
Ampicillin, a semi-synthetic orally active broad-spectrum penicillin antibiotic, exerts antibacterial activity against sensitive organisms by inhibiting biosynthesis of cell wall mucopeptide. Sulbactam, a beta-lactamase inhibitor, irreversibly inhibits many beta-lactamases that occur in resistant bacteria strains.
Chemistry
Ampicillin and sulbactam are linked via a methylene group, forming two ester bonds (or more accurately acylal bonds). Sultamicillin is used in form of the tosylate salt.
References
Penicillins
Codrugs
Formals
Drugs developed by Pfizer | Sultamicillin | [
"Chemistry"
] | 1,165 | [
"Functional groups",
"Formals"
] |
3,120,758 | https://en.wikipedia.org/wiki/Peetre%20theorem | In mathematics, the (linear) Peetre theorem, named after Jaak Peetre, is a result of functional analysis that gives a characterisation of differential operators in terms of their effect on generalized function spaces, and without mentioning differentiation in explicit terms. The Peetre theorem is an example of a finite order theorem in which a function or a functor, defined in a very general way, can in fact be shown to be a polynomial because of some extraneous condition or symmetry imposed upon it.
This article treats two forms of the Peetre theorem. The first is the original version which, although quite useful in its own right, is actually too general for most applications.
The original Peetre theorem
Let M be a smooth manifold and let E and F be two vector bundles on M. Let Γ∞(E) and Γ∞(F) be the spaces of smooth sections of E and F. An operator D : Γ∞(E) → Γ∞(F) is a morphism of sheaves which is linear on sections such that the support of D is non-increasing: supp Ds ⊆ supp s for every smooth section s of E. The original Peetre theorem asserts that, for every point p in M, there is a neighborhood U of p and an integer k (depending on U) such that D is a differential operator of order k over U. This means that D factors through a linear mapping iD from the k-jet of sections of E into the space of smooth sections of F, so that D = iD ∘ jk, where jk is the k-jet operator taking a section s to its k-jet jks, and iD : Jk(E) → F is a linear mapping of vector bundles.
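Restating the factorization in display form (a standard rendering of what was just described; the jet-bundle notation J^k E is supplied here for the illustration rather than taken verbatim from the text):

\[
  D = i_D \circ j^k, \qquad
  j^k : \Gamma^\infty(E) \to \Gamma^\infty(J^k E), \qquad
  i_D : J^k E \to F,
\]

so that (Ds)(p) = i_D(j^k s(p)) for every smooth section s of E and every point p in U.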
Proof
The problem is invariant under local diffeomorphism, so it is sufficient to prove it when M is an open set in Rn and E and F are trivial bundles. At this point, it relies primarily on two lemmas:
Lemma 1. If the hypotheses of the theorem are satisfied, then for every x∈M and C > 0, there exists a neighborhood V of x and a positive integer k such that for any y∈V\{x} and for any section s of E whose k-jet vanishes at y (jks(y)=0), we have |Ds(y)|<C.
Lemma 2. The first lemma is sufficient to prove the theorem.
We begin with the proof of Lemma 1.
Suppose the lemma is false. Then there is a sequence xk tending to x, and a sequence of very disjoint balls Bk around the xk (meaning that the geodesic distance between any two such balls is non-zero), and sections sk of E over each Bk such that jksk(xk)=0 but |Dsk(xk)|≥C>0.
Let ρ(x) denote a standard bump function for the unit ball at the origin: a smooth real-valued function which is equal to 1 on B1/2(0), which vanishes to infinite order on the boundary of the unit ball.
Consider every other section s2k. At x2k, these satisfy
j2ks2k(x2k)=0.
Suppose that 2k is given. Then, since these functions are smooth and each satisfy j2k(s2k)(x2k)=0, it is possible to specify a smaller ball B′δ(x2k) such that the higher order derivatives obey the following estimate:
where
Now
is a standard bump function supported in B′δ(x2k), and the derivative of the product s2kρ2k is bounded in such a way that
As a result, because the following series and all of the partial sums of its derivatives converge uniformly
q(y) is a smooth function on all of V.
We now observe that since s2k and ρ2ks2k are equal in a neighborhood of x2k, so too are s2k and q, and hence Dq(x2k) = Ds2k(x2k), giving |Dq(x2k)| ≥ C.
So by continuity |Dq(x)|≥ C>0. On the other hand,
since Dq(x2k+1)=0 because q is identically zero in B2k+1 and D is support non-increasing. So Dq(x)=0. This is a contradiction.
We now prove Lemma 2.
First, let us dispense with the constant C from the first lemma. We show that, under the same hypotheses as Lemma 1, |Ds(y)|=0. Pick a y in V\{x} so that jks(y)=0 but |Ds(y)|=g>0. Rescale s by a factor of 2C/g. Then if g is non-zero, by the linearity of D, |Ds(y)|=2C>C, which is impossible by Lemma 1. This proves the theorem in the punctured neighborhood V\{x}.
Now, we must continue the differential operator to the central point x in the punctured neighborhood. D is a linear differential operator with smooth coefficients. Furthermore, it sends germs of smooth functions to germs of smooth functions at x as well. Thus the coefficients of D are also smooth at x.
A specialized application
Let M be a compact smooth manifold (possibly with boundary), and E and F be finite dimensional vector bundles on M. Let Γ∞(E) be the collection of smooth sections of E. An operator D : Γ∞(E) → Γ∞(F) is a smooth function (of Fréchet manifolds) which is linear on the fibres and respects the base point on M. The Peetre theorem asserts that for each operator D, there exists an integer k such that D is a differential operator of order k. Specifically, we can decompose D = iD ∘ jk, where iD is a mapping from the jets of sections of E to the bundle F. See also intrinsic differential operators.
Example: Laplacian
Consider the following operator, defined for a smooth function u on Rn by
Du(x) = lim (r → 0) of (2n/r²) [ (average of u over Sr(x)) − u(x) ],
where Sr(x) is the sphere centered at x with radius r. This is in fact the Laplacian, as can be seen using Taylor's theorem. We will show that D is a differential operator by Peetre's theorem. The main idea is that since Du(x) is defined only in terms of u's behavior near x, it is local in nature; in particular, if u is locally zero, so is Du, and hence the support cannot grow.
The technical proof goes as follows.
Let M = Rn and let E and F be the rank 1 trivial bundles.
Then Γ∞(E) and Γ∞(F) are simply the space of smooth functions on Rn. As a sheaf, Γ∞(E)(U) is the set of smooth functions on the open set U, and restriction is function restriction.
To see that D is indeed a morphism of sheaves, we need to check that it commutes with restriction: for open sets U and V such that U ⊆ V and a smooth function u on V, D(u|U) = (Du)|U. This is clear because for x ∈ U, both sides are given by the same limit of spherical averages, as the sphere Sr(x) eventually sits inside both U and V anyway.
It is easy to check that D is linear: D(u + v) = Du + Dv and D(cu) = c·Du, since averaging over Sr(x) and taking the limit r → 0 are both linear in u.
Finally, we check that D is local in the sense that supp Du ⊆ supp u. If x ∉ supp u, then there is an ε > 0 such that u = 0 in the ball of radius ε centered at x. Thus, for r < ε, the average of u over Sr(x) and the value u(x) both vanish, and hence Du(x) = 0.
Therefore, supp Du ⊆ supp u.
So by Peetre's theorem, D is a differential operator.
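As a quick sanity check of the spherical-mean formula above (an illustrative numerical sketch in Python, not part of the argument; the dimension n = 2, the test function and the radii are chosen arbitrarily), one can approximate the limit by averaging over many points of a small circle and comparing with the analytic Laplacian:

import math

def spherical_mean_laplacian(u, x, y, r, samples=2000):
    # average u over the circle of radius r centered at (x, y),
    # then apply the 2n/r**2 scaling with n = 2
    avg = sum(u(x + r * math.cos(2 * math.pi * k / samples),
                y + r * math.sin(2 * math.pi * k / samples))
              for k in range(samples)) / samples
    return (2 * 2 / r**2) * (avg - u(x, y))

u = lambda x, y: x**3 + x * y**2   # test function
exact = 6 * 1.0 + 2 * 1.0          # its Laplacian u_xx + u_yy = 8x, evaluated at x = 1
for r in (0.1, 0.01, 0.001):
    print(r, spherical_mean_laplacian(u, 1.0, 2.0, r), exact)

For each radius the approximation agrees (up to rounding) with the analytic value 8, illustrating that the operator really does reproduce the Laplacian.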
References
Peetre, J., Une caractérisation abstraite des opérateurs différentiels, Math. Scand. 7 (1959), 211-218.
Peetre, J., Rectification à l'article Une caractérisation abstraite des opérateurs différentiels, Math. Scand. 8 (1960), 116-120.
Terng, C.L., Natural vector bundles and natural differential operators, Am. J. Math. 100 (1978), 775-828.
Articles containing proofs
Differential operators
Theorems in functional analysis | Peetre theorem | [
"Mathematics"
] | 1,533 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in functional analysis",
"Articles containing proofs",
"Differential operators"
] |
3,120,850 | https://en.wikipedia.org/wiki/Chronic%20wound | A chronic wound is a wound that does not progress through the normal stages of wound healing—haemostasis, inflammation, proliferation, and remodeling—in a predictable and timely manner. Typically, wounds that do not heal within three months are classified as chronic. Chronic wounds may remain in the inflammatory phase due to factors like infection or bacterial burden, ischaemia, presence of necrotic tissue, improper moisture balance of wound site, or underlying diseases such as diabetes mellitus.
In acute wounds, a regulated balance of pro-inflammatory cytokines (signalling molecules) and proteases (enzymes) prevent the degradation of the extracellular matrix (ECM) and collagen to ensure proper wound healing.
In chronic wounds, there is excessive levels of inflammatory cytokines and proteases, leading to excessive degradation of the ECM and collagen. This disrupts tissue repair and impedes recovery, keeping the wound in a non-healing state.
Chronic wounds may take years to heal or, in some cases, may never heal, causing significant physical and emotional stress for patients and placing a financial burden on healthcare systems. Acute and chronic wounds are part of a spectrum, with chronic wounds requiring prolonged and complex care compared to acute wounds.
Signs and symptoms
Chronic wound patients often report pain as dominant in their lives.
It is recommended that healthcare providers handle the pain related to chronic wounds as one of the main priorities in chronic wound management (together with addressing the cause). Six out of ten venous leg ulcer patients experience pain with their ulcer, and similar trends are observed for other chronic wounds.
Persistent pain (at night, at rest, and with activity) is the main problem for patients with chronic ulcers. Frustrations regarding ineffective analgesics and plans of care that they were unable to adhere to were also identified.
Cause
In addition to poor circulation, neuropathy, and difficulty moving, factors that contribute to chronic wounds include systemic illnesses, age, and repeated trauma. The genetic skin disorders collectively known as epidermolysis bullosa display skin fragility and a tendency to develop chronic, non-healing wounds. Comorbid ailments that may contribute to the formation of chronic wounds include vasculitis (an inflammation of blood vessels), immune suppression, pyoderma gangrenosum, and diseases that cause ischemia. Immune suppression can be caused by illnesses or medical drugs used over a long period, like steroids. Emotional stress can also negatively affect the healing of a wound, possibly by raising blood pressure and levels of cortisol, which lowers immunity.
What appears to be a chronic wound may also be a malignancy; for example, cancerous tissue can grow until blood cannot reach the cells and the tissue becomes an ulcer. Cancer, especially squamous cell carcinoma, may also form as the result of chronic wounds, probably due to repetitive tissue damage that stimulates rapid cell proliferation.
Another factor that may contribute to chronic wounds is old age. The skin of older people is more easily damaged, and older cells do not proliferate as fast and may not have an adequate response to stress in terms of gene upregulation of stress-related proteins. In older cells, stress response genes are overexpressed when the cell is not stressed, but when it is, the expression of these proteins is not upregulated by as much as in younger cells.
Comorbid factors that can lead to ischemia are especially likely to contribute to chronic wounds. Such factors include chronic fibrosis, edema, sickle cell disease, and peripheral artery disease such as by atherosclerosis.
Repeated physical trauma plays a role in chronic wound formation by continually initiating the inflammatory cascade. The trauma may occur by accident, for example when a leg is repeatedly bumped against a wheelchair rest, or it may be due to intentional acts. Heroin users who lose venous access may resort to 'skin popping', or injecting the drug subcutaneously, which is highly damaging to tissue and frequently leads to chronic ulcers. Children who are repeatedly seen for a wound that does not heal are sometimes found to be victims of a parent with Munchausen syndrome by proxy, a disease in which the abuser may repeatedly inflict harm on the child in order to receive attention.
Periwound skin damage caused by excessive amounts of exudate and other bodily fluids can perpetuate the non-healing status of chronic wounds. Maceration, excoriation, dry (fragile) skin, hyperkeratosis, callus and eczema are frequent problems that interfere with the integrity of periwound skin. They can create a gateway for infection as well as cause wound edge deterioration preventing wound closure.
Pathophysiology
Chronic wounds may affect only the epidermis and dermis, or they may affect tissues all the way to the fascia. They may be formed originally by the same things that cause acute ones, such as surgery or accidental trauma, or they may form as the result of systemic infection, vascular, immune, or nerve insufficiency, or comorbidities such as neoplasias or metabolic disorders. The reason a wound becomes chronic is that the body's ability to deal with the damage is overwhelmed by factors such as repeated trauma, continued pressure, ischemia, or illness.
Though much progress has been accomplished in the study of chronic wounds lately, advances in the study of their healing have lagged behind expectations. This is partly because animal studies are difficult because animals do not get chronic wounds, since they usually have loose skin that quickly contracts, and they normally do not get old enough or have contributing diseases such as neuropathy or chronic debilitating illnesses. Nonetheless, current researchers now understand some of the major factors that lead to chronic wounds, among which are ischemia, reperfusion injury, and bacterial colonization.
Ischemia
Ischemia is an important factor in the formation and persistence of wounds, especially when it occurs repetitively (as it usually does) or when combined with a patient's old age. Ischemia causes tissue to become inflamed and cells to release factors that attract neutrophils such as interleukins, chemokines, leukotrienes, and complement factors.
While they fight pathogens, neutrophils also release inflammatory cytokines and enzymes that damage cells. One of their important jobs is to produce Reactive Oxygen Species (ROS) to kill bacteria, for which they use an enzyme called myeloperoxidase. The enzymes and ROS produced by neutrophils and other leukocytes damage cells and prevent cell proliferation and wound closure by damaging DNA, lipids, proteins, the extracellular matrix (ECM), and cytokines that speed healing. Neutrophils remain in chronic wounds for longer than they do in acute wounds, and contribute to the fact that chronic wounds have higher levels of inflammatory cytokines and ROS. Since wound fluid from chronic wounds has an excess of proteases and ROS, the fluid itself can inhibit healing by inhibiting cell growth and breaking down growth factors and proteins in the ECM. This impaired healing response is considered uncoordinated. However, soluble mediators of the immune system (growth factors), cell-based therapies and therapeutic chemicals can propagate coordinated healing.
It has been suggested that the three fundamental factors underlying chronic wound pathogenesis are cellular and systemic changes of aging, repeated bouts of ischemia-reperfusion injury, and bacterial colonization with resulting inflammatory host response.
Bacterial colonization
Since more oxygen in the wound environment allows white blood cells to produce ROS to kill bacteria, patients with inadequate tissue oxygenation, for example those who developed hypothermia during surgery, are at higher risk for infection. The host's immune response to the presence of bacteria prolongs inflammation, delays healing, and damages tissue. Infection can lead not only to chronic wounds but also to gangrene, loss of the infected limb, and death of the patient. More recently, an interplay between bacterial colonization and increases in reactive oxygen species leading to formation and production of biofilms has been shown to generate chronic wounds.
Like ischemia, bacterial colonization and infection damage tissue by causing a greater number of neutrophils to enter the wound site. In patients with chronic wounds, bacteria with resistances to antibiotics may have time to develop. In addition, patients that carry drug resistant bacterial strains such as methicillin-resistant Staphylococcus aureus (MRSA) have more chronic wounds.
Growth factors and proteolytic enzymes
Chronic wounds also differ in makeup from acute wounds in that their levels of proteolytic enzymes such as elastase. and matrix metalloproteinases (MMPs) are higher, while their concentrations of growth factors such as Platelet-derived growth factor and Keratinocyte Growth Factor are lower.
Since growth factors (GFs) are imperative in timely wound healing, inadequate GF levels may be an important factor in chronic wound formation. In chronic wounds, the formation and release of growth factors may be prevented, the factors may be sequestered and unable to perform their metabolic roles, or degraded in excess by cellular or bacterial proteases.
Chronic wounds such as diabetic and venous ulcers are also caused by a failure of fibroblasts to produce adequate ECM proteins and by keratinocytes to epithelialize the wound. Fibroblast gene expression is different in chronic wounds than in acute wounds.
Though all wounds require a certain level of elastase and proteases for proper healing, too high a concentration is damaging. Leukocytes in the wound area release elastase, which increases inflammation, destroys tissue, proteoglycans, and collagen, and damages growth factors, fibronectin, and factors that inhibit proteases. The activity of elastase is increased by human serum albumin, which is the most abundant protein found in chronic wounds. However, chronic wounds with inadequate albumin are especially unlikely to heal, so regulating the wound's levels of that protein may in the future prove helpful in healing chronic wounds.
Excess matrix metalloproteinases, which are released by leukocytes, may also cause wounds to become chronic. MMPs break down ECM molecules, growth factors, and protease inhibitors, and thus increase degradation while reducing construction, upsetting the delicate balance between production and degradation.
Diagnosis
Infection
If a chronic wound becomes more painful, this is a good indication that it is infected. A lack of pain, however, does not mean that it is not infected. Other methods of determination are less effective.
Classification
The vast majority of chronic wounds can be classified into three categories: venous ulcers, diabetic, and pressure ulcers. A small number of wounds that do not fall into these categories may be due to causes such as radiation poisoning or ischemia.
Venous and arterial ulcers
Venous ulcers, which usually occur in the legs, account for about 70% to 90% of chronic wounds and mostly affect the elderly. They are thought to be due to venous hypertension caused by improper function of valves that exist in the veins to prevent blood from flowing backward. Ischemia results from the dysfunction and, combined with reperfusion injury, causes the tissue damage that leads to the wounds.
Diabetic ulcers
Another major cause of chronic wounds, diabetes, is increasing in prevalence. Diabetics have a 15% higher risk for amputation than the general population due to chronic ulcers. Diabetes causes neuropathy, which inhibits nociception and the perception of pain. Thus patients may not initially notice small wounds to legs and feet, and may therefore fail to prevent infection or repeated injury. Further, diabetes causes immune compromise and damage to small blood vessels, preventing adequate oxygenation of tissue, which can cause chronic wounds. Pressure also plays a role in the formation of diabetic ulcers.
Pressure ulcers
Another leading type of chronic wounds is pressure ulcers, which usually occur in people with conditions such as paralysis that inhibit movement of body parts that are commonly subjected to pressure such as the heels, shoulder blades, and sacrum. Pressure ulcers are caused by ischemia that occurs when pressure on the tissue is greater than the pressure in capillaries, and thus restricts blood flow into the area. Muscle tissue, which needs more oxygen and nutrients than skin does, shows the worst effects from prolonged pressure. As in other chronic ulcers, reperfusion injury damages tissue.
Treatment
Though treatment of the different chronic wound types varies slightly, appropriate treatment seeks to address the problems at the root of chronic wounds, including ischemia, bacterial load, and imbalance of proteases. Periwound skin issues should be assessed and their abatement included in a proposed treatment plan.
Various methods exist to ameliorate these problems, including antibiotic and antibacterial use, debridement, irrigation, vacuum-assisted closure, warming, oxygenation, moist wound healing (the term pioneered by George D. Winter), removing mechanical stress, and adding cells or other materials to secrete or enhance levels of healing factors.
It is uncertain whether intravenous metronidazole is useful in reducing foul odour from malignant wounds. There is insufficient evidence to support the use of silver-containing dressings or topical agents for the treatment of infected or contaminated chronic wounds. For infected wounds, the following antibiotics are often used (if organisms are susceptible) as oral therapy due to their high bioavailability and good penetration into soft tissues: ciprofloxacin, clindamycin, minocycline, linezolid, moxifloxacin, and trimethoprim-sulfamethoxazole.
The challenge of any treatment is to address as many adverse factors as possible simultaneously, so each of them receives equal attention and does not continue to impede healing as the treatment progresses.
Preventing and treating infection
To lower the bacterial count in wounds, therapists may use topical antibiotics, which kill bacteria and can also help by keeping the wound environment moist,
which is important for speeding the healing of chronic wounds. Some researchers have experimented with the use of tea tree oil, an antibacterial agent which also has anti-inflammatory effects. Disinfectants are contraindicated because they damage tissues and delay wound contraction. Further, they are rendered ineffective by organic matter in wounds like blood and exudate and are thus not useful in open wounds.
A greater amount of exudate and necrotic tissue in a wound increases likelihood of infection by serving as a medium for bacterial growth away from the host's defenses. Since bacteria thrive on dead tissue, wounds are often surgically debrided to remove the devitalized tissue. Debridement and drainage of wound fluid are an especially important part of the treatment for diabetic ulcers, which may create the need for amputation if infection gets out of control. Mechanical removal of bacteria and devitalized tissue is also the idea behind wound irrigation, which is accomplished using pulsed lavage.
Removing necrotic or devitalized tissue is also the aim of maggot therapy, the intentional introduction by a health care practitioner of live, disinfected maggots into non-healing wounds. Maggots dissolve only necrotic, infected tissue; disinfect the wound by killing bacteria; and stimulate wound healing. Maggot therapy has been shown to accelerate debridement of necrotic wounds and reduce the bacterial load of the wound, leading to earlier healing, reduced wound odor and less pain. The combination and interactions of these actions make maggots an extremely potent tool in chronic wound care.
Negative pressure wound therapy (NPWT) is a treatment that improves ischemic tissues and removes wound fluid used by bacteria. This therapy, also known as vacuum-assisted closure, reduces swelling in tissues, which brings more blood and nutrients to the area, as does the negative pressure itself. The treatment also decompresses tissues and alters the shape of cells, causing them to express different mRNAs and to proliferate and produce ECM molecules.
Recent technological advancements produced novel approaches such as self-adaptive wound dressings that rely on properties of smart polymers sensitive to changes in humidity levels. The dressing delivers absorption or hydration as needed over each independent wound area and aids in the natural process of autolytic debridement. It effectively removes liquefied slough and necrotic tissue, disintegrated bacterial biofilm as well as harmful exudate components, known to slow the healing process. The treatment also reduces bacterial load by effective evacuation and immobilization of microorganisms from the wound bed, and subsequent chemical binding of available water that is necessary for their replication. Self-adaptive dressings protect periwound skin from extrinsic factors and infection while regulating moisture balance over vulnerable skin around the wound.
Treating trauma and painful wounds
Persistent chronic pain associated with non-healing wounds is caused by tissue (nociceptive) or nerve (neuropathic) damage and is influenced by dressing changes and chronic inflammation. Chronic wounds take a long time to heal and patients can experience chronic wounds for many years. Chronic wound healing may be compromised by coexisting underlying conditions, such as venous valve backflow, peripheral vascular disease, uncontrolled edema and diabetes mellitus.
If wound pain is not assessed and documented it may be ignored and/or not addressed properly. It is important to remember that increased wound pain may be an indicator of wound complications that need treatment, and therefore practitioners must constantly reassess the wound as well as the associated pain.
Optimal management of wounds requires holistic assessment. Documentation of the patient's pain experience is critical and may range from the use of a patient diary (which should be patient driven) to the recording of pain entirely by the healthcare professional or caregiver. Effective communication between the patient and the healthcare team is fundamental to this holistic approach. The more frequently healthcare professionals measure pain, the greater the likelihood of introducing or changing pain management practices.
At present there are few local options for treating persistent pain while also managing the exudate levels present in many chronic wounds. Important properties of such local options are that they provide an optimal wound-healing environment and deliver a constant, low-dose local release of ibuprofen while worn.
If local treatment does not provide adequate pain reduction, it may be necessary for patients with chronic painful wounds to be prescribed additional systemic treatment for the physical component of their pain. Clinicians should consult with their prescribing colleagues referring to the WHO pain relief ladder of systemic treatment options for guidance. For every pharmacological intervention there are possible benefits and adverse events that the prescribing clinician will need to consider in conjunction with the wound care treatment team.
Ischemia and hypoxia
Blood vessels constrict in tissue that becomes cold and dilate in warm tissue, altering blood flow to the area. Thus keeping the tissues warm is probably necessary to fight both infection and ischemia. Some healthcare professionals use 'radiant bandages' to keep the area warm, and care must be taken during surgery to prevent hypothermia, which increases rates of post-surgical infection.
Underlying ischemia may also be treated surgically by arterial revascularization, for example in diabetic ulcers, and patients with venous ulcers may undergo surgery to correct vein dysfunction.
Diabetics who are not candidates for surgery (and others) may also have their tissue oxygenation increased by hyperbaric oxygen therapy, or HBOT, which may provide a short-term improvement in healing by improving the oxygenated blood supply to the wound. In addition to killing bacteria, higher oxygen content in tissues speeds growth factor production, fibroblast growth, and angiogenesis. However, increased oxygen levels also mean increased production of ROS. Antioxidants, molecules that can lose an electron to free radicals without themselves becoming radicals, can lower levels of oxidants in the body and have been used with some success in wound healing.
Low level laser therapy has been repeatedly shown to significantly reduce the size and severity of diabetic ulcers as well as pressure ulcers.
Pressure wounds are often the result of local ischemia from the increased pressure. Increased pressure also plays a role in many diabetic foot ulcerations, as changes due to the disease cause the foot to have limited joint mobility and create pressure points on the bottom of the foot. Effective measures to treat this include a surgical procedure called the gastrocnemius recession, in which the calf muscle is lengthened to decrease the fulcrum created by this muscle, resulting in a decrease in plantar forefoot pressure.
Growth factors and hormones
Since chronic wounds underexpress growth factors necessary for healing tissue, chronic wound healing may be speeded by replacing or stimulating those factors and by preventing the excessive formation of proteases like elastase that break them down.
One way to increase growth factor concentrations in wounds is to apply the growth factors directly. This generally takes many repetitions and requires large amounts of the factors, although biomaterials are being developed that control the delivery of growth factors over time. Another way is to spread onto the wound a gel of the patient's own blood platelets, which then secrete growth factors such as vascular endothelial growth factor (VEGF), insulin-like growth factor 1–2 (IGF), PDGF, transforming growth factor-β (TGF-β), and epidermal growth factor (EGF). Other treatments include implanting cultured keratinocytes into the wound to reepithelialize it and culturing and implanting fibroblasts into wounds. Some patients are treated with artificial skin substitutes that have fibroblasts and keratinocytes in a matrix of collagen to replicate skin and release growth factors.
In other cases, skin from cadavers is grafted onto wounds, providing a cover to keep out bacteria and preventing the buildup of too much granulation tissue, which can lead to excessive scarring. Though the allograft (skin transplanted from a member of the same species) is replaced by granulation tissue and is not actually incorporated into the healing wound, it encourages cellular proliferation and provides a structure for epithelial cells to crawl across. On the most difficult chronic wounds, allografts may not work, requiring skin grafts from elsewhere on the patient, which can cause pain and further stress on the patient's system.
Collagen dressings are another way to provide the matrix for cellular proliferation and migration, while also keeping the wound moist and absorbing exudate. Additionally, collagen has been shown to be chemotactic to human blood monocytes, which can enter the wound site and transform into beneficial wound-healing cells.
Since levels of protease inhibitors are lowered in chronic wounds, some researchers are seeking ways to heal tissues by replacing these inhibitors in them. Secretory leukocyte protease inhibitor (SLPI), which inhibits not only proteases but also inflammation and microorganisms like viruses, bacteria, and fungi, may prove to be an effective treatment.
Research into hormones and wound healing has shown estrogen to speed wound healing in elderly humans and in animals that have had their ovaries removed, possibly by preventing excess neutrophils from entering the wound and releasing elastase. Thus the use of estrogen is a future possibility for treating chronic wounds.
Epidemiology
Chronic wounds mostly affect people over the age of 60.
The incidence is 0.78% of the population and the prevalence ranges from 0.18 to 0.32%. As the population ages, the number of chronic wounds is expected to rise. Ulcers that heal within 12 weeks are usually classified as acute, and longer-lasting ones as chronic.
References
Further reading
Skin conditions resulting from physical factors
Necrosis | Chronic wound | [
"Biology"
] | 4,987 | [
"Cellular processes",
"Necrosis"
] |
3,121,095 | https://en.wikipedia.org/wiki/Polyhedral%20skeletal%20electron%20pair%20theory | In chemistry the polyhedral skeletal electron pair theory (PSEPT) provides electron counting rules useful for predicting the structures of clusters such as borane and carborane clusters. The electron counting rules were originally formulated by Kenneth Wade, and were further developed by others including Michael Mingos; they are sometimes known as Wade's rules or the Wade–Mingos rules. The rules are based on a molecular orbital treatment of the bonding. These rules have been extended and unified in the form of the Jemmis mno rules.
Predicting structures of cluster compounds
Different rules (4n, 5n, or 6n) are invoked depending on the number of electrons per vertex.
The 4n rules are reasonably accurate in predicting the structures of clusters having about 4 electrons per vertex, as is the case for many boranes and carboranes. For such clusters, the structures are based on deltahedra, which are polyhedra in which every face is triangular. The 4n clusters are classified as closo-, nido-, arachno- or hypho-, based on whether they represent a complete (closo-) deltahedron, or a deltahedron that is missing one (nido-), two (arachno-) or three (hypho-) vertices.
However, hypho clusters are relatively uncommon because their electron count is high enough to begin filling antibonding orbitals and destabilize the 4n structure. If the electron count is close to 5 electrons per vertex, the structure often changes to one governed by the 5n rules, which are based on 3-connected polyhedra.
As the electron count increases further, the structures of clusters with 5n electron counts become unstable, so the 6n rules can be implemented. The 6n clusters have structures that are based on rings.
A molecular orbital treatment can be used to rationalize the bonding of cluster compounds of the 4n, 5n, and 6n types.
4n rules
The closo polyhedra that form the basis for the 4n rules are deltahedra, each of which has only triangular faces. The number of vertices in the cluster determines which polyhedron the structure is based on.
Using the electron count, the predicted structure can be found, where n is the number of vertices in the cluster. The 4n rules relate the total electron count to the structure type: a count of 4n electrons corresponds to a capped closo cluster, 4n + 2 to closo, 4n + 4 to nido, 4n + 6 to arachno, and 4n + 8 to hypho.
When counting electrons for each cluster, the number of valence electrons is enumerated. For each transition metal present, 10 electrons are subtracted from the total electron count. For example, in Rh6(CO)16 the total number of electrons would be 6 × 9 + 16 × 2 − 6 × 10 = 26 = 4n + 2, with n = 6. Therefore, the cluster is a closo polyhedron.
Other rules may be considered when predicting the structure of clusters:
For clusters consisting mostly of transition metals, any main group elements present are often best counted as ligands or interstitial atoms, rather than vertices.
Larger and more electropositive atoms tend to occupy vertices of high connectivity and smaller more electronegative atoms tend to occupy vertices of low connectivity.
In the special case of boron hydride clusters, each boron atom connected to 3 or more vertices has one terminal hydride, while a boron atom connected to two other vertices has two terminal hydrogen atoms. If more hydrogen atoms are present, they are placed in open face positions to even out the coordination number of the vertices.
For the special case of transition metal clusters, ligands are added to the metal centers to give the metals reasonable coordination numbers, and if any hydrogen atoms are present they are placed in bridging positions to even out the coordination numbers of the vertices.
In general, closo structures with n vertices are n-vertex polyhedra.
To predict the structure of a nido cluster, the closo cluster with n + 1 vertices is used as a starting point; if the cluster is composed of small atoms a high connectivity vertex is removed, while if the cluster is composed of large atoms a low connectivity vertex is removed.
To predict the structure of an arachno cluster, the closo polyhedron with n + 2 vertices is used as the starting point, and the n + 1 vertex nido complex is generated by following the rule above; a second vertex adjacent to the first is removed if the cluster is composed of mostly small atoms, a second vertex not adjacent to the first is removed if the cluster is composed mostly of large atoms.
Example: [Pb10]2−
Electron count: 10 × Pb + 2 (for the negative charge) = 10 × 4 + 2 = 42 electrons.
Since n = 10, 4n + 2 = 42, so the cluster is a closo bicapped square antiprism.
Example: [S4]2+
Electron count: 4 × S – 2 (for the positive charge) = 4 × 6 – 2 = 22 electrons.
Since n = 4, 4n + 6 = 22, so the cluster is arachno.
Starting from an octahedron, a vertex of high connectivity is removed, and then a non-adjacent vertex is removed.
Example: Os6(CO)18
Electron count: 6 × Os + 18 × CO – 60 (for 6 osmium atoms) = 6 × 8 + 18 × 2 – 60 = 24
Since n = 6, 4n = 24, so the cluster is capped closo.
Starting from a trigonal bipyramid, a face is capped. The carbonyls have been omitted for clarity.
Example: [B5H5]4−
Electron count: 5 × B + 5 × H + 4 (for the negative charge) = 5 × 3 + 5 × 1 + 4 = 24
Since n = 5, 4n + 4 = 24, so the cluster is nido.
Starting from an octahedron, one of the vertices is removed.
The rules are also useful in predicting the structures of carboranes.
Example: C2B7H13
Electron count = 2 × C + 7 × B + 13 × H = 2 × 4 + 7 × 3 + 13 × 1 = 42
Since n in this case is 9, 4n + 6 = 42, the cluster is arachno.
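The total-electron-count bookkeeping used in the examples above is mechanical enough to express as a short program. The following Python sketch is illustrative only: the function name classify_4n and the small table of valence electron counts are ad hoc choices for this article, not part of any standard chemistry package.

```python
# Illustrative sketch of the 4n electron-counting rules.
# Valence electron counts for the elements used in the worked examples above.
VALENCE = {"H": 1, "B": 3, "C": 4, "S": 6, "Pb": 4, "Os": 8, "Rh": 9}
TRANSITION_METALS = {"Os", "Rh"}

def classify_4n(atoms, n_vertices, charge=0, carbonyls=0):
    """Return the total valence electron count and the 4n-rule classification.

    atoms      -- dict mapping an element symbol to its number of atoms
    n_vertices -- number of vertices n in the cluster
    charge     -- overall charge (a negative charge adds electrons)
    carbonyls  -- number of CO ligands, each contributing 2 electrons
    """
    electrons = sum(VALENCE[el] * count for el, count in atoms.items())
    electrons += 2 * carbonyls - charge
    # 10 electrons are subtracted for every transition metal present.
    electrons -= 10 * sum(count for el, count in atoms.items() if el in TRANSITION_METALS)
    labels = {0: "capped closo", 2: "closo", 4: "nido", 6: "arachno", 8: "hypho"}
    return electrons, labels.get(electrons - 4 * n_vertices, "outside the 4n series")

print(classify_4n({"Rh": 6}, 6, carbonyls=16))       # (26, 'closo')
print(classify_4n({"Pb": 10}, 10, charge=-2))        # (42, 'closo')
print(classify_4n({"S": 4}, 4, charge=+2))           # (22, 'arachno')
print(classify_4n({"Os": 6}, 6, carbonyls=18))       # (24, 'capped closo')
print(classify_4n({"B": 5, "H": 5}, 5, charge=-4))   # (24, 'nido')
print(classify_4n({"C": 2, "B": 7, "H": 13}, 9))     # (42, 'arachno')
```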
The bookkeeping for deltahedral clusters is sometimes carried out by counting skeletal electrons instead of the total number of electrons. The skeletal orbital (electron pair) and skeletal electron counts for the four types of deltahedral clusters are:
n-vertex closo: n + 1 skeletal orbitals, 2n + 2 skeletal electrons
n-vertex nido: n + 2 skeletal orbitals, 2n + 4 skeletal electrons
n-vertex arachno: n + 3 skeletal orbitals, 2n + 6 skeletal electrons
n-vertex hypho: n + 4 skeletal orbitals, 2n + 8 skeletal electrons
The skeletal electron count is determined by summing the following contributions (a short consistency check in code follows the list):
2 from each BH unit
3 from each CH unit
1 from each additional hydrogen atom (over and above the ones on the BH and CH units)
the anionic charge electrons
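A minimal sketch of this skeletal bookkeeping, again with ad hoc function names and restricted to the BH/CH units, extra hydrogens and anionic charge listed above; it reproduces the nido and arachno assignments of the earlier worked examples.

```python
def skeletal_electrons(bh_units=0, ch_units=0, extra_h=0, anionic_charge=0):
    """Skeletal electron count from the per-unit contributions listed above."""
    return 2 * bh_units + 3 * ch_units + extra_h + anionic_charge

def skeletal_class(skeletal, n_vertices):
    """Map a skeletal electron count onto the closo/nido/arachno/hypho series."""
    labels = {2: "closo", 4: "nido", 6: "arachno", 8: "hypho"}
    return labels.get(skeletal - 2 * n_vertices, "outside the series")

# [B5H5]4-: 5 BH units plus a 4- charge give 14 = 2n + 4 skeletal electrons (nido).
print(skeletal_class(skeletal_electrons(bh_units=5, anionic_charge=4), 5))
# C2B7H13: 7 BH units, 2 CH units and 4 extra H give 24 = 2n + 6 skeletal electrons (arachno).
print(skeletal_class(skeletal_electrons(bh_units=7, ch_units=2, extra_h=4), 9))
```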
5n rules
As discussed previously, the 4n rules deal mainly with clusters having roughly 4 electrons per vertex. As more electrons are added per vertex, their number approaches 5. Rather than adopting structures based on deltahedra, the 5n-type clusters have structures based on a different series of polyhedra known as the 3-connected polyhedra, in which each vertex is connected to 3 other vertices. The 3-connected polyhedra are the duals of the deltahedra; common examples are the tetrahedron, the trigonal prism, the cube and the dodecahedron.
The 5n rules relate the electron count to the parent 3-connected polyhedron: a count of 5n electrons corresponds to an n-vertex 3-connected polyhedron, while a count of 5n + m corresponds to an (n − m)-vertex 3-connected polyhedron with m vertices inserted into its edges.
Example: P4
Electron count: 4 × P = 4 × 5 = 20
It is a 5n structure with n = 4, so it is tetrahedral
Example: P4S3
Electron count 4 × P + 3 × S = 4 × 5 + 3 × 6 = 38
It is a 5n + 3 structure with n = 7. Three vertices are inserted into edges
Example: P4O6
Electron count 4 × P + 6 × O = 4 × 5 + 6 × 6 = 56
It is a 5n + 6 structure with n = 10. Six vertices are inserted into edges
6n rules
As more electrons are added to a 5n cluster, the number of electrons per vertex approaches 6. Instead of adopting structures based on the 4n or 5n rules, the clusters tend to have structures governed by the 6n rules, which are based on rings: a count of 6n electrons corresponds to an n-membered ring, while 6n + 2 corresponds to an open chain.
Example: S8
Electron count = 8 × S = 8 × 6 = 48 electrons.
Since n = 8, 6n = 48, so the cluster is an 8-membered ring.
Example: hexane (C6H14)
Electron count = 6 × C + 14 × H = 6 × 4 + 14 × 1 = 38
Since n = 6, 6n = 36 and 6n + 2 = 38, so the cluster is a 6-membered chain.
Isolobal vertex units
Provided a vertex unit is isolobal with BH then it can, in principle at least, be substituted for a BH unit, even though BH and CH are not isoelectronic. The CH+ unit is isolobal, hence the rules are applicable to carboranes. This can be explained by a frontier orbital treatment. Additionally there are isolobal transition-metal units. For example, Fe(CO)3 provides 2 electrons. The derivation of this is briefly as follows:
Fe has 8 valence electrons.
Each carbonyl group is a net 2-electron donor after the internal σ- and π-bonding are taken into account, making 14 electrons.
3 pairs are considered to be involved in Fe–CO σ-bonding and 3 pairs are involved in π-backbonding from Fe to CO reducing the 14 to 2.
Bonding in cluster compounds
closo-[B6H6]2−
The boron atoms lie on each vertex of the octahedron and are sp hybridized. One sp-hybrid radiates away from the structure, forming the bond with the hydrogen atom. The other sp-hybrid radiates into the center of the structure, forming a large bonding molecular orbital at the center of the cluster. The remaining two unhybridized orbitals lie along the tangent of the sphere-like structure, creating more bonding and antibonding orbitals between the boron vertices. The orbital diagram breaks down as follows:
The 18 framework molecular orbitals (MOs) derived from the 18 boron atomic orbitals are:
1 bonding MO at the center of the cluster and 5 antibonding MOs from the 6 sp-radial hybrid orbitals
6 bonding MOs and 6 antibonding MOs from the 12 tangential p-orbitals.
The total number of skeletal bonding orbitals is therefore 7, i.e. n + 1.
Transition metal clusters
Transition metal clusters use the d orbitals for bonding. Thus, they have up to nine bonding orbitals, instead of only the four present in boron and main group clusters. PSEPT also applies to metallaboranes.
Clusters with interstitial atoms
Owing to their large radii, transition metals generally form clusters that are larger than those of main group elements. One consequence of their increased size is that these clusters often contain atoms at their centers. A prominent example is [Fe6C(CO)16]2-. In such cases, the rules of electron counting assume that the interstitial atom contributes all of its valence electrons to cluster bonding. In this way, [Fe6C(CO)16]2- is equivalent to [Fe6(CO)16]6- or [Fe6(CO)18]2-.
See also
Styx rule
References
General references
Chemical bonding
Inorganic chemistry
Organometallic chemistry
Cluster chemistry | Polyhedral skeletal electron pair theory | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,421 | [
"Cluster chemistry",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Organometallic chemistry"
] |
3,121,206 | https://en.wikipedia.org/wiki/Moldavite | Moldavite () is a forest green, olive green or blue greenish vitreous silica projectile glass formed by a meteorite impact in southern Germany (Nördlinger Ries Crater) that occurred about 15 million years ago. It is a type of tektite and a gemstone. Material ejected from the impact crater includes moldavite, which was strewn across parts of Germany, the Czech Republic and Austria.
Early studies
Moldavite was introduced to the scientific public for the first time in 1786 as "chrysolites" from Týn nad Vltavou in a lecture by Josef Mayer of Prague University, read at a meeting of the Bohemian Scientific Society (Mayer 1788). Zippe (1836) first used the term "moldavite", derived from the Vltava (Moldau) river in Bohemia (the Czech Republic), from where the first described pieces came.
Origin
In 1900, Franz Eduard Suess pointed out that the gravel-size moldavites exhibited curious pittings and wrinkles on the surface, which could not be due to the action of water, but resembled the characteristic markings on many meteorites. He attributed the material to a cosmic origin and regarded moldavites as a special type of meteorite for which he proposed the name of tektite. Based on an analysis of 23 Bohemian and Moravian samples, in 1966 it was theorised that variations in their composition derived from fractional volatilization, and that they were not similar in origin to sedimentary or igneous rocks. Values were reported for a range of attributes, including oxide compositions, densities and refractive indices. In 1987 it was recognised that moldavites were created following a meteor impact that melted material and launched it into the air. As the material was airborne, it cooled and solidified. However, the plasma-like vapor at the impact site separated primary melt droplets from other residual vapour; the former then cooled into moldavite. In 2019 the first LIBS (Laser Induced Breakdown Spectroscopy) study on two typical moldavite samples, followed by routine EPMA (Electron Probe Microanalysis), agreed with earlier EPMA studies and also revealed siderophile elements (chromium, iron, cobalt and nickel).
Moldavites' highly textured surfaces are now known to be the result of pervasive etching by naturally occurring acids, including humic acids, present in groundwater. Because of their extremely low water content and chemical composition, the current consensus among earth scientists is that moldavites were formed about 14.7 million years ago during the impact of a giant meteorite in the present-day Nördlinger Ries crater. Currently, moldavites have been found in an area that includes southern Bohemia, western Moravia, the Cheb Basin (northwest Bohemia), Lusatia (Germany), and Waldviertel (Austria). Isotope analysis of samples of moldavites has shown a beryllium-10 isotope composition similar to the composition of Australasian tektites (australites) and Ivory Coast tektites (ivorites).
Most moldavites are from South Bohemian localities, with just a few found in South Moravian localities. Rare moldavites have been found in the Lusatian area (near Dresden), Cheb basin area (West Bohemia) and Northern Austria (near Radessen). Principal occurrences of moldavites in Bohemia are associated with Tertiary sediments of the České Budějovice and Třeboň basins. The most prominent localities are concentrated in a NW-SE strip along the western margin of the České Budějovice Basin. The majority of these occurrences are bound to the Vrábče Member and Koroseky Sandy Gravel. Prominent localities in the Třeboň Basin are bound to gravels and sands of the Domanín Formation.
In Moravia, moldavite occurrences are restricted to an area roughly bounded by the towns of Třebíč, Znojmo and Brno. The colour of Moravian moldavites usually differs from their Bohemian counterparts, as it tends to be brownish. Taking into account the number of pieces found, Moravian localities are considerably less productive than the Bohemian ones; however, the average weight of the moldavites found is much higher. The oldest (primary) moldavite-bearing sediments lie between Slavice and Třebíč. The majority of other localities in southern Moravia are associated with sediments of Miocene as well as Pleistocene rivers that flowed across this area more or less to the southeast, similar to the present streams of Jihlava, Oslava and Jevišovka.
Properties
The chemical formula of moldavite is SiO2(+Al2O3). Its properties are similar to those of other types of glass, and reported Mohs hardness varies from 5.5 to 7. Moldavite can be transparent or translucent with a mossy green color, with swirls and bubbles accentuating its mossy appearance. Moldavites can be distinguished from most green glass imitations by observing their worm-like schlieren.
Use
Moldavites were discovered by prehistoric people in the Czech Republic and Austria and were used to make flaked tools. Some of the worked moldavites date to the Aurignacian period of the Upper Paleolithic, approximately 43,000 to 26,000 years before the present.
In the modern world, moldavites are often used, rough or cut, as semi-precious stones in jewelry. They have purported metaphysical qualities and are often used in crystal healing.
Presentation
There is the Moldavite Museum in Český Krumlov, Czech Republic.
Gallery
References
J. Baier: Zur Herkunft und Bedeutung der Ries-Auswurfprodukte für den Impakt-Mechanismus. – Jber. Mitt. oberrhein. geol. Ver., N. F. 91, 9–29, 2009.
J. Baier: Die Auswurfprodukte des Ries-Impakts, Deutschland, in Documenta Naturae, Vol. 162, München, 2007.
Further reading
Milan PRCHAL "60 years on the green wave". (Robert Jelinek, Admir Mesic Eds). Der Konterfei 072, Vienna, 2021.
The Austrian Moldavite – On the Traces of the Green Tektite (Robert Jelinek Ed.). Der Konterfei 078, Vienna, 2023.
External links
Moldavite Museum in Český Krumlov
Gemstones
Glass in nature
Impact event minerals | Moldavite | [
"Physics"
] | 1,339 | [
"Materials",
"Gemstones",
"Matter"
] |
3,121,852 | https://en.wikipedia.org/wiki/Order%20dimension | In mathematics, the dimension of a partially ordered set (poset) is the smallest number of total orders the intersection of which gives rise to the partial order.
This concept is also sometimes called the order dimension or the Dushnik–Miller dimension of the partial order.
Dushnik and Miller (1941) first studied order dimension; for a more detailed treatment of this subject than provided here, see Trotter (1992).
Formal definition
The dimension of a poset P is the least integer t for which there exists a family R = {<1, <2, …, <t}
of linear extensions of P so that, for every x and y in P, x precedes y in P if and only if it precedes y in all of the linear extensions (if any such t exists). That is, <P is the intersection <1 ∩ <2 ∩ ⋯ ∩ <t.
An alternative definition of order dimension is the minimal number of total orders such that P embeds into their product with componentwise ordering, i.e. x ≤ y in P if and only if xi ≤ yi for all i, where xi denotes the image of x in the ith total order.
Realizers
A family R = {<1, <2, …, <t} of linear orders on X is called a realizer of a poset P = (X, <P) if
<P = <1 ∩ <2 ∩ ⋯ ∩ <t,
which is to say that for any x and y in X,
x <P y precisely when x <1 y, x <2 y, ..., and x <t y.
Thus, an equivalent definition of the dimension of a poset P is "the least cardinality of a realizer of P."
It can be shown that any nonempty family R of linear extensions is a realizer of a finite partially ordered set P if and only if, for every critical pair (x,y) of P, y <i x for some order
<i in R.
Example
Let n be a positive integer, and let P be the partial order on the elements ai and bi (for 1 ≤ i ≤ n) in which ai ≤ bj whenever i ≠ j, but no other pairs are comparable. In particular, ai and bi are incomparable in P; P can be viewed as an oriented form of a crown graph. The illustration shows an ordering of this type for n = 4.
Then, for each i, any realizer must contain a linear order that begins with all the aj except ai (in some order), then includes bi, then ai, and ends with all the remaining bj. This is so because if there were a realizer that didn't include such an order, then the intersection of that realizer's orders would have ai preceding bi, which would contradict the incomparability of ai and bi in P. And conversely, any family of linear orders that includes one order of this type for each i has P as its intersection. Thus, P has dimension exactly n. In fact, P is known as the standard example of a poset of dimension n, and is usually denoted by Sn.
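The construction just described can be checked mechanically for small n. The following Python sketch (a brute-force illustration; the function names are ad hoc) builds the standard example Sn, forms the n linear orders described above, and verifies both that their intersection is exactly the partial order and that omitting any one of them makes some pair ai, bi comparable.

```python
from itertools import product

def standard_example(n):
    """Strict order relation of the standard example S_n: a_i < b_j exactly when i != j."""
    elements = [("a", i) for i in range(n)] + [("b", i) for i in range(n)]
    relation = {(("a", i), ("b", j)) for i in range(n) for j in range(n) if i != j}
    return elements, relation

def realizer_order(n, i):
    """The linear order for index i: all a_j (j != i), then b_i, then a_i, then the remaining b_j."""
    return ([("a", j) for j in range(n) if j != i]
            + [("b", i), ("a", i)]
            + [("b", j) for j in range(n) if j != i])

def intersection(orders, elements):
    """Strict order relation obtained by intersecting the given linear orders."""
    pos = [{x: k for k, x in enumerate(order)} for order in orders]
    return {(x, y) for x, y in product(elements, repeat=2)
            if x != y and all(p[x] < p[y] for p in pos)}

n = 4
elements, relation = standard_example(n)
orders = [realizer_order(n, i) for i in range(n)]
print(intersection(orders, elements) == relation)   # True: the n orders realize S_n
print(all(intersection(orders[:i] + orders[i + 1:], elements) != relation
          for i in range(n)))                       # True: no n - 1 of these orders suffice
```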
Order dimension two
The partial orders with order dimension two may be characterized as the partial orders whose comparability graph is the complement of the comparability graph of a different partial order. That is, P is a partial order with order dimension two if and only if there exists a partial order Q on the same set of elements, such that every pair x, y of distinct elements is comparable in exactly one of these two partial orders. If P is realized by two linear extensions, then the partial order Q complementary to P may be realized by reversing one of the two linear extensions. Therefore, the comparability graphs of the partial orders of dimension two are exactly the permutation graphs, graphs that are both themselves comparability graphs and complementary to comparability graphs.
The partial orders of order dimension two include the series-parallel partial orders. They are exactly the partial orders whose Hasse diagrams have dominance drawings, which can be obtained by using the positions in the two permutations of a realizer as Cartesian coordinates.
Computational complexity
It is possible to determine in polynomial time whether a given finite partially ordered set has order dimension at most two, for instance, by testing whether the comparability graph of the partial order is a permutation graph. However, for any k ≥ 3, it is NP-complete to test whether the order dimension is at most k.
Incidence posets of graphs
The incidence poset of any undirected graph G has the vertices and edges of G as its elements; in this poset, x ≤ y if either x = y or x is a vertex, y is an edge, and x is an endpoint of y. Certain kinds of graphs may be characterized by the order dimensions of their incidence posets: a graph is a path graph if and only if the order dimension of its incidence poset is at most two, and according to Schnyder's theorem it is a planar graph if and only if the order dimension of its incidence poset is at most three.
For a complete graph on n vertices, the order dimension of the incidence poset is Θ(log log n). It follows that all simple n-vertex graphs have incidence posets with order dimension O(log log n).
k-dimension and 2-dimension
A generalization of dimension is the notion of k-dimension (written dimk), which is the minimal number of chains of length at most k in whose product the partial order can be embedded. In particular, the 2-dimension of an order can be seen as the size of the smallest set such that the order embeds in the inclusion order on this set.
See also
Interval dimension
References
Order theory
Dimension theory
NP-complete problems | Order dimension | [
"Mathematics"
] | 1,118 | [
"NP-complete problems",
"Mathematical problems",
"Order theory",
"Computational problems"
] |
3,122,018 | https://en.wikipedia.org/wiki/List%20of%20automation%20protocols | This is a list of communication protocols used for the automation of processes (industrial or otherwise), such as for building automation, power-system automation, automatic meter reading, and vehicular automation.
Process automation protocols
AS-i – Actuator-sensor interface, a low level 2-wire bus establishing power and communications to basic digital and analog devices
BSAP – Bristol Standard Asynchronous Protocol, developed by Bristol Babcock Inc.
CC-Link Industrial Networks – Supported by the CLPA
CIP (Common Industrial Protocol) – can be treated as application layer common to DeviceNet, CompoNet, ControlNet and EtherNet/IP
ControlNet – an implementation of CIP, originally by Allen-Bradley
DeviceNet – an implementation of CIP, originally by Allen-Bradley
DF-1 - used by Allen-Bradley ControlLogix, CompactLogix, PLC-5, SLC-500, and MicroLogix class devices
DNP3 - a protocol used to communicate by industrial control and utility SCADA systems
DirectNet – Koyo / Automation Direct proprietary, yet documented PLC interface
EtherCAT
Ethernet Global Data (EGD) – GE Fanuc PLCs (see also SRTP)
EtherNet/IP – IP stands for "Industrial Protocol". An implementation of CIP, originally created by Rockwell Automation
Ethernet Powerlink – an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG).
FINS, Omron's protocol for communication over several networks, including Ethernet.
FOUNDATION fieldbus – H1 & HSE
HART Protocol
HostLink Protocol, Omron's protocol for communication over serial links.
Interbus, Phoenix Contact's protocol for communication over serial links, now part of PROFINET IO
MECHATROLINK – open protocol originally developed by Yaskawa, supported by the MMA
MelsecNet, and MelsecNet II, /B, and /H, supported by Mitsubishi Electric.
Modbus PEMEX
Modbus Plus
Modbus RTU or ASCII or TCP
MPI – Multi Point Interface
OSGP – The Open Smart Grid Protocol, a widely used protocol for smart grid devices built on ISO/IEC 14908.1
OpenADR – Open Automated Demand Response; protocol to manage electricity consuming/controlling devices
Optomux – Serial (RS-422/485) network protocol originally developed by Opto 22 in 1982. The protocol was openly documented and over time used for industrial automation applications.
PieP – An Open Fieldbus Protocol
Profibus – by PROFIBUS & PROFINET International (PI)
PROFINET - by PROFIBUS & PROFINET International (PI)
RAPIEnet – Real-time Automation Protocols for Industrial Ethernet
Honeywell SDS – Smart Distributed System – Originally developed by Honeywell. Currently supported by Holjeron.
SERCOS III, Ethernet-based version of SERCOS real-time interface standard
SERCOS interface, Open Protocol for hard real-time control of motion and I/O
GE SRTP – GE Fanuc PLCs
Sinec H1 – Siemens
SynqNet – Danaher
TTEthernet – TTTech
Industrial control system protocols
Data Distribution Service from the Object Management Group
EPICS Channel Access and PV Access (PVA), particle accelerator control system framework
MTConnect
OPC Unified Architecture
Open Platform Communications, formerly OLE for process control
Building automation protocols
1-Wire – from Dallas/Maxim
BACnet – for Building Automation and Control networks, maintained by ASHRAE Committee SSPC 135.
BatiBUS - merged to KNX
C-Bus Clipsal Integrated Systems Main Proprietary Protocol
CC-Link Industrial Networks, supported by Mitsubishi Electric
DALI - Digital Addressable Lighting Interface specified in IEC 62386.
DSI - Digital Serial Interface for the controlling of lighting in building, precursor to DALI.
Dynet - lighting and automation control protocol developed in Sydney, Australia by the company Dynalite
EnOcean – Low Power Wireless protocol for energy harvesting and very lower power devices.
European Home Systems Protocol (EHS) - merged to KNX
European Installation Bus (EIB) named also Instabus - merged to KNX
INSTEON - SmartHome Labs Pro New 2-way Protocol based on Power-BUS.
KNX – Standard for building control. Previously Batibus/EHS/EIB
LonTalk – protocol for LonWorks technology by Echelon Corporation
Modbus RTU or ASCII or TCP
oBIX - Open Building Information Exchange is a standard for RESTful Web Services-based interfaces to building control systems developed by OASIS.
UPB - 2-way Peer to Peer Protocol
VSCP - Very Simple Control Protocol is a free protocol with main focus on building- or home-automation
xAP – Open protocol
X10 – Open standard for communication among electronic devices used for home automation (domotics)
Z-Wave - Wireless RF Protocol
Zigbee – Open protocol for Mesh Networks
Power system automation protocols
DNP3 – Distributed Network Protocol
IEC 60870-5
IEC 61850
IEC 62351 – Security for IEC 60870, 61850, DNP3 & ICCP protocols
Automatic meter reading protocols
ANSI C12.18
DLMS/IEC 62056
IEC 61107
M-Bus
OMS
Zigbee Smart Energy 2.0
Modbus
ANSI C12.21
ANSI C12.22
Automobile / Vehicle protocol buses
Controller Area Network (CAN) – an inexpensive low-speed serial bus for interconnecting automotive components
FlexRay – a general purpose high-speed protocol with safety-critical features
IDB-1394
IEBus
J1708 – RS-485 based SAE specification used in commercial vehicles, agriculture, and heavy equipment.
J1939 and ISO11783 – an adaptation of CAN for agricultural and commercial vehicles
Keyword Protocol 2000 (KWP2000) – a protocol for automotive diagnostic devices (runs either on a serial line or over CAN)
Local Interconnect Network (LIN) – a very low cost in-vehicle sub-network
Media Oriented Systems Transport (MOST) – a high-speed multimedia interface
Vehicle Area Network (VAN)
UAVCAN - a lightweight protocol for in-vehicle communication over CAN or Ethernet
See also
Lists of network protocols
Protocol converter
Serial communication
Vehicle bus
References
Industrial Ethernet
Control engineering
automation | List of automation protocols | [
"Technology",
"Engineering"
] | 1,289 | [
"Computing-related lists",
"Lists of network protocols",
"Control engineering",
"Industrial Ethernet"
] |
3,122,050 | https://en.wikipedia.org/wiki/Fagin%27s%20theorem | Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems.
The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP.
It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) by James Lynch in 1981, and several results of Étienne Grandjean have provided tighter bounds on nondeterministic random-access machines.
Proof
In addition to Fagin's 1974 paper, the 1999 textbook by Immerman provides a detailed proof of the theorem. It is straightforward to show that every existential second-order formula can be recognized in NP, by nondeterministically choosing the value of all existentially quantified variables, so the main part of the proof is to show that every language in NP can be described by an existential second-order formula. To do so, one can use second-order existential quantifiers to arbitrarily choose a computation tableau. In more detail, for every timestep of an execution trace of a non-deterministic Turing machine, this tableau encodes the state of the Turing machine, its position in the tape, the contents of every tape cell, and which nondeterministic choice the machine makes at that step. A first-order formula can constrain this encoded information so that it describes a valid execution trace, one in which the tape contents and Turing machine state and position at each timestep follow from the previous timestep.
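For a concrete illustration of the first direction, consider graph 3-colorability, a standard NP-complete property (this particular formula is the usual textbook example rather than one taken from Fagin's paper). Over a vocabulary with a single binary edge relation E, it is expressed by existentially quantifying three unary relations (the color classes) and stating the coloring conditions in first-order logic:

```latex
\exists R\,\exists G\,\exists B\;\Bigl(
   \forall x\,\bigl(R(x)\lor G(x)\lor B(x)\bigr)
   \;\land\;
   \forall x\,\forall y\,\bigl(E(x,y)\rightarrow
      \lnot(R(x)\land R(y))\land
      \lnot(G(x)\land G(y))\land
      \lnot(B(x)\land B(y))\bigr)\Bigr)
```

A graph satisfies this sentence exactly when its vertices can be covered by three color classes with no edge inside a single class, so an NP-complete property is captured using only existential second-order quantification over the unary relations R, G and B.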
A key lemma used in the proof is that it is possible to encode a linear order of length nk (such as the linear orders of timesteps and tape contents at any timestep) as a 2k-ary relation on a universe of n elements. One way to achieve this is to choose a linear ordering of the universe and then to order k-tuples of universe elements lexicographically with respect to that ordering.
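A small Python sketch of this encoding (illustrative only; the function name is ad hoc): for a universe of n elements and a chosen ordering, the lexicographic order on k-tuples is a linear order with nk elements, and it is definable from the chosen ordering alone.

```python
from itertools import product

def lexicographic_tuple_order(universe_order, k):
    """Lexicographic linear order on k-tuples over a linearly ordered universe."""
    rank = {x: i for i, x in enumerate(universe_order)}
    tuples = list(product(universe_order, repeat=k))
    relation = {(s, t) for s in tuples for t in tuples
                if tuple(rank[x] for x in s) < tuple(rank[x] for x in t)}
    return tuples, relation

tuples, relation = lexicographic_tuple_order(["a", "b", "c"], 2)
print(len(tuples))                                             # 9 = 3**2 elements in the encoded linear order
print(len(relation) == len(tuples) * (len(tuples) - 1) // 2)   # True: the relation is a strict total order
```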
See also
Spectrum of a sentence
Notes
References
Descriptive complexity
Theorems in computational complexity theory | Fagin's theorem | [
"Mathematics"
] | 457 | [
"Theorems in computational complexity theory",
"Theorems in discrete mathematics"
] |
3,122,052 | https://en.wikipedia.org/wiki/Schnyder%27s%20theorem | In graph theory, Schnyder's theorem is a characterization of planar graphs in terms
of the order dimension of their incidence posets. It is named after Walter Schnyder, who published its proof in 1989.
The incidence poset of an undirected graph G with vertex set V and edge set E is the partially ordered set of height 2 that has V ∪ E as its elements. In this partial order, there is an order relation v ≤ e when v is a vertex, e is an edge, and v is one of the two endpoints of e.
The order dimension of a partial order is the smallest number of total orderings whose intersection is the given partial order; such a set of orderings is called a realizer of the partial order.
Schnyder's theorem states that a graph G is planar if and only if the order dimension of its incidence poset is at most three.
Extensions
This theorem has been generalized by Brightwell and Trotter to a tight bound on the dimension of the height-three partially ordered sets formed analogously from the vertices, edges and faces of a convex polyhedron, or more generally of an embedded planar graph: in both cases, the order dimension of the poset is at most four. However, this result cannot be generalized to higher-dimensional convex polytopes, as there exist four-dimensional polytopes whose face lattices have unbounded order dimension.
Even more generally, for abstract simplicial complexes, the order dimension of the face poset of the complex is at most 1 + d, where d is the minimum dimension of a Euclidean space in which the complex has a geometric realization.
Other graphs
As Schnyder observes, the incidence poset of a graph G has order dimension two if and only if the graph is a path or a subgraph of a path. For, when an incidence poset has order dimension two, its only possible realizer consists of two total orders that (when restricted to the graph's vertices) are the reverse of each other. Any other two orders would have an intersection that includes an order relation between two vertices, which is not allowed for incidence posets. For these two orders on the vertices, an edge between consecutive vertices can be included in the ordering by placing it immediately following the later of the two edge endpoints, but no other edges can be included.
If a graph can be colored with four colors, then its incidence poset has order dimension at most four.
The incidence poset of a complete graph on n vertices has order dimension Θ(log log n).
References
Order theory
Planar graphs
Theorems in graph theory | Schnyder's theorem | [
"Mathematics"
] | 520 | [
"Order theory",
"Statements about planar graphs",
"Planar graphs",
"Theorems in discrete mathematics",
"Planes (geometry)",
"Theorems in graph theory"
] |
3,122,083 | https://en.wikipedia.org/wiki/Four%20Happiness%20Boys | The image of the Four Happiness Boys is believed to have begun during the Ming Dynasty (1368–1644) by a child prodigy by the name of Jie Jin.
By the age of five, this remarkable child had studied and mastered the ancient Chinese 'Four Books' and 'Five Classics' and soon made his way into formal studies alongside other renowned Chinese scholars of the period. The "Four Happiness Boys" is the ancient Chinese image or drawing of two interconnected boys that creates the illusion of four laughing boys lying in four directions. The picture symbolizes 'four happiness joined together', the four happy occasions being: (a) a wedding night, (b) passing the imperial exams, (c) running into a friend in a faraway place, and (d) rain after a long drought – instances all considered to be among life's major fortunes in ancient China.
To this day, this image continues to be painted, drawn or cast in many materials including bronze, brass, and porcelain and is often given as a symbolic wedding gift for an abundant marriage, many generations of children, and good fortune and happiness.
See also
He-He er xian
Chinese numismatic charm
References
Chinese culture
Chinese iconography
Visual motifs | Four Happiness Boys | [
"Mathematics"
] | 248 | [
"Symbols",
"Visual motifs"
] |
3,122,187 | https://en.wikipedia.org/wiki/Degree%20of%20anonymity | In anonymity networks (e.g., Tor, Crowds, Mixmaster, I2P, etc.), it is important to be able to measure quantitatively the guarantee that is given to the system. The degree of anonymity is a device that was proposed at the 2002 Privacy Enhancing Technology (PET) conference. Two papers put forth the idea of using entropy as the basis for formally measuring anonymity: "Towards an Information Theoretic Metric for Anonymity", and "Towards Measuring Anonymity". The ideas presented are very similar with minor differences in the final definition of .
Background
Anonymity networks have been developed, and many have introduced methods of proving the anonymity guarantees that are possible; originally, with simple Chaum mixes and pool mixes, the size of the set of users was seen as the security that the system could provide to a user. This had a number of problems: intuitively, if the network is international then it is unlikely that a message containing only Urdu came from the United States, and vice versa. Information like this, and methods such as the predecessor attack and the intersection attack, helps an attacker increase the probability that a particular user sent the message.
Example With Pool Mixes
As an example, consider a network of pool mixes in which some nodes are users (senders), others are servers (receivers), and each sender belongs to the anonymity set of the servers it can reach. Because these are pool mixes, there is a cap on the number of incoming messages a mix waits for before sending; if only one sender is communicating with a particular server and that server receives a message, the server knows which sender it must have come from (as the links between the mixes can only carry one message at a time). This is in no way reflected in the sender's anonymity set, but should be taken into account in the analysis of the network.
Degree of Anonymity
The degree of anonymity takes into account the probability associated with each user. It begins by defining the entropy of the system (here the papers differ slightly, but only in notation):
H(X) = −Σi=1..N pi log2(pi),
where H(X) is the entropy of the network, N is the number of nodes in the network, and pi is the probability associated with node i.
Now the maximal entropy of a network occurs when there is uniform probability associated with each node, and this yields HM = log2(N).
The degree of anonymity (the papers differ slightly in the definition here: one defines a bounded degree, compared to HM, while the other gives an unbounded definition using the entropy directly; only the bounded case is considered here) is defined as
d = H(X) / HM.
Using this, anonymity systems can be compared and evaluated through quantitative analysis; a minimal computational sketch follows.
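A minimal Python sketch of this calculation (the function name and the probability values are chosen purely for illustration):

```python
import math

def degree_of_anonymity(probabilities):
    """Bounded degree of anonymity d = H(X) / H_M for a distribution over possible senders."""
    n = len(probabilities)
    if n <= 1:
        return 0.0
    entropy = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return entropy / math.log2(n)

# Uniform distribution over 4 possible senders: maximal entropy, so d = 1.
print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))
# A skewed distribution, with one sender far more likely: d drops well below 1.
print(round(degree_of_anonymity([0.7, 0.1, 0.1, 0.1]), 3))
```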
Definition of Attacker
These papers also served to give concise definitions of an attacker:
Internal/External: an internal attacker controls nodes in the network, whereas an external one can only compromise the communication channels between nodes.
Passive/Active: an active attacker can add, remove, and modify any messages, whereas a passive attacker can only listen to the messages.
Local/Global: a local attacker has access to only part of the network, whereas a global one can access the entire network.
Example
In the papers there are a number of example calculations of d; we will walk through some of them here.
Crowds
In Crowds there is a global probability of forwarding (pf), which is the probability that a node will forward the message internally instead of routing it to the final destination. Let there be C corrupt nodes and N total nodes. In Crowds the attacker is internal, passive, and local. The probability assigned to each honest node follows directly from pf, C and N; the overall entropy H(X) is computed from these probabilities, and d is that entropy divided by the maximal entropy HM.
Onion routing
In onion routing, assuming the attacker can exclude a subset of the nodes from the network, the entropy would simply be log2(S), where S is the size of the subset of non-excluded nodes. Under an attack model where a node can both globally listen to message passing and is a node on the path, this decreases to log2(L), where L is the length of the onion route (this could be larger or smaller than S), as there is no attempt in onion routing to remove the correlation between the incoming and outgoing messages.
Applications of this metric
In 2004, Diaz, Sassaman, and DeWitte presented an analysis of two anonymous remailers using the Serjantov and Danezis metric, showing one of them to provide zero anonymity under certain realistic conditions.
See also
Onion routing
Tor (anonymity network)
Entropy
Crowds
References
See Towards Measuring Anonymity
See Towards an Information Theoretic Metric for Anonymity
See Comparison Between Two Practical Mix Designs
Anonymity networks
Computer network analysis
Cryptographic software
Internet privacy
Routing software | Degree of anonymity | [
"Mathematics"
] | 928 | [
"Cryptographic software",
"Mathematical software"
] |
3,122,365 | https://en.wikipedia.org/wiki/Gas%20turbine%20modular%20helium%20reactor | The Gas Turbine Modular Helium Reactor (GT-MHR) is a class of nuclear fission power reactor designed that was under development by a group of Russian enterprises (OKBM Afrikantov, Kurchatov Institute, VNIINM and others), an American group headed by General Atomics, French Framatome and Japanese Fuji Electric. It is a helium cooled, graphite moderated reactor and uses TRISO fuel compacts in a prismatic core design. The power is generated via a gas turbine rather than via the more common steam turbine.
A conceptual design was produced by 1997, and it was hoped to have a final design by 2005, and a prototype plant commissioning by 2010.
Construction
The core consists of a graphite cylinder, with axial reflectors at the top and bottom, that accommodates three or four concentric rings, each of 36 hexagonal blocks separated by small interstitial gaps. Each hexagonal block contains 108 helium coolant channels and 216 fuel pins. Each fuel pin contains a random lattice of TRISO particles dispersed in a graphite matrix. The reactor exhibits a thermal spectrum with a peak neutron energy located at about 0.2 eV. The TRISO fuel concept allows the reactor to be inherently safe. The reactor and containment structure is located below grade and in contact with the ground, which serves as a passive safety measure to conduct heat away from the reactor in the event of a coolant failure.
Advantages
The Gas Turbine Modular Helium Reactor utilizes the Brayton cycle turbine arrangement, which gives it an efficiency of up to 48% – higher than any other reactor, as of 1995. Commercial light water reactors (LWRs) generally use the Rankine cycle, which is what coal-fired power plants use. Commercial LWRs average 32% efficiency, again as of 1995.
Legacy
Energy Multiplier Module (EM2)
In 2010 General Atomics conceptualized a new reactor that utilizes the power conversion features of the GT-MHR, the Energy Multiplier Module (EM2). The EM2 uses fast neutrons and is a gas-cooled fast reactor, enabling it to reduce nuclear waste considerably by transmutation.
See also
Gas-cooled fast reactor
Pebble bed reactor
Very high temperature reactor
References
External links
Nuclear power reactor types
Gas turbines | Gas turbine modular helium reactor | [
"Technology"
] | 474 | [
"Engines",
"Gas turbines"
] |
3,122,489 | https://en.wikipedia.org/wiki/Sodium-cooled%20fast%20reactor | A sodium-cooled fast reactor is a fast neutron reactor cooled by liquid sodium.
The initials SFR in particular refer to two Generation IV reactor proposals, one based on existing liquid metal cooled reactor (LMFR) technology using mixed oxide fuel (MOX), and one based on the metal-fueled integral fast reactor.
Several sodium-cooled fast reactors have been built and some are in current operation, particularly in Russia. Others are planned or under construction. For example, as of 2022 TerraPower (using its traveling wave technology), working with GE Hitachi's PRISM integral fast reactor design, was planning to build its own reactors, combined with molten salt energy storage, under the Natrium name in Kemmerer, Wyoming.
Aside from the Russian experience, Japan, India, China, France and the USA are investing in the technology.
Fuel cycle
The nuclear fuel cycle employs a full actinide recycle with two major options: One is an intermediate-size (150–600 MWe) sodium-cooled reactor with uranium-plutonium-minor-actinide-zirconium metal alloy fuel, supported by a fuel cycle based on pyrometallurgical reprocessing in facilities integrated with the reactor. The second is a medium to large (500–1,500 MWe) sodium-cooled reactor with mixed uranium-plutonium oxide fuel, supported by a fuel cycle based upon advanced aqueous processing at a central location serving multiple reactors. The outlet temperature is approximately 510–550 degrees C for both.
Sodium coolant
Liquid metallic sodium may be used to carry heat from the core. Sodium has only one stable isotope, sodium-23, which is a weak neutron absorber. When it does absorb a neutron it produces sodium-24, which has a half-life of 15 hours and decays to stable isotope magnesium-24.
Pool or loop type
The two main design approaches to sodium-cooled reactors are pool type and loop type.
In the pool type, the primary coolant is contained in the main reactor vessel, which therefore includes the reactor core and a heat exchanger. The US EBR-2, French Phénix and others used this approach, and it is used by India's Prototype Fast Breeder Reactor and China's CFR-600.
In the loop type, the heat exchangers are outside the reactor tank. The French Rapsodie, British Prototype Fast Reactor and others used this approach.
Advantages
All fast reactors have several advantages over the current fleet of water-based reactors in that their waste streams are significantly reduced. Crucially, when a reactor runs on fast neutrons, the plutonium isotopes are far more likely to fission upon absorbing a neutron. Fast neutrons have a smaller chance of being captured by the uranium and plutonium, but when they are captured, they have a much greater chance of causing a fission. As a result, the inventory of transuranic waste from fast reactors is greatly reduced.
The primary advantage of liquid metal coolants, such as liquid sodium, is that metal atoms are weak neutron moderators. Water is a much stronger neutron moderator because the hydrogen atoms found in water are much lighter than metal atoms, and therefore neutrons lose more energy in collisions with hydrogen atoms. This makes it difficult to use water as a coolant for a fast reactor because the water tends to slow (moderate) the fast neutrons into thermal neutrons (although concepts for reduced moderation water reactors exist).
Another advantage of liquid sodium coolant is that sodium melts at 371 K (98 °C) and boils at 1156 K (883 °C), a range of 785 K between its solid and vapor states. By comparison, the liquid temperature range of water (between ice and steam) is just 100 K at normal, sea-level atmospheric pressure. Despite sodium's low specific heat (as compared to water), this enables the absorption of significant heat in the liquid phase, while maintaining large safety margins.
Moreover, the high thermal conductivity of sodium effectively creates a reservoir of heat capacity that provides thermal inertia against overheating.
Sodium need not be pressurized since its boiling point is much higher than the reactor's operating temperature, and sodium does not corrode steel reactor parts, and in fact, protects metals from corrosion.
The high temperatures reached by the coolant (the Phénix reactor outlet temperature was 833K (560°C)) permit a higher thermodynamic efficiency than in water cooled reactors. The electrically conductive molten sodium can be moved by electromagnetic pumps.
The fact that the sodium is not pressurized implies that a much thinner reactor vessel can be used (e.g. 2 cm thick). Combined with the much higher temperatures achieved in the reactor, this means that the reactor in shutdown mode can be passively cooled. For example, air ducts can be engineered so that all the decay heat after shutdown is removed by natural convection, and no pumping action is required.
Reactors of this type are self-controlling. If the temperature of the core increases, the core will expand slightly, which means that more neutrons will escape the core, slowing down the reaction.
Disadvantages
A disadvantage of sodium is its chemical reactivity, which requires special precautions to prevent and suppress fires. If sodium comes into contact with water it reacts to produce sodium hydroxide and hydrogen, and the hydrogen burns in contact with air. This was the case at the Monju Nuclear Power Plant in a 1995 accident. In addition, neutron capture causes it to become radioactive, albeit with a half-life of only 15 hours.
Another problem is leaks. Sodium at high temperatures ignites in contact with oxygen. Such sodium fires can be extinguished by powder, or by replacing the air with nitrogen. A Russian breeder reactor, the BN-600, reported 27 sodium leaks in a 17-year period, 14 of which led to sodium fires.
Design goals
The operating temperature must not exceed the fuel's melting temperature. Fuel-to-cladding chemical interaction (FCCI) has to be accommodated. FCCI is eutectic melting between the fuel and the cladding; uranium, plutonium, and lanthanum (a fission product) inter-diffuse with the iron of the cladding. The alloy that forms has a low eutectic melting temperature. FCCI causes the cladding to lose strength and can even lead to rupture. The amount of transuranic transmutation is limited by the production of plutonium from uranium. One work-around is to have an inert matrix, using, e.g., magnesium oxide. Magnesium oxide has an order of magnitude lower probability of interacting with neutrons (thermal and fast) than elements such as iron.
High-level wastes and, in particular, management of plutonium and other actinides must be handled. Safety features include a long thermal response time, a large margin to coolant boiling, a primary cooling system that operates near atmospheric pressure, and an intermediate sodium system between the radioactive sodium in the primary system and the water and steam in the power plant. Innovations can reduce capital cost, such as modular designs, removing a primary loop, integrating the pump and intermediate heat exchanger, and better materials.
The SFR's fast spectrum makes it possible to use available fissile and fertile materials (including depleted uranium) considerably more efficiently than thermal spectrum reactors with once-through fuel cycles.
History
In 2020 Natrium received an $80M grant from the US Department of Energy for development of its SFR. The program plans to use high-assay low-enriched uranium (HALEU) fuel, enriched to between 5 and 20% uranium-235. The reactor was expected to be sited underground and have gravity-inserted control rods. Because it operates at atmospheric pressure, a large containment shield is not necessary. Because of its large heat storage capacity, it was expected to be able to produce surge power of 500 MWe for 5+ hours, beyond its continuous power of 345 MWe.
Reactors
Sodium-cooled reactors have included:
Most of these were experimental plants that are no longer operational. On November 30, 2019, CTV reported that the Canadian provinces of New Brunswick, Ontario and Saskatchewan planned an announcement about a joint plan to cooperate on small sodium fast modular nuclear reactors from New Brunswick-based ARC Nuclear Canada.
See also
Fast breeder reactor
Fast neutron reactor
Integral fast reactor
Lead-cooled fast reactor
Gas-cooled fast reactor
Generation IV reactor
References
External links
Idaho National Laboratory Sodium-cooled Fast Reactor Fact Sheet
Generation IV International Forum SFR website
INL SFR workshop summary
ALMR/PRISM
ASME
Liquid metal fast reactors
Radioactive waste | Sodium-cooled fast reactor | [
"Chemistry",
"Technology"
] | 1,779 | [
"Environmental impact of nuclear power",
"Radioactive waste",
"Hazardous waste",
"Radioactivity"
] |
3,122,592 | https://en.wikipedia.org/wiki/European%20Automotive%20Design | European Automotive Design was a British magazine, which was closed in January 2009 because the publishing company behind it—Findlay Publications Ltd—was taken into administration by its major shareholder, Robert Findlay. When he re-invented the company as Findlay Media Ltd, he 'left behind' European Automotive Design and its sister publications (European Truck & Bus Technology and Automotive Design Asia) along with its founding editor and publisher.
References
Automobile magazines published in the United Kingdom
Magazines with year of establishment missing
Magazines disestablished in 2009
Automotive design
Design magazines
Defunct magazines published in the United Kingdom | European Automotive Design | [
"Engineering"
] | 116 | [
"Design magazines",
"Design"
] |
3,122,600 | https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Steenrod%20axioms | In mathematics, specifically in algebraic topology, the Eilenberg–Steenrod axioms are properties that homology theories of topological spaces have in common. The quintessential example of a homology theory satisfying the axioms is singular homology, developed by Samuel Eilenberg and Norman Steenrod.
One can define a homology theory as a sequence of functors satisfying the Eilenberg–Steenrod axioms. The axiomatic approach, which was developed in 1945, allows one to prove results, such as the Mayer–Vietoris sequence, that are common to all homology theories satisfying the axioms.
If one omits the dimension axiom (described below), then the remaining axioms define what is called an extraordinary homology theory. Extraordinary cohomology theories first arose in K-theory and cobordism.
Formal definition
The Eilenberg–Steenrod axioms apply to a sequence of functors Hn from the category of pairs (X, A) of topological spaces to the category of abelian groups, together with a natural transformation ∂ : Hi(X, A) → Hi−1(A) called the boundary map (here Hi−1(A) is a shorthand for Hi−1(A, ∅)). The axioms are:
Homotopy: Homotopic maps induce the same map in homology. That is, if g : (X, A) → (Y, B) is homotopic to h : (X, A) → (Y, B), then their induced homomorphisms are the same.
Excision: If (X, A) is a pair and U is a subset of A such that the closure of U is contained in the interior of A, then the inclusion map (X ∖ U, A ∖ U) → (X, A) induces an isomorphism in homology.
Dimension: Let P be the one-point space; then Hn(P) = 0 for all n ≠ 0.
Additivity: If X = ⊔α Xα, the disjoint union of a family of topological spaces Xα, then Hn(X) ≅ ⊕α Hn(Xα).
Exactness: Each pair (X, A) induces a long exact sequence in homology, via the inclusions i : A → X and j : X → (X, A):
⋯ → Hn(A) → Hn(X) → Hn(X, A) → Hn−1(A) → ⋯
If P is the one-point space, then H0(P) is called the coefficient group. For example, singular homology (taken with integer coefficients, as is most common) has as coefficients the integers.
Consequences
Some facts about homology groups can be derived directly from the axioms, such as the fact that homotopically equivalent spaces have isomorphic homology groups.
The homology of some relatively simple spaces, such as n-spheres, can be calculated directly from the axioms. From this it can be easily shown that the (n − 1)-sphere is not a retract of the n-disk. This is used in a proof of the Brouwer fixed point theorem.
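As a standard worked example (the computation itself is not spelled out in the article), any homology theory satisfying the axioms with integer coefficient group gives, in reduced homology and for n ≥ 1,
\[
\tilde{H}_k(S^n) \cong \begin{cases} \mathbb{Z}, & k = n, \\ 0, & k \neq n. \end{cases}
\]
If the (n − 1)-sphere were a retract of the n-disk, the nonzero group in degree n − 1 would have to embed as a direct summand of the vanishing reduced homology of the contractible disk, which is impossible; this is the argument alluded to above.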
Dimension axiom
A "homology-like" theory satisfying all of the Eilenberg–Steenrod axioms except the dimension axiom is called an extraordinary homology theory (dually, extraordinary cohomology theory). Important examples of these were found in the 1950s, such as topological K-theory and cobordism theory, which are extraordinary cohomology theories, and come with homology theories dual to them.
See also
Zig-zag lemma
Notes
References
Homology theory
Mathematical axioms | Eilenberg–Steenrod axioms | [
"Mathematics"
] | 614 | [
"Mathematical logic",
"Mathematical axioms"
] |
3,122,629 | https://en.wikipedia.org/wiki/Chromium%28II%29%20oxide | Chromium(II) oxide (CrO) is an inorganic compound composed of chromium and oxygen. It is a black powder that crystallises in the rock salt structure.
Hypophosphites may reduce chromium(III) oxide to chromium(II) oxide:
H3PO2 + 2 Cr2O3 → 4 CrO + H3PO4
It is readily oxidized by the atmosphere. CrO is basic, while CrO3 is acidic, and Cr2O3 is amphoteric.
CrO occurs in the spectra of luminous red novae, which occur when two stars collide. It is not known why red novae are the only objects that feature this molecule; one possible explanation is an as-yet-unknown nucleosynthesis process.
See also
Chromium(IV) oxide
Chromium(VI) oxide
References
Chromium(II) compounds
Transition metal oxides
Reducing agents
Chromium–oxygen compounds
Rock salt crystal structure | Chromium(II) oxide | [
"Chemistry"
] | 202 | [
"Reducing agents",
"Redox",
"Inorganic compounds",
"Inorganic compound stubs"
] |
3,122,802 | https://en.wikipedia.org/wiki/QuickTime%20Broadcaster | QuickTime Broadcaster is an audio and video RTP/RTSP server by Apple Inc. for Mac OS X. It is separate from Apple's QuickTime Streaming Server, as it is not a service daemon but a desktop application. It is able to stream live video and audio over a network in any QuickTime supported streaming codec.
The latest version delivers increased compatibility with Mac OS X Leopard and provides important bug fixes.
New features include:
H.264 (MPEG-4 Part 10 video) live broadcasting
3G streaming support for sending live broadcasts to multimedia enabled cell phones
Dramatically improved performance for streaming 640x480 30fps video
Increased standards support including 3GPP and ISMA (Internet Streaming Media Alliance)
External links
Broadcaster
MacOS-only software made by Apple Inc.
Streaming software
MacOS Server | QuickTime Broadcaster | [
"Technology"
] | 166 | [
"Computing stubs",
"Computer network stubs"
] |
3,122,904 | https://en.wikipedia.org/wiki/American%20Society%20for%20Engineering%20Education | The American Society for Engineering Education (ASEE) is a non-profit member association, founded in 1893, dedicated to promoting and improving engineering and engineering technology education. The purpose of ASEE is the advancement of education in all of its functions which pertain to engineering and allied branches of science and technology, including the processes of teaching and learning, counseling, research, extension services and public relations. ASEE administers the engineering technology honor society Tau Alpha Pi.
History
A full reading of the history of ASEE can be found in a 1993 centennial article in the Journal of Engineering Education.
Founded initially as the Society for the Promotion of Engineering Education (SPEE) in 1893, the society was created at a time of great growth in American higher education. In 1862, Congress passed the Morrill Land-Grant Act, which provided money for states to establish public institutions of higher education. These institutions focused on providing practical skills, especially "for the benefit of Agriculture and the Mechanic Arts". As a result of increasingly available higher education, more Americans started entering the workforce with advanced training in applied fields of knowledge. However, they often lacked grounding in the science and engineering principles underlying this practical knowledge.
After a generation of students had passed through these new public universities, professors of engineering began to question whether they should adopt a more rigorous approach to teaching the fundamentals of their field. Ultimately, they concluded that engineering curricula should stress fundamental scientific and mathematical principles, not hands-on apprenticeship experiences. To organize support for this approach to engineering education, SPEE was formed in the midst of the 1893 Chicago World’s Fair. Known as the World's Columbian Exposition, this event heralded the promise of science and engineering by introducing many Americans, for example, to the wonders of electricity. Emerging out of the Fair’s World Engineering Congress, SPEE members dedicated themselves to improving engineering education at the classroom level. Over its history, the society has put out several reports on the subject, such as the Mann Report (1907), the Wickenden Study (1920s), and the Grinter Report (1955).
During World War II, the federal government started to place more emphasis on research, prompting SPEE to form the Engineering College Research Association (ECRA), which was more concerned with research than SPEE had ever been. The ECRA spoke for most engineering researchers, sought federal funds, and collected and published information on academic engineering research. Colonel and University Dean Blake R. Van Leer was the chairman and oversaw several committees during this process. After the war, the desire to integrate the less research-oriented SPEE with the ECRA resulted in the disbanding of SPEE and the formation of ASEE in 1946.
ASEE was a volunteer-run organization through the 1950s. In 1961, ASEE established a staff headquarters in Washington, DC, and undertook a more activist posture. However, through the 1960s, the Vietnam War and social unrest, in general, made the mood on many campuses anti-technology, anti-business, and anti-establishment. In the 1960s and 1970s, ASEE presidents Merritt Williams and George Hawkins reorganized ASEE to better represent its members and return its focus to teaching. As a result of this new focus, ASEE began to administer several teaching-related government contracts, including NASA's summer faculty fellowships and the Defense Department's Civil Defense Summer Institutes and Fellowships. ASEE administered over ten government contracts, including the prestigious National Science Foundation's Graduate Research Fellowship Program until 2019.
Another result of the renewed emphasis on teaching was ASEE’s initiative for recruiting minorities and women into engineering. ASEE created the Black Engineering College Development program which used industry funding to upgrade engineering faculty in traditionally black colleges and to develop public information on these schools. ASEE also received several grants in the 1970s to research the status of women and American Indians and develop programs to attract more of these students to enter engineering. Since then, ASEE has continued to release studies on the subject in its Journal of Engineering Education, and has created divisions specifically devoted to developing programs and research in this area.
Publications
ASEE produces many publications on the topic of engineering education, including the general-interest Prism, a monthly magazine covering the pervasive role of engineering in the world, the journals Journal of Engineering Education and Advances in Engineering Education, peer-reviewed journals covering research in engineering education, Profiles of Engineering and Technology Colleges, providing data on engineering colleges and universities, and the eGFI: Engineering, Go For It! magazine and associated website, designed to attract high school students and their parents and teachers to engineering.
Prism
The magazine reports on cutting-edge technology and other important trends in engineering education, including:
New instructional methods
Innovative curricula
Lifelong learning
Research opportunities, trends, and developments
Education and research projects with government and industry K-12 outreach activities that encourage youth to pursue studies and careers in engineering.
Journal of Engineering Education
The Journal of Engineering Education is a peer-reviewed academic journal published quarterly in partnership with a global community of engineering education societies and associations. The journal is a founding member of the International Federation of Engineering Education Societies.
Advances in Engineering Education
Advances in Engineering Education covers engineering education practice, especially the creative use of multimedia.
Profiles of Engineering and Engineering Technology Colleges
This directory provides profiles of United States and Canadian schools offering undergraduate and graduate engineering, as well as engineering technology programs with the intent of preparing prospective students for their future education in engineering.
Computers in Education
Computers in Education is an academic journal covering all aspects of computation in education. It is published by the Northeast Consortium for Engineering Education on behalf of the Computers in Education Division of the American Society for Engineering Education.
Presidents since 2000
2000-2001 - Wallace T. Fowler
2001-2002 - Gerald S. Jakubowski
2002-2003 - Eugene M. DeLoatch
2003-2004 - Duane L. Abata
2004-2005 - Sherra E. Kerns
2005-2006 - Ronald Barr
2006-2007 - David Wormley
2007-2008 - Jim Melsa
2008-2009 - Sarah Rajala
2009-2010 - J.P. Moshen
2010-2011 - Renata Engel
2011-2012 - Don Giddens
2012-2013 - Walter Buchanan
2013-2014 - Kenneth Galloway
2014-2015 - Nicholas Altiero
2015-2016 - Joe Rencis
2016-2017 - Louis Martin-Vega
2017-2018 - Bevlee Watford
2018-2019 - Stephanie Farrell
2019-2020 - Stephanie G. Adams
2020-2021 - Sheryl Sorby
2021-2022 - Adrienne Minerick
2022-2023 - Jenna Carpenter
2023-2024 - Doug Tougaw
2025-2026 - Christi Luks
Awards
ASEE annually recognizes the outstanding accomplishments of engineering and engineering technology educators through the ASEE awards program. By their commitment to their profession, desire to further the Society's mission, and participation in civic and community affairs, ASEE award winners exemplify the best in engineering and engineering technology education.
Current awards
Former awards
George Westinghouse Award (1946-1999)
Conferences
ASEE and its members organize a number of conferences, meetings, and workshops, foremost among them the ASEE Annual Conference. Other events include regional member meetings, professional-interest focused conferences, and K-12 teacher training.
Fellowships
ASEE administers a number of fellowship and research opportunities with funding provided by federal agencies including the Department of Defense (DOD), NASA, and the National Science Foundation (NSF). These range from programs that provide summer internships for high school students to research programs for faculty members during the summer or while on sabbatical. Programs include undergraduate and graduate research support and postdoctoral research programs for recent PhDs at government and industrial research facilities. ASEE provides support tasks that include outreach and promotion activities, application processing support, application review activities, and administration of stipend and tuition payments for program participants.
See also
Ira Osborn Baker
References
External links
Triangle Coalition Membership
Engineering organizations
1893 establishments in Washington, D.C. | American Society for Engineering Education | [
"Engineering"
] | 1,621 | [
"nan"
] |
3,123,067 | https://en.wikipedia.org/wiki/Corey%E2%80%93Fuchs%20reaction | The Corey–Fuchs reaction, also known as the Ramirez–Corey–Fuchs reaction, is a series of chemical reactions designed to transform an aldehyde into an alkyne. The formation of the 1,1-dibromoolefins via phosphine-dibromomethylenes was originally discovered by Desai, McKelvie and Ramirez. The phosphine can be partially substituted by zinc dust, which can improve yields and simplify product separation. The second step of the reaction to convert dibromoolefins to alkynes is known as Fritsch–Buttenberg–Wiechell rearrangement. The overall combined transformation of an aldehyde to an alkyne by this method is named after its developers, American chemists Elias James Corey and Philip L. Fuchs.
By suitable choice of base, it is often possible to stop the reaction at the 1-bromoalkyne, a useful functional group for further transformation.
Reaction mechanism
The Corey–Fuchs reaction is based on a special case of the Wittig reaction, where two equivalents of triphenylphosphine are used with carbon tetrabromide to produce the triphenylphosphine-dibromomethylene ylide.
This ylide undergoes a Wittig reaction when exposed to an aldehyde. Alternatively, using a ketone generates a gem-dibromoalkene.
The second part of the reaction converts the isolable gem-dibromoalkene intermediate to the alkyne. Deuterium-labelling studies show that this step proceeds through a carbene mechanism. Lithium–bromine exchange is followed by α-elimination to afford the carbene. A 1,2-shift then affords the deuterium-labelled terminal alkyne. The 50% H-incorporation could be explained by deprotonation of the (acidic) terminal deuterium with excess BuLi.
See also
Appel reaction
Fritsch-Buttenberg-Wiechell rearrangement
Seyferth-Gilbert homologation
Wittig reaction
References
Corey, E. J.; Fuchs, P. L. Tetrahedron Lett. 1972, 13, 3769–3772.
Mori, M.; Tonogaki, K.; Kinoshita, A. Organic Syntheses, Vol. 81, p. 1 (2005). (Article )
Marshall, J. A.; Yanik, M. M.; Adams, N. D.; Ellis, K. C.; Chobanian, H. R. Organic Syntheses, Vol. 81, p. 157 (2005). (Article )
N. B. Desai, N. McKelvie, F. Ramirez, J. Am. Chem. Soc., Vol. 84, pp. 1745–1747 (1962).
External links
Corey-Fuchs Alkyne Synthesis
Carbon-carbon bond forming reactions
Rearrangement reactions
Name reactions | Corey–Fuchs reaction | [
"Chemistry"
] | 615 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Rearrangement reactions",
"Organic reactions"
] |
3,123,914 | https://en.wikipedia.org/wiki/Feigenbaum%20function | In the study of dynamical systems the term Feigenbaum function has been used to describe two different functions introduced by the physicist Mitchell Feigenbaum:
the solution to the Feigenbaum-Cvitanović functional equation; and
the scaling function that described the covers of the attractor of the logistic map
Idea
Period-doubling route to chaos
In the logistic map,
xn+1 = r xn (1 − xn),
we have a function f(x) = r x (1 − x), and we want to study what happens when we iterate the map many times. The map might fall into a fixed point, a fixed cycle, or chaos. When the map falls into a stable fixed cycle of length N, we would find that the graph of f^N (the N-fold composite of f) and the graph of the identity map intersect at N points, and the slope of the graph of f^N is bounded in (−1, +1) at those intersections.
For example, when r is between 1 and 3, there is a single attracting intersection, with slope bounded in (−1, +1), indicating that it is a stable single fixed point.
As r increases beyond r = 3, the intersection point splits in two, which is a period doubling. For example, for r slightly above 3 the graph of f² meets the diagonal at three points near the old fixed point, with the middle one unstable, and the two others stable.
As r approaches r ≈ 3.45, another period-doubling occurs in the same way. The period-doublings occur more and more frequently, until at a certain r∞ ≈ 3.5699, the period doublings become infinite, and the map becomes chaotic. This is the period-doubling route to chaos.
Scaling limit
Looking at the graphs, one can notice that at the point of chaos r∞ ≈ 3.5699, the curve of the infinitely iterated map looks like a fractal. Furthermore, as we repeat the period-doublings, the graphs seem to resemble each other, except that they are shrunken towards the middle, and rotated by 180 degrees.
This suggests to us a scaling limit: if we repeatedly double the function, then scale it up by a certain constant α, then at the limit, we would end up with a function g that satisfies g(x) = −α g(g(x/α)). Further, as the period-doubling intervals become shorter and shorter, the ratio between two successive period-doubling intervals converges to a limit, the first Feigenbaum constant δ ≈ 4.6692.
The constant α can be numerically found by trying many possible values. For the wrong values, the map does not converge to a limit, but when it is α ≈ 2.5029, it converges. This is the second Feigenbaum constant.
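As a concrete illustration of such a numerical search (this sketch is not part of the original text, and the helper names iterate and superstable_r are invented for it), the first constant δ can be estimated from the "superstable" parameters of the logistic map, the values of r at which the critical point x = 1/2 lies on a cycle of period 2^n; the ratios of successive spacings of these parameters converge to δ ≈ 4.669.

```python
# Illustrative sketch: estimating the first Feigenbaum constant (delta ~ 4.669)
# from superstable parameters r_n of the logistic map x -> r*x*(1-x).

def iterate(r, x, n):
    """Apply the logistic map n times starting from x."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def superstable_r(period, r_guess):
    """Newton's method on F(r) = f_r^period(1/2) - 1/2, derivative by central difference."""
    r = r_guess
    for _ in range(60):
        f = iterate(r, 0.5, period) - 0.5
        h = 1e-8
        df = (iterate(r + h, 0.5, period) - iterate(r - h, 0.5, period)) / (2 * h)
        step = f / df
        r -= step
        if abs(step) < 1e-13:
            break
    return r

rs = [2.0, 1 + 5 ** 0.5]    # superstable parameters for periods 1 and 2
delta = 4.669                # rough guess, used only to extrapolate the next search point
for n in range(2, 10):
    guess = rs[-1] + (rs[-1] - rs[-2]) / delta
    rs.append(superstable_r(2 ** n, guess))
    delta = (rs[-2] - rs[-3]) / (rs[-1] - rs[-2])
    print(f"period {2**n:4d}  r_n = {rs[-1]:.10f}  delta estimate = {delta:.6f}")
```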
Chaotic regime
In the chaotic regime, r > r∞ ≈ 3.5699, the limit of the iterates of the map becomes chaotic dark bands interspersed with non-chaotic bright bands.
Other scaling limits
When r approaches the accumulation point of period-doublings inside the period-3 window (which opens at r = 1 + √8 ≈ 3.8284), we have another period-doubling approach to chaos, but this time with periods 3, 6, 12, ... This again has the same Feigenbaum constants δ and α. The limit of the rescaled iterates is also the same function g. This is an example of universality.
We can also consider the period-tripling route to chaos by picking a sequence of parameter values such that each is the lowest value in the corresponding period-3^n window of the bifurcation diagram. The sequence converges to a limit, and the rescaled iterates converge to the fixed point of the corresponding functional equation; this route has its own, different pair of Feigenbaum constants. As another example, period-4-pling has a pair of Feigenbaum constants distinct from that of period-doubling, even though period-4-pling is reached by two period-doublings. In detail, define a sequence such that each term is the lowest value in the corresponding period-4^n window of the bifurcation diagram. This sequence also converges to a limit, and it has yet another pair of Feigenbaum constants.
In general, each period-multiplying route to chaos has its own pair of Feigenbaum constants. In fact, there are typically more than one. For example, for period-7-pling, there are at least 9 different pairs of Feigenbaum constants.
Generally, , and the relation becomes exact as both numbers increase to infinity: .
Feigenbaum-Cvitanović functional equation
This functional equation arises in the study of one-dimensional maps that, as a function of a parameter, go through a period-doubling cascade. Discovered by Mitchell Feigenbaum and Predrag Cvitanović, the equation is the mathematical expression of the universality of period doubling. It specifies a function g and a parameter α by the relation
g(x) = −α g(g(x/α))
with the initial conditions g(0) = 1, g′(0) = 0 and g″(0) < 0. For a particular form of solution with a quadratic dependence of the solution
near x = 0, α ≈ 2.5029 is one of the Feigenbaum constants.
The power series of g is approximately g(x) ≈ 1 − 1.5276x² + 0.1048x⁴ + ⋯
Renormalization
The Feigenbaum function can be derived by a renormalization argument.
The Feigenbaum function satisfies the functional equation above for any map on the real line at the onset of chaos.
Scaling function
The Feigenbaum scaling function provides a complete description of the attractor of the logistic map at the end of the period-doubling cascade. The attractor is a Cantor set, and just as the middle-third Cantor set, it can be covered by a finite set of segments, all bigger than a minimal size dn. For a fixed dn the set of segments forms a cover Δn of the attractor. The ratio of segments from two consecutive covers, Δn and Δn+1 can be arranged to approximate a function σ, the Feigenbaum scaling function.
See also
Logistic map
Presentation function
Notes
Bibliography
Bound as Order in Chaos, Proceedings of the International Conference on Order and Chaos held at the Center for Nonlinear Studies, Los Alamos, New Mexico 87545, USA 24–28 May 1982, Eds. David Campbell, Harvey Rose; North-Holland Amsterdam .
Chaos theory
Dynamical systems | Feigenbaum function | [
"Physics",
"Mathematics"
] | 1,107 | [
"Mechanics",
"Dynamical systems"
] |
3,124,236 | https://en.wikipedia.org/wiki/European%20embedded%20value | The European embedded value (EEV) is an effort by the CFO Forum to standardize the calculation of the embedded value. For this purpose the CFO Forum has released guidelines how embedded value should be calculated.
There is a lot of subjectivity involved in calculating the value of a life insurer. Insurance contracts are long-term contracts, so the value of the company now is dependent on how each of those contracts end up performing. Profit is made if the policyholder does not die, for example, and just contributes premiums over many years. Losses are possible for policies where the insured dies soon after signing the contract. And profitability is also affected by whether (and when) a policy might terminate early.
An actuary calculates an embedded value by making certain assumptions about life expectancy, persistency, investment conditions, and so on - thus making an estimate of what the company is worth now. But if each person has a different opinion on how things will turn out, you could expect a range of inconsistent estimates of the worth of the company. With this range of approaches, it is very difficult to compare EV calculations between companies.
The CFO Forum was formed to consider general issues relevant to measuring the value of insurance companies. The EEV was the output of this forum, and allows greater consistency in such calculations, making them more useful.
Types
EEV can be "real world" or "market consistent". The former takes the best estimate for parameters that are available, whereas the latter uses a slightly constrained set of parameters which are close to best estimate, but which produce results which match market-related hedge costs.
Real-world EEV usually uses a risk discount rate made up of the risk-free rate plus a risk margin which reflects the weighted average cost of capital and Beta from the CAPM model. Using company-level economic models clearly reflects a top-down approach to determining the risk discount rate.
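As a sketch of one standard ingredient of that construction (the formula below is the textbook CAPM cost of equity, not a quotation from the CFO Forum principles), the beta-based risk discount rate (RDR) can be written as
\[
\text{RDR} = r_f + \beta \left( E[r_m] - r_f \right),
\]
where r_f is the risk-free rate, E[r_m] is the expected market return, and β measures the sensitivity of the insurer's returns to the market; the β-weighted term is the risk margin added on top of the risk-free rate.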
Market-consistent EEV makes use of a bottom-up approach for determining the risk discount rate, which produces a number which equals the risk free rate plus an explicit allowance for operational risk and market risk.
Although initially there was an equal use of these two types of EEV, as time passes companies appear to be moving towards the market-consistent approach.
References
General
European Embedded Value Principles
Tillinghast publication on trends in EEV usage
Actuarial science | European embedded value | [
"Mathematics"
] | 483 | [
"Applied mathematics",
"Actuarial science"
] |
3,124,343 | https://en.wikipedia.org/wiki/Kokee%20Ditch | The Kōkee Ditch is an irrigation canal on the island of Kauai.
In 1923, construction began on the Kōkee Ditch system to open the mauka hills to sugar cane production.
By 1926, the Kōkee Ditch was completed, diverting water from Mohihi Stream and the headwaters of the Waimea River in the Alakai Swamp at an altitude of about 3400 feet. About one-fourth of the Kōkee Ditch supply irrigated the highland sugar cane fields below Puu Ōpae reservoir on Niu Ridge, and the balance irrigated the highland fields east of Kōkee Road.
Canals in Hawaii
Geography of Kauai
Irrigation projects
Irrigation in the United States
Canals opened in 1926 | Kokee Ditch | [
"Engineering"
] | 144 | [
"Irrigation projects"
] |
3,124,369 | https://en.wikipedia.org/wiki/Simple%20Features | Simple Features (officially Simple Feature Access) is a set of standards that specify a common storage and access model of geographic features made of mostly two-dimensional geometries (point, line, polygon, multi-point, multi-line, etc.) used by geographic databases and geographic information systems.
It is formalized by both the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO).
The ISO 19125 standard comes in two parts. Part 1, ISO 19125-1 (SFA-CA for "common architecture"), defines a model for two-dimensional simple features, with linear interpolation between vertices, defined in a hierarchy of classes; this part also defines representation of geometry in text and binary forms. Part 2 of the standard, ISO 19125-2 (SFA-SQL), defines a "SQL/MM" language binding API for SQL under the prefix "ST_". The open-access OGC standards additionally cover APIs for CORBA and OLE/COM, although these have lagged behind the SQL one and are not standardized by ISO. There are also adaptations to other languages covered below.
The ISO/IEC 13249-3 SQL/MM Spatial extends the Simple Features data model, originally based on straight-line segments, adding circular interpolations (e.g. circular arcs) and other features like coordinate transformations and methods for validating geometries, as well as Geography Markup Language support.
Details
Part 1
The geometries are associated with spatial reference systems. The standard also specifies attributes, methods and assertions with the geometries, in the object-oriented style. In general, a 2D geometry is simple if it contains no self-intersection. The specification defines DE-9IM spatial predicates and several spatial operators that can be used to generate new geometries from existing geometries.
Part 2
Part 2 is a SQL binding to Part 1, providing a translation of the interface to non-object-oriented environments. For example, instead of a someGeometryObject.isEmpty() as in Part 1, SQL/MM uses a ST_IsEmpty(...) function in SQL.
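As a rough illustration of the two naming styles (this example is not part of the standard itself; Shapely is a Python package built on GEOS that follows the Part 1 object-style model, and the coordinates below are arbitrary):

```python
# Illustrative use of the Simple Features geometry model via the Shapely package.
from shapely.geometry import Point, Polygon
from shapely import wkt

square = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
pt = Point(1, 1)

# DE-9IM-based spatial predicates (Part 1 style; the SQL binding uses ST_Contains, ST_Within, ...):
print(square.contains(pt))             # True
print(pt.within(square))               # True
print(square.intersects(Point(5, 5)))  # False
print(Point(5, 5).is_empty)            # False; corresponds to ST_IsEmpty in the SQL option

# Spatial operators that generate new geometries from existing ones:
disc = pt.buffer(0.5)
print(disc.intersection(square).area)  # about pi * 0.25

# Well-known text, the textual representation defined in Part 1:
print(square.wkt)                      # e.g. 'POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0))'
print(wkt.loads("POINT (3 4)"))
```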
Spatial
The spatial extension adds the datatypes "Circularstring", "CompoundCurve", "CurvePolygon", "PolyhedralSurface", the last of which is also included into the OGC standard. It also defines the SQL/MM versions of these types and operations on them.
Implementations
Direct implementations of Part 2 (SQL/MM) include:
MySQL Spatial Extensions. Up to MySQL 5.5, all of the functions that calculate relations between geometries were implemented using bounding boxes, not the actual geometries. Starting from version 5.6, MySQL offers support for precise object shapes.
MonetDB/GIS extension for MonetDB.
PostGIS extension for PostgreSQL, also supporting some of the SQL/MM Spatial features.
SpatiaLite extension for SQLite
Oracle Spatial, which also implements some of the advanced features from SQL/MM Spatial.
IBM Db2 Spatial Extender and IBM Informix Spatial DataBlade.
Microsoft SQL Server since version 2008, with significant additions in the 2012 version.
SAP Sybase IQ.
SAP HANA as of 1.0 SPS6.
Adaptations include:
Implementations of the CORBA and OLE/COM interfaces detailed above are mainly produced by commercial vendors maintaining legacy technology.
R: The sf package implements Simple Features and contains functions that bind to GDAL for reading and writing data, to GEOS for geometrical operations, and to PROJ for projection conversions and datum transformations.
The GDAL library implements the Simple Features data model in its OGR component.
The Java-based deegree framework implements SFA (part 1) and various other OGC standards.
The Rust library geo_types implements geometry primitives that adhere to the simple feature access standards.
GeoSPARQL is an OGC standard that is intended to allow geospatially-linked data representation and querying based on RDF and SPARQL by defining an ontology for geospatial reasoning supporting a small Simple Features (as well as DE-9IM and RCC8) RDFS/OWL vocabulary for GML and WKT literals.
As of 2012, various NoSQL databases had very limited support for "anything more complex than a bounding box or proximity search".
See also
DE-9IM
Well-known text
Well-known binary
References
External links
Simple Features SWG
Standard documents
ISO/IEC:
ISO 19125-1:2004 Geographic information -- Simple feature access -- Part 1: Common architecture
ISO 19125-2:2004 Geographic information -- Simple feature access -- Part 2: SQL option
OpenGIS
OpenGIS Implementation Specification for Geographic information - Simple feature access - Part 1: Common architecture (05-126, 06-103r3, 06-103r4), current version 1.2.1
OpenGIS Simple Feature Access - Part 2: SQL Option (99-054, 05-134, 06-104r3, 06-104r4), current version 1.2.1, formerly OpenGIS Simple Features [Implementation Specification] for SQL
OpenGIS Simple Features Implementation Specification for CORBA (99-054), current version 1.0
OpenGIS Simple Features Implementation Specification for OLE/COM (99-050), current version 1.1
Geographic information systems
Open Geospatial Consortium
ISO/TC 211
Spatial database management systems | Simple Features | [
"Technology"
] | 1,150 | [
"Information systems",
"Geographic information systems"
] |
3,124,498 | https://en.wikipedia.org/wiki/Semantic%20resolution%20tree | A semantic resolution tree is a tree used for the definition of the semantics of a programming language. They have often been used as a theoretical tool for showing the unsatisfiability of clauses in first-order predicate logic.
References
Trees (data structures) | Semantic resolution tree | [
"Technology"
] | 54 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,124,804 | https://en.wikipedia.org/wiki/Tate%20conjecture | In number theory and algebraic geometry, the Tate conjecture is a 1963 conjecture of John Tate that would describe the algebraic cycles on a variety in terms of a more computable invariant, the Galois representation on étale cohomology. The conjecture is a central problem in the theory of algebraic cycles. It can be considered an arithmetic analog of the Hodge conjecture.
Statement of the conjecture
Let V be a smooth projective variety over a field k which is finitely generated over its prime field. Let ks be a separable closure of k, and let G be the absolute Galois group Gal(ks/k) of k. Fix a prime number ℓ which is invertible in k. Consider the ℓ-adic cohomology groups (coefficients in the ℓ-adic integers Zℓ, scalars then extended to the ℓ-adic numbers Qℓ) of the base extension of V to ks; these groups are representations of G. For any i ≥ 0, a codimension-i subvariety of V (understood to be defined over k) determines an element of the cohomology group
W = H2i(Vks, Qℓ(i)),
which is fixed by G. Here Qℓ(i) denotes the ith Tate twist, which means that this representation of the Galois group G is tensored with the ith power of the cyclotomic character.
The Tate conjecture states that the subspace WG of W fixed by the Galois group G is spanned, as a Qℓ-vector space, by the classes of codimension-i subvarieties of V. An algebraic cycle means a finite linear combination of subvarieties; so an equivalent statement is that every element of WG is the class of an algebraic cycle on V with Qℓ coefficients.
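In other words (one common way of writing the statement, not a verbatim formula from the article), the conjecture says that the ℓ-adic cycle class map
\[
CH^i(V) \otimes_{\mathbb{Z}} \mathbb{Q}_\ell \longrightarrow H^{2i}\!\left(V_{k_s}, \mathbb{Q}_\ell(i)\right)^{G}
\]
is surjective, where CH^i(V) denotes the Chow group of codimension-i cycles on V.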
Known cases
The Tate conjecture for divisors (algebraic cycles of codimension 1) is a major open problem. For example, let f : X → C be a morphism from a smooth projective surface onto a smooth projective curve over a finite field. Suppose that the generic fiber F of f, which is a curve over the function field k(C), is smooth over k(C). Then the Tate conjecture for divisors on X is equivalent to the Birch and Swinnerton-Dyer conjecture for the Jacobian variety of F. By contrast, the Hodge conjecture for divisors on any smooth complex projective variety is known (the Lefschetz (1,1)-theorem).
Probably the most important known case is that the Tate conjecture is true for divisors on abelian varieties. This is a theorem of Tate for abelian varieties over finite fields, and of Faltings for abelian varieties over number fields, part of Faltings's solution of the Mordell conjecture. Zarhin extended these results to any finitely generated base field. The Tate conjecture for divisors on abelian varieties implies the Tate conjecture for divisors on any product of curves C1 × ... × Cn.
The (known) Tate conjecture for divisors on abelian varieties is equivalent to a powerful statement about homomorphisms between abelian varieties. Namely, for any abelian varieties A and B over a finitely generated field k, the natural map
Hom(A, B) ⊗ Zℓ → HomG(Tℓ(A), Tℓ(B))
is an isomorphism. In particular, an abelian variety A is determined up to isogeny by the Galois representation on its Tate module H1(Aks, Zℓ).
The Tate conjecture also holds for K3 surfaces over finitely generated fields of characteristic not 2. (On a surface, the nontrivial part of the conjecture is about divisors.) In characteristic zero, the Tate conjecture for K3 surfaces was proved by André and Tankeev. For K3 surfaces over finite fields of characteristic not 2, the Tate conjecture was proved by Nygaard, Ogus, Charles, Madapusi Pera, and Maulik.
Totaro (2017) surveys known cases of the Tate conjecture.
Related conjectures
Let X be a smooth projective variety over a finitely generated field k. The semisimplicity conjecture predicts that the representation of the Galois group G = Gal(ks/k) on the ℓ-adic cohomology of X is semisimple (that is, a direct sum of irreducible representations). For k of characteristic 0, it has been shown that the Tate conjecture (as stated above) implies the semisimplicity of these ℓ-adic cohomology representations.
For k finite of order q, Tate showed that the Tate conjecture plus the semisimplicity conjecture would imply the strong Tate conjecture, namely that the order of the pole of the zeta function Z(X, t) at t = q−j is equal to the rank of the group of algebraic cycles of codimension j modulo numerical equivalence.
Like the Hodge conjecture, the Tate conjecture would imply most of Grothendieck's standard conjectures on algebraic cycles. Namely, it would imply the Lefschetz standard conjecture (that the inverse of the Lefschetz isomorphism is defined by an algebraic correspondence); that the Künneth components of the diagonal are algebraic; and that numerical equivalence and homological equivalence of algebraic cycles are the same.
Notes
References
External links
James Milne, The Tate conjecture over finite fields (AIM talk).
Topological methods of algebraic geometry
Diophantine geometry
Conjectures
Unsolved problems in number theory | Tate conjecture | [
"Mathematics"
] | 1,100 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
3,124,816 | https://en.wikipedia.org/wiki/Hedge%20%28linguistics%29 | In linguistics (particularly sub-fields like applied linguistics and pragmatics), a hedge is a word or phrase used in a sentence to express ambiguity, probability, caution, or indecisiveness about the remainder of the sentence, rather than full accuracy, certainty, confidence, or decisiveness. Hedges can also allow speakers and writers to introduce (or occasionally even eliminate) ambiguity in meaning and typicality as a category member. Hedging in category membership is used in reference to the prototype theory, to signify the extent to which items are typical or atypical members of different categories. Hedges might be used in writing, to downplay a harsh critique or a generalization, or in speaking, to lessen the impact of an utterance due to politeness constraints between a speaker and addressee.
Typically, hedges are adjectives or adverbs, but can also consist of clauses such as one use of tag questions. In some cases, a hedge could be regarded as a form of euphemism. Linguists consider hedges to be tools of epistemic modality; allowing speakers and writers to signal a level of caution in making an assertion. Hedges are also used to distinguish items into multiple categories, where items can be in a certain category to an extent.
Types of hedges
Hedges may take the form of many different parts of speech, for example:
There might just be a few insignificant problems we need to address. (adjective)
The party was somewhat spoiled by the return of the parents. (adverb)
I'm not an expert but you might want to try restarting your computer. (clause)
That's false, isn't it? (tag question clause)
Using hedges
Hedges are often used in everyday speech, and they can serve many different purposes. Below are a few ways to use hedges with examples to clarify these different functions.
Category membership
A very common use of hedges can be found in signaling typicality of category membership. Different hedges can signal prototypical membership in a category, meaning that member has most of the characteristics that are exemplary of the category. For example;
A robin is a bird par excellence.
This signifies that a robin has all of the typical characteristics of a bird, i.e. feathers, small, lives in a nest, etc.
Loosely speaking, a bat is a bird.
This sentence displays that a bat could technically be called a bird, but the hedge loosely speaking signifies that a bat has fringe membership in the category "bird".
Epistemic hedges
In some cases, "I don't know" functions as a prepositioned hedge—a forward-looking stance marker displaying that the speaker is not fully committed to what follows in their turn of talk.
Hedges may intentionally or unintentionally be employed in both spoken and written language since they are crucially important in communication. Hedges help speakers and writers indicate more precisely how the cooperative principle (expectations of quantity, quality, manner, and relevance) is observed in assessments. For example,
All I know is smoking is harmful to your health.
Here, it can be observed that information conveyed by the speaker is limited by adding all I know. By so saying, the speaker wants to inform that they are not only making an assertion but observing the maxim of quantity as well.
They told me that they are married.
If the speaker were to say simply They are married and did not know for sure if that were the case, they might violate the maxim of quality, since they were saying something that they do not know to be true or false. By prefacing the remark with They told me that, the speaker wants to confirm that they are observing the conversational maxim of quality.
I am not sure if all of these are clear to you, but this is what I know.
The above example shows that hedges are good indications the speakers are not only conscious of the maxim of manner, but they are also trying to observe them.
By the way, you like this car?
By using by the way, what has been said by the speakers is not relevant to the moment in which the conversation takes place. Such a hedge can be found in the middle of speakers' conversation as the speaker wants to switch to another topic that is different from the previous one. Therefore, by the way functions as a hedge indicating that the speaker wants to drift into another topic or to stop the previous topic.
Hedges in non-English languages
Hedges are used as a tool of communication and are found in all of the world's languages. Examples of hedges in languages besides English are as follow:
genre (French)
Il était, genre, grand. (He was, like, tall.)
eigentlich (German)
După câte am înţeles (Romanian)
sora dumneavoastră crede că omul nu poate iubi decât o singură dată în viaţă. (As far as I understood, your sister thinks that man can only love once in his life.)
When this phrase has full syntactic complementation, speakers emphasize their lack of knowledge or display reluctance to answer. However, without an object complement, speakers display uncertainty about the truth of the following proposition or about its sufficiency as an answer.
Hedges in fuzzy language
Hedges are generally used to either add or take away fuzziness or obscurity in a given situation, often through the use of modal auxiliaries or approximates. Fuzzy language refers to the strategic manipulation of hedges so as to deliberately introduce ambiguity into a statement. Hedges can also be used to express sarcasm as a way of making sentences more vague in written form.
Sapphire works really hard.
In this sentence, the word really can make the sentence fuzzy depending on the tone of the sentence. It could be serious (where Sapphire really is hard-working and deserves a raise or promotion) or sarcastic (where Sapphire is not contributing to the work).
Lillian sure nailed her phonetics exam.
In this sentence, sure is used sarcastically to create vagueness.
Evasive hedging
Hedging can be used as an evasive tool. For example, when expectations are not met or when people want to avoid answering a question. This is seen below:
A: What did you think of Steve?B: As far as I can tell, he seems like a good guy.
A: What did you think about Erica's presentation?B: I mean, it wasn't the best.
Hedges and politeness
Hedges can also be used to politely respond negatively to commands and requests by others.
A: Are you coming to my ceremony tonight?
B: I might, I'll have to see.
A: Did you like that book?
B: Personally, it wasn't my favorite, but it isn't bad I suppose.
Incorrect usage of hedges
There are cases in which particular hedges cannot be used or are considered strange given the context.
Loosely speaking, my computer is also my television.
*Loosely speaking, my computer is an electronic device.
In the first sentence, 'loosely speaking' is used correctly, as it precedes a somewhat inaccurate, perhaps interpretive picture of the computer's identity. In the second sentence, 'loosely speaking' is used when the phrase 'broadly speaking' would be more apt: the description itself is accurate, but more general in nature.
Hedging strategies
Source:
Indetermination – serves to augment the uncertainty of a statement or response
Depersonalisation – circumvents the use of direct reference of a specific subject, creating fuzziness around who referent of the sentence is
Subjectivisation – to use verbs regarding the action of thought to express subjectivity about a claim (such as to suppose, think, or guess)
Limitation – narrowing the category membership of a subject in order to add clarity
See also
Polite fiction
Euphemism
Epistemic modality
References
Further reading
External links
Hedged Assertion
Parts of speech
Ambiguity
Euphemisms
Pragmatics | Hedge (linguistics) | [
"Technology"
] | 1,614 | [
"Parts of speech",
"Components"
] |
3,124,840 | https://en.wikipedia.org/wiki/HE%200437-5439 | HE 0437-5439 is a massive, unbound hypervelocity star (HVS), also called HVS3. It is a main sequence B-type star located in the Dorado constellation. It was discovered in 2005 with the Kueyen telescope, which is part of the European Southern Observatory's Very Large Telescope array. HE 0437-5439 is a young star, with an age of around 30 million years. The mass of the star is almost nine times greater than the mass of the Sun and the star is located 200,000 light years away in the direction of the Dorado constellation, 16 degrees northwest of the Large Magellanic Cloud (LMC) and farther away than the LMC. The star appears to be receding at an extremely high velocity of , or . At this speed, the star is no longer gravitationally bound and will leave the Milky Way galaxy system and escape into intergalactic space. It was thought to have originated in the LMC and been ejected from it soon after birth. This could happen if it originally was one of a pair of stars and if there is a supermassive black hole in the LMC.
In 2010 a study was published in which its proper motion was estimated using images from the Hubble Space Telescope from 2006 and 2009. This ruled out the possibility that the star came from the Large Magellanic Cloud, but was consistent with the hypothesis that it was ejected from the center of the Milky Way. Given its velocity, this would have occurred 100 million years ago. However, the star seems to be at most 20 million years old, which implies that it is a blue straggler, a star born from the merger of a binary star system, which was earlier ejected from the center of the Milky Way. In order for this to happen, there must have originally been a three-star system, or else there were two black holes and just the two stars.
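As a rough order-of-magnitude check (not a calculation from either study; it uses the often-quoted present-day speed of about 723 km/s and ignores the Galaxy's gravitational deceleration along the trajectory), a straight flight over the star's roughly 200,000-light-year distance would take
\[
t \sim \frac{d}{v} \approx \frac{2\times 10^{5}\,\mathrm{ly} \times 9.46\times 10^{12}\,\mathrm{km/ly}}{723\ \mathrm{km/s}} \approx 2.6\times 10^{15}\,\mathrm{s} \approx 8\times 10^{7}\ \mathrm{yr},
\]
the same order as the roughly 100-million-year flight time inferred from the proper-motion study.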
Mechanism of ejection from the Milky Way
Studies say that a triple-star system was traveling through the center of the Milky Way when it came too close to the Galactic Center (which is thought to have a giant black hole). One of the stars was captured by the black hole causing the other two to be ejected from the Milky Way, where they merged to form a hot blue star. The star is moving at a speed of about 723 km/s, about three times faster than the Sun's orbital velocity around the galaxy's center, and also faster than the galaxy's escape velocity.
The star is about 200,000 light years from the galaxy's center. Some doubt has surrounded the previous studies based on the speed and position of HE 0437–5439. The star would have to be at least 100 million years old to have traveled that distance from the galactic core, yet its mass and blue color indicate that it had burned only for 20 million years. These observations led to the explanation that it was part of a triple-star system consisting of two closely bound stars and one outer star. The black hole pulled the outer star away, which granted the star's momentum to the tight binary system and boosted both stars to escape velocity from the galaxy. As the stars traveled away, they went into normal stellar evolution, with one of them becoming a red giant and engulfing the other and forming one giant star — a blue straggler.
In 2008, a team of astronomers found a match between the star's chemical composition and the characteristics of stars in the Large Magellanic Cloud. Support that the star originated in the LMC was strengthened because the star is only 65,000 light years away from the nearby galaxy. Later observations using the Hubble Space Telescope showed that the star originated from the Milky Way's galactic center.
See also
Intergalactic star
Stellar kinematics
Blue straggler
References
External links
HE 0437-5439 on NASA
Two short video clips on the HE 0437-5439 hypervelocity star LectureMaker film crew produces an introduction and an interview with Dr. Warren R. Brown
B-type main-sequence stars
Blue stragglers
Dorado
Extragalactic stars
Intergalactic stars
Hypervelocity stars
? | HE 0437-5439 | [
"Astronomy"
] | 852 | [
"Dorado",
"Constellations"
] |
3,124,882 | https://en.wikipedia.org/wiki/Nursery%20habitat | In marine environments, a nursery habitat is a subset of all habitats where juveniles of a species occur, having a greater level of productivity per unit area than other juvenile habitats (Beck et al. 2001). Mangroves, salt marshes and seagrass are typical nursery habitats for a range of marine species. Some species will use nonvegetated sites, such as the yellow-eyed mullet, blue sprat and flounder.
Overview
The nursery habitat hypothesis states that the contribution per unit area of a nursery habitat is greater than that of other habitats used by juveniles of the species. Productivity may be measured by density, survival, growth and movement to adult habitat (Beck et al. 2001).
There are two general models for the location of juvenile habitats within the total range for a species, which reflect the life history strategies of the species: the classic concept, in which juveniles and adults occupy separate habitats and juveniles migrate to the adult habitat, and the general concept, in which juvenile and adult habitats overlap.
Some marine species do not have juvenile habitats, e.g. arthropods and scallops. Fish, eels, some lobsters, blue crabs (and so forth) do have distinct juvenile habitats, whether with or without overlap with adult habitats.
In terms of management, use of the nursery role hypothesis may be limiting as it excludes some potentially important nursery sites. In these cases the Effective Juvenile Habitat concept may be more useful. This defines a nursery as that which supplies a higher percentage of individuals to adult populations.
Identification and subsequent management of nursery habitats may be important in supporting off-shore fisheries and ensuring species survival into the future. If we are unable to preserve nursery habitats, recruitment of juveniles into adult populations may decline, reducing population numbers and compromising the survival of species for biodiversity and human harvesting.
Determination
In order to determine the nursery habitat for a species, all habitats used by juveniles must be surveyed. This may include kelp forest, seagrass, mangroves, tidal flat, mudflat, wetland, salt marsh and oyster reef. While density may be an indicator of productivity, it is suggested that alone, density does not adequately provide evidence of the role of a habitat as a nursery. Recruitment biomass from juvenile to adult population is the best measure of movement between the two habitats.
Consider also biotic, abiotic and landscape variability in the value of nursery habitats. This may be an important consideration when looking at which sites to manage and protect. Biotic factors include: structural complexity, food availability, larval settlement cues, competition, and predation. Abiotic: temperature, salinity, depth, dissolved oxygen, freshwater inflow, retention zone and disturbance. Landscape factors involve: proximity of juvenile and adult habitats, access to larvae, number of adjacent habitats, patch shape, area and fragmentation. The effects of these factors may be positive or negative depending on species and broader environmental conditions at any given time.
It may be more holistic to consider temporal variation in habitats used as nurseries, and incorporating temporal scales into any testing is important. Also consider assemblages of species. Single species approaches may not be able to be used to adequately manage systems appropriately.
Acosta and Butler conducted experimental observation of spiny lobster to determine which habitats are used as nurseries. Mangroves are used as preferred nursery habitat when coral density is low. Predation on newly settled larvae was lower in mangrove than in seagrass beds and coral crevices. In comparison, pipefish prefer seagrass over algae and sand habitats. King George whiting have a more complex pattern of development. Settlement is preferred in seagrass and algae. Growth stages are primarily spent in reef algae. Four months post-settlement, they move into unvegetated habitats (Jenkins and Wheatley, 1998).
Elusive Juvenile Habitats
For many fish species, including commercially exploited species that require careful management, juvenile habitats are unknown. In these cases, identifying nursery habitats requires knowledge of the spawning behavior and larval development of the species, and knowledge of the oceanography of the local marine environment (water currents; temperature, salinity, and density gradients; etc.). In combination, these sources of information can be used to predict where eggs go after spawning, where larvae hatch, and where larvae settle and metamorphose into juveniles. Further study of these settlement locations can identify the nursery habitats that should be considered in the management and conservation of the species.
For example, pelagic broadcast spawning, one of several spawning strategies known for marine species, occurs when eggs are released into some level of the water column and left to drift among the plankton until the larvae hatch and grow large enough to settle in nursery habitats and become juveniles after metamorphosis. To identify nursery habitats of pelagic broadcast spawning species, such as halibut, cod, grouper, and others, the first step is to identify the adult spawning grounds. This can be done with targeted fishing surveys and dissection of fish gonads for maturity stage. The location of the fish with mature (i.e. ready-to-spawn) gonads can be inferred as a spawning location.
Pelagic eggs are buoyant or semi-buoyant and will be subject to the currents and gradients at the level of the water column in which they were released. Plankton surveys at different depths above the spawning grounds of a species can be used to determine where in the water column the eggs have been released. Data on the water currents and environmental gradients at the same depths as the pelagic eggs can be incorporated into circulation models and used to calculate likely dispersal patterns for the eggs and subsequent larvae.
Information on the duration of larval development (i.e. the number of days it takes for an individual to develop into each larval life stage) can indicate how long the species remains in the water column and the distance the species may travel once it has reached a motile life stage instead of passively drifting. The knowledge of such larval movement capability can inform the likelihood that areas represent nursery habitats.
Other relevant information for identifying elusive nursery grounds is the presence or absence of appropriate prey for settling larvae and young juveniles, the presence or absence of predators, and the preferred environmental thresholds (temperature, salinity, etc.). Habitats that do not contain the properties necessary to support a juvenile of the given species are not likely to be nursery habitats, even if models of egg and larval dispersal indicate the possibility of settlement in those areas.
Bibliography
Beck, M. W., Heck Jr, K. L., Able, K. W., Childers, D. L., Eggleston, D. B., Gillanders, B. M., ... & Orth, R. J. (2001). The identification, conservation, and management of estuarine and marine nurseries for fish and invertebrates: a better understanding of the habitats that serve as nurseries for marine species and the factors that create site-specific variability in nursery quality will improve conservation and management of these areas. Bioscience, 51(8), 633–641.
Bradbury, I.R., Snelgrove, P.V.R., 2001. Contrasting larval transport in demersal fish and benthic invertebrates: The roles of behaviour and advective processes in determining spatial pattern. Canadian Journal of Fisheries and Aquatic Sciences 58, 811–823.
Hoagstrom, C.W., Turner, T.F., 2015. Recruitment ecology of pelagic-broadcast spawning minnows: Paradigms from the ocean advance science and conservation of an imperilled freshwater fauna. Fish and Fisheries 16, 282–299.
Pepin, P., Helbig, J.A., 1997. Distribution and drift of Atlantic cod (Gadus morhua) eggs and larvae on the northeast Newfoundland Shelf. Canadian Journal of Fisheries and Aquatic Sciences 54, 670–685.
Schwarz, A., Morrison, M., Hawes, I., & Halliday, J. (2006). Physical and biological characteristics of a rare marine habitat: sub-tidal seagrass beds of offshore islands. Science for Conservation 269, p. 39. Department of Conservation, New Zealand.
Aquatic ecology | Nursery habitat | [
"Biology"
] | 1,688 | [
"Aquatic ecology",
"Ecosystems"
] |
3,124,930 | https://en.wikipedia.org/wiki/Hybrid%20name | In botanical nomenclature, a hybrid may be given a hybrid name, which is a special kind of botanical name, but there is no requirement that a hybrid name should be created for plants that are believed to be of hybrid origin. The International Code of Nomenclature for algae, fungi, and plants (ICNafp) provides the following options in dealing with a hybrid:
A hybrid may get a name if the author considers it necessary (in practice, authors tend to use this option for naturally occurring hybrids), but it is recommended to use parents' names as they are more informative (art. H.10B.1).
A hybrid may also be indicated by a formula listing the parents. Such a formula uses the multiplication sign "×" to link the parents.
"It is usually preferable to place the names or epithets in a formula in alphabetical order. The direction of a cross may be indicated by including the sexual symbols (♀: female; ♂: male) in the formula, or by placing the female parent first. If a non-alphabetical sequence is used, its basis should be clearly indicated." (H.2A.1)
Grex names can be given to orchid hybrids.
A hybrid name is treated like other botanical names, for most purposes, but differs in that:
A hybrid name does not necessarily refer to a morphologically distinctive group, but applies to all progeny of the parents, no matter how much they vary.
E.g., Magnolia × soulangeana applies to all progeny from the cross Magnolia denudata × Magnolia liliiflora, and from the crosses of all their progeny, as well as from crosses of any of the progeny back to the parents (backcrossing). This covers quite a range in flower colour.
Grex names (for orchids only) differ in that they do not cover crosses from plants within the grex (F2 hybrids) or back-crosses (crosses between a grex member and its parent).
Hybrids can be named with ranks, like other organisms covered by the ICNafp. They are nothotaxa, from notho- (hybrid) + taxon. If the parents (or postulated parents) differ in rank, then the rank of the nothotaxon is the lowest. The names of nothospecies differ depending on whether they are derived from species within the same genus; if more than one parental genus is involved, then the nothospecies name includes a nothogenus name.
Pyrus × bretschneideri is a hybrid between two species in the genus Pyrus.
× Pyraria irregularis, in the nothogenus Pyraria, is a hybrid between Aria edulis and Pyrus communis.
Publication of names
Names of hybrids between genera (called nothogenera) can be published by specifying the names of the parent genera, but without a scientific description, and do not have a type. Nothotaxon names with the rank of a subdivision of a genus (notho-subgenus, notho-section, notho-series, etc.) are also published by listing the parent taxa and without descriptions or types.
Forms of hybrid names
A hybrid name can be indicated by:
a multiplication sign "×" placed before the name of an intergeneric hybrid or before the epithet of a species hybrid. An intervening space is optional. e.g.:
× Sorbaronia or ×Sorbaronia is the name of hybrids between the genera Sorbus and Aronia,
Iris × germanica or Iris ×germanica is a species derived by hybrid speciation
or by the prefix notho- attached to the rank (from Ancient Greek νόθος, nóthos, “bastard”)
Crataegus nothosect. Crataeguineae
Iris germanica nothovar. florentina.
The multiplication sign and the prefix notho- are not part of the actual name and are disregarded for nomenclatural purposes such as synonymy, homonymy, etc. This means that a taxonomist could decide to use either form of this name: Drosera ×anglica to emphasize that it is a hybrid, or Drosera anglica to emphasize that it is a species.
The names of intergeneric hybrids generally have a special form called a condensed formula, e.g., × Agropogon for hybrids between Agrostis and Polypogon. Hybrids involving four or more genera are formed from the name of a person, with suffix -ara, e.g., × Belleara. Names for hybrids between three genera can be either a condensed formula or formed from a person's name with suffix -ara.
Notation
The symbol used to indicate a hybrid is the multiplication sign "×". (Linnaeus originally used a different symbol, but abandoned it in favour of the multiplication sign.)
See also
Botanical nomenclature
International Code of Nomenclature for algae, fungi, and plants
Graft-chimaera names look similar, but use the addition sign "+".
Glossary of scientific naming
How to type the × symbol
Notes
References
External links
The Language of Horticulture
Botanical nomenclature
Hybrid plants | Hybrid name | [
"Biology"
] | 1,054 | [
"Botanical nomenclature",
"Plants",
"Hybrid organisms",
"Botanical terminology",
"Biological nomenclature",
"Hybrid plants"
] |
3,124,950 | https://en.wikipedia.org/wiki/Colin%20de%20Verdi%C3%A8re%20graph%20invariant | Colin de Verdière's invariant is a graph parameter for any graph G, introduced by Yves Colin de Verdière in 1990. It was motivated by the study of the maximum multiplicity of the second eigenvalue of certain Schrödinger operators.
Definition
Let G = (V, E) be a simple graph with vertex set V = {1, …, n}. Then μ(G) is the largest corank of any symmetric real n × n matrix M = (M_{i,j}) such that:
(M1) for all i, j with i ≠ j: M_{i,j} < 0 if ij ∈ E, and M_{i,j} = 0 if ij ∉ E;
(M2) M has exactly one negative eigenvalue, of multiplicity 1;
(M3) there is no nonzero matrix X = (X_{i,j}) such that MX = 0 and such that X_{i,j} = 0 whenever either i = j or M_{i,j} ≠ 0 holds.
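The following Python sketch (an illustration added for concreteness, not part of the original definition) numerically checks conditions (M1) and (M2) for a candidate matrix M on a small graph and reports its corank; the Strong Arnold Property (M3) is omitted, since verifying it requires solving a further linear system over matrices.

import numpy as np

def corank_if_valid(n, edges, M, tol=1e-9):
    """Return the corank of M if it satisfies (M1) and (M2) for the graph, else None."""
    E = {frozenset(e) for e in edges}
    # (M1): strictly negative entries exactly on edges, zero on non-adjacent pairs.
    for i in range(n):
        for j in range(i + 1, n):
            if frozenset((i, j)) in E:
                if not M[i, j] < 0:
                    return None
            elif abs(M[i, j]) > tol:
                return None
    # (M2): exactly one negative eigenvalue, counted with multiplicity.
    eigenvalues = np.linalg.eigvalsh(M)
    if int(np.sum(eigenvalues < -tol)) != 1:
        return None
    # Corank = number of (numerically) zero eigenvalues.
    return int(np.sum(np.abs(eigenvalues) <= tol))

# For the path 0-1-2, the negated adjacency matrix satisfies (M1) and (M2)
# and has corank 1, consistent with mu = 1 for linear forests.
M = np.array([[0.0, -1.0, 0.0],
              [-1.0, 0.0, -1.0],
              [0.0, -1.0, 0.0]])
print(corank_if_valid(3, [(0, 1), (1, 2)], M))   # 1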
Characterization of known graph families
Several well-known families of graphs can be characterized in terms of their Colin de Verdière invariants:
μ ≤ 0 if and only if G has only one vertex;
μ ≤ 1 if and only if G is a linear forest (a disjoint union of paths);
μ ≤ 2 if and only if G is outerplanar;
μ ≤ 3 if and only if G is planar;
μ ≤ 4 if and only if G is linklessly embeddable in R3.
These same families of graphs also show up in connections between the Colin de Verdière invariant of a graph and the structure of its complement:
If the complement of an n-vertex graph is a linear forest, then μ ≥ n − 3;
If the complement of an n-vertex graph is outerplanar, then μ ≥ n − 4;
If the complement of an n-vertex graph is planar, then μ ≥ n − 5.
Graph minors
A minor of a graph is another graph formed from it by contracting edges and by deleting edges and vertices. The Colin de Verdière invariant is minor-monotone, meaning that taking a minor of a graph can only decrease or leave unchanged its invariant:
If H is a minor of G then μ(H) ≤ μ(G).
By the Robertson–Seymour theorem, for every k there exists a finite set H of graphs such that the graphs with invariant at most k are the same as the graphs that do not have any member of H as a minor. These sets of forbidden minors are known for k ≤ 3; for k = 4 the set of forbidden minors consists of the seven graphs in the Petersen family, due to the two characterizations of the linklessly embeddable graphs as the graphs with μ ≤ 4 and as the graphs with no Petersen family minor. For k = 5 the set of forbidden minors includes the 78 graphs of the Heawood family, and it is conjectured that this list is complete.
Chromatic number
Colin de Verdière conjectured that any graph with Colin de Verdière invariant μ may be colored with at most μ + 1 colors. For instance, the linear forests have invariant 1, and can be 2-colored; the outerplanar graphs have invariant two, and can be 3-colored; the planar graphs have invariant 3, and (by the four color theorem) can be 4-colored.
For graphs with Colin de Verdière invariant at most four, the conjecture remains true; these are the linklessly embeddable graphs, and the fact that they have chromatic number at most five is a consequence of the proof by Robertson, Seymour, and Thomas of the Hadwiger conjecture for K6-minor-free graphs.
Other properties
If a graph has crossing number k, it has Colin de Verdière invariant at most k + 3. For instance, the two Kuratowski graphs K5 and K3,3 can both be drawn with a single crossing, and have Colin de Verdière invariant at most four.
Influence
The Colin de Verdière invariant is defined through a class of matrices corresponding to the graph instead of just a single matrix. Along the same lines other graph parameters can be defined and studied, such as the minimum rank, minimum semidefinite rank and minimum skew rank.
Notes
References
. Translated by Neil J. Calkin as .
Graph invariants
Graph minor theory | Colin de Verdière graph invariant | [
"Mathematics"
] | 761 | [
"Graph invariants",
"Graph minor theory",
"Mathematical relations",
"Graph theory"
] |
3,125,089 | https://en.wikipedia.org/wiki/Azumaya%20algebra | In mathematics, an Azumaya algebra is a generalization of central simple algebras to R-algebras where the base ring R need not be a field. Such a notion was introduced in a 1951 paper of Goro Azumaya, for the case where R is a commutative local ring. The notion was developed further in ring theory, and in algebraic geometry, where Alexander Grothendieck made it the basis for his geometric theory of the Brauer group in Bourbaki seminars from 1964–65. There are now several points of access to the basic definitions.
Over a ring
An Azumaya algebra over a commutative ring R is an R-algebra A obeying any of the following equivalent conditions:
There exists an R-algebra B such that the tensor product of R-algebras B ⊗_R A is Morita equivalent to R.
The R-algebra A^op ⊗_R A is Morita equivalent to R, where A^op is the opposite algebra of A.
The center of A is R, and A is separable.
A is finitely generated, faithful, and projective as an R-module, and the tensor product A ⊗_R A^op is isomorphic to End_R(A) via the map sending a ⊗ b to the endomorphism x ↦ axb of A.
Examples over a field
Over a field k, Azumaya algebras are completely classified by the Artin–Wedderburn theorem since they are the same as central simple algebras. These are the algebras isomorphic to a matrix ring M_n(D) for some division algebra D over k whose center is just k. For example, quaternion algebras provide examples of central simple algebras.
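As a concrete illustration (a standard presentation added here for context, stated under the assumption that k has characteristic different from 2 and that a and b are nonzero elements of k), the quaternion algebra (a, b)_k can be presented as a 4-dimensional central simple k-algebra:

(a,b)_k \;=\; k\langle i, j \rangle \big/ \left( i^2 = a,\; j^2 = b,\; ij = -ji \right), \qquad \text{with } k\text{-basis } 1,\ i,\ j,\ ij .

Taking k = \mathbb{R} and a = b = -1 recovers Hamilton's quaternions \mathbb{H}; when (a, b)_k is not a division algebra it is instead isomorphic to the matrix algebra M_2(k).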
Examples over local rings
Given a local commutative ring R with residue field k, an R-algebra A is Azumaya if and only if A is free of positive finite rank as an R-module and the algebra A ⊗_R k is a central simple algebra over k; hence all examples come from central simple algebras over k.
Cyclic algebras
There is a class of Azumaya algebras called cyclic algebras which generate all similarity classes of Azumaya algebras over a field K, hence all elements in the Brauer group Br(K) (defined below). Given a finite cyclic Galois field extension L/K of degree n, for every b ∈ K* and any generator σ of Gal(L/K) there is a twisted polynomial ring L[x]_σ, also denoted A(σ, b), generated by an element x such that
x^n = b
and the following commutation property holds:
x·l = σ(l)·x for all l ∈ L.
As a vector space over L, A(σ, b) has basis 1, x, x^2, …, x^(n−1), with multiplication determined by the two relations above.
Note that, given a geometrically integral variety X, there is also an associated cyclic algebra for the corresponding extension of function fields.
Brauer group of a ring
Over fields, there is a cohomological classification of Azumaya algebras using étale cohomology. In fact, this group, called the Brauer group, can also be defined as the set of similarity classes of Azumaya algebras over a ring R, where algebras A and B are similar if there is an isomorphism
A ⊗_R M_n(R) ≅ B ⊗_R M_m(R)
of rings for some natural numbers n, m. Then, this equivalence is in fact an equivalence relation, and if A is similar to A′ and B is similar to B′, then A ⊗_R B is similar to A′ ⊗_R B′, showing
[A]·[B] := [A ⊗_R B]
is a well-defined operation. This forms a group structure on the set of such equivalence classes, called the Brauer group, denoted Br(R). Another definition is given by the torsion subgroup of the étale cohomology group
H²_ét(Spec R, G_m),
which is called the cohomological Brauer group. These two definitions agree when R is a field.
Brauer group using Galois cohomology
There is another equivalent definition of the Brauer group using Galois cohomology. For a Galois field extension E/K there is a cohomological Brauer group defined as
Br(E/K) := H²_Gal(Gal(E/K), E^×)
and the cohomological Brauer group for K is defined as
Br(K) := colim_{E/K} H²_Gal(Gal(E/K), E^×),
where the colimit is taken over all finite Galois field extensions.
Computation for a local field
Over a local non-archimedean field K, such as the p-adic numbers Q_p, local class field theory gives the isomorphism of abelian groups Br(K) ≅ Q/Z.
This is because given abelian field extensions there is a short exact sequence of Galois groups
and from local class field theory, there is the following commutative diagram:
where the vertical maps are isomorphisms and the horizontal maps are injections.
n-torsion for a field
Recall that there is the Kummer sequence
1 → μ_n → G_m → G_m → 1
giving a long exact sequence in cohomology for a field K. Since Hilbert's Theorem 90 implies H¹(K, G_m) = 0, there is an associated exact sequence
0 → H²_ét(K, μ_n) → Br(K) → Br(K),
where the second map is multiplication by n, showing that the second étale cohomology group with coefficients in the n-th roots of unity μ_n is the n-torsion subgroup
H²_ét(K, μ_n) ≅ Br(K)[n].
Generators of n-torsion classes in the Brauer group over a field
The Galois symbol, or norm-residue symbol, is a map from the -torsion Milnor K-theory group to the etale cohomology group , denoted by
It comes from the composition of the cup product in etale cohomology with the Hilbert's Theorem 90 isomorphism
hence
It turns out this map factors through , whose class for is represented by a cyclic algebra . For the Kummer extension where , take a generator of the cyclic group, and construct . There is an alternative, yet equivalent construction through Galois cohomology and etale cohomology. Consider the short exact sequence of trivial -modules
The long exact sequence yields a map
For the unique character
with , there is a unique lift
and
note that the class comes from the Hilbert's Theorem 90 map. Then, since there exists a primitive root of unity, there is also a class
It turns out this is precisely the class . Because of the norm residue isomorphism theorem, is an isomorphism and the -torsion classes in are generated by the cyclic algebras .
Skolem–Noether theorem
One of the important structure results about Azumaya algebras is the Skolem–Noether theorem: given a local commutative ring R and an Azumaya algebra A over R, the only automorphisms of A are inner. Meaning, the following map is surjective:
A^* → Aut(A), u ↦ (x ↦ u x u^(−1)),
where A^* is the group of units in A. This is important because it directly relates to the cohomological classification of similarity classes of Azumaya algebras over a scheme. In particular, it implies an Azumaya algebra has structure group PGL_n for some n, and the Čech cohomology group
H¹(X, PGL_n)
gives a cohomological classification of such bundles. Then, this can be related to H²(X, G_m) using the exact sequence
1 → G_m → GL_n → PGL_n → 1.
It turns out the image of H¹(X, PGL_n) → H²(X, G_m) is a subgroup of the torsion subgroup of H²(X, G_m).
On a scheme
An Azumaya algebra on a scheme X with structure sheaf , according to the original Grothendieck seminar, is a sheaf of -algebras that is étale locally isomorphic to a matrix algebra sheaf; one should, however, add the condition that each matrix algebra sheaf is of positive rank. This definition makes an Azumaya algebra on into a 'twisted-form' of the sheaf . Milne, Étale Cohomology, starts instead from the definition that it is a sheaf of -algebras whose stalk at each point is an Azumaya algebra over the local ring in the sense given above.
Two Azumaya algebras A_1 and A_2 are equivalent if there exist locally free sheaves E_1 and E_2 of finite positive rank at every point such that
A_1 ⊗ End(E_1) ≃ A_2 ⊗ End(E_2),
where End(E_i) is the endomorphism sheaf of E_i. The Brauer group Br(X) of X (an analogue of the Brauer group of a field) is the set of equivalence classes of Azumaya algebras. The group operation is given by tensor product, and the inverse is given by the opposite algebra. Note that this is distinct from the cohomological Brauer group, which is defined as H²_ét(X, G_m).
Example over Spec(Z[1/n])
The construction of a quaternion algebra over a field can be globalized to Spec(Z[1/n]) by considering the noncommutative Z[1/n]-algebra
A_{a,b} = Z[1/n]⟨i, j⟩ / (i² = a, j² = b, ij = −ji);
then, as a sheaf of algebras over Spec(Z[1/n]), A_{a,b} has the structure of an Azumaya algebra. The reason for restricting to this open affine set is that the quaternion algebra is a division algebra over the point (p) if and only if the Hilbert symbol (a, b)_p equals −1; this happens at only finitely many primes, and at all the others the algebra is split.
Example over Pn
Over Azumaya algebras can be constructed as for an Azumaya algebra over a field . For example, the endomorphism sheaf of is the matrix sheaf
so an Azumaya algebra over can be constructed from this sheaf tensored with an Azumaya algebra over , such as a quaternion algebra.
Applications
There have been significant applications of Azumaya algebras in diophantine geometry, following work of Yuri Manin. The Manin obstruction to the Hasse principle is defined using the Brauer group of schemes.
See also
Gerbe
Class field theory
Algebraic K-theory
Motivic cohomology
Norm residue isomorphism theorem
References
Brauer group and Azumaya algebras
Milne, John. Etale cohomology. Ch IV
Mathoverflow thread on "Explicit examples of Azumaya algebras"
Division algebras
Ring theory
Scheme theory
Algebras | Azumaya algebra | [
"Mathematics"
] | 1,757 | [
"Mathematical structures",
"Algebras",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
3,125,116 | https://en.wikipedia.org/wiki/Inverted%20index | In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines. Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204.
There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created.
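To make the two variants concrete, the following Python sketch (an illustrative example; the documents and identifiers are made up) builds a record-level and a word-level index from a tiny corpus:

from collections import defaultdict

documents = {
    0: "it is what it is",
    1: "what is it",
    2: "it is a banana",
}

record_level = defaultdict(set)   # term -> set of document ids
word_level = defaultdict(list)    # term -> list of (document id, position) pairs

for doc_id, text in documents.items():
    for position, term in enumerate(text.split()):
        record_level[term].add(doc_id)
        word_level[term].append((doc_id, position))

print(sorted(record_level["banana"]))   # [2]
print(word_level["what"])               # [(0, 2), (1, 0)]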
Applications
The inverted index data structure is a central component of a typical search engine indexing algorithm. A goal of a search engine implementation is to optimize the speed of the query: find the documents where word X occurs. Once a forward index is developed, which stores lists of words per document, it is next inverted to develop an inverted index. Querying the forward index would require sequential iteration through each document and to each word to verify a matching document. The time, memory, and processing resources to perform such a query are not always technically realistic. Instead of listing the words per document in the forward index, the inverted index data structure is developed which lists the documents per word.
With the inverted index created, the query can be resolved by jumping to the word ID (via random access) in the inverted index.
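Continuing the sketch above, the conjunctive query "find the documents where both terms occur" is answered by looking up each term's postings and intersecting them, rather than scanning every document (the terms used here are again made up):

def and_query(index, terms):
    """Intersect the postings sets of the query terms in a record-level index."""
    postings = [index[term] for term in terms]
    if not postings:
        return set()
    result = set(postings[0])
    for p in postings[1:]:
        result &= p
    return result

print(sorted(and_query(record_level, ["what", "is"])))   # [0, 1]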
In pre-computer times, concordances to important books were manually assembled. These were effectively inverted indexes with a small amount of accompanying commentary that required a tremendous amount of effort to produce.
In bioinformatics, inverted indexes are very important in the sequence assembly of short fragments of sequenced DNA. One way to find the source of a fragment is to search for it against a reference DNA sequence. A small number of mismatches (due to differences between the sequenced DNA and reference DNA, or errors) can be accounted for by dividing the fragment into smaller fragments—at least one subfragment is likely to match the reference DNA sequence. The matching requires constructing an inverted index of all substrings of a certain length from the reference DNA sequence. Since the human DNA contains more than 3 billion base pairs, and we need to store a DNA substring for every index and a 32-bit integer for index itself, the storage requirement for such an inverted index would probably be in the tens of gigabytes.
Compression
For historical reasons, inverted list compression and bitmap compression were developed as separate lines of research, and only later were recognized as solving essentially the same problem.
See also
Index (search engine)
Reverse index
Vector space model
References
External links
NIST's Dictionary of Algorithms and Data Structures: inverted index
Managing Gigabytes for Java a free full-text search engine for large document collections written in Java.
Lucene - Apache Lucene is a full-featured text search engine library written in Java.
Sphinx Search - Open source high-performance, full-featured text search engine library used by craigslist and others employing an inverted index.
Example implementations on Rosetta Code
Caltech Large Scale Image Search Toolbox: a Matlab toolbox implementing Inverted File Bag-of-Words image search.
Data management
Search algorithms
Database index techniques
Substring indices | Inverted index | [
"Technology"
] | 810 | [
"Data management",
"Data"
] |
3,125,155 | https://en.wikipedia.org/wiki/Left-right%20planarity%20test | In graph theory, a branch of mathematics, the left-right planarity test
or de Fraysseix–Rosenstiehl planarity criterion is a characterization of planar graphs based on the properties of depth-first search trees, published by Hubert de Fraysseix and Pierre Rosenstiehl, and used by them with Patrice Ossona de Mendez to develop a linear-time planarity testing algorithm. In a 2003 experimental comparison of six planarity testing algorithms, this was one of the fastest algorithms tested.
T-alike and T-opposite edges
For any depth-first search of a graph G, the edges
encountered when discovering a vertex for the first time define a depth-first search tree T of G. This is a Trémaux tree, meaning that the remaining edges (the cotree) each connect a pair of vertices that are related to each other as an ancestor and descendant in T. Three types of patterns can be used to define two relations between pairs of cotree edges, named the T-alike and T-opposite relations.
In the following figures, simple circle nodes represent vertices, double circle nodes represent subtrees, twisted segments represent tree paths, and curved arcs represent cotree edges. The root of each tree is shown at the bottom of the figure. In the first figure, the edges labeled and are T-alike, meaning that, at the endpoints nearest the root of the tree, they will both be on the same side of the tree in every planar drawing. In the next two figures, the edges with the same labels are T-opposite, meaning that they will be on different sides of the tree in every planar drawing.
The characterization
Let G be a graph and let T be a Trémaux tree of G. The graph G is planar if and only if there exists a partition of the cotree edges of G into two classes so that any two edges belong to the same class if they are T-alike and any two edges belong to different classes if they are T-opposite.
This characterization immediately leads to an (inefficient) planarity test: determine for all pairs of edges whether they are T-alike or T-opposite, form an auxiliary graph that has a vertex for each
connected component of T-alike edges and an edge for each pair of T-opposite edges, and check whether this auxiliary graph is bipartite. Making this algorithm efficient involves finding a subset of the T-alike and T-opposite pairs that is sufficient to carry out this method without determining the relation between all edge pairs in the input graph.
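As an illustration of this inefficient test (a sketch only; it assumes the T-alike and T-opposite pairs of cotree edges have already been computed, and the edge labels are arbitrary), the following Python function checks whether a consistent two-class partition exists by two-colouring the constraint graph:

from collections import defaultdict, deque

def partition_exists(alike_pairs, opposite_pairs):
    """Return True if the cotree edges can be split into two classes so that
    T-alike pairs share a class and T-opposite pairs are separated."""
    # Constraint graph: vertices are cotree edges; an arc carries weight 0 for
    # "same class" (T-alike) or weight 1 for "different class" (T-opposite).
    graph = defaultdict(list)
    for a, b in alike_pairs:
        graph[a].append((b, 0))
        graph[b].append((a, 0))
    for a, b in opposite_pairs:
        graph[a].append((b, 1))
        graph[b].append((a, 1))
    colour = {}
    for start in list(graph):
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, w in graph[u]:
                expected = colour[u] ^ w
                if v not in colour:
                    colour[v] = expected
                    queue.append(v)
                elif colour[v] != expected:
                    return False   # contradictory constraints: the graph is non-planar
    return True

# Two cotree edges that are both T-alike and T-opposite force non-planarity:
print(partition_exists([("e1", "e2")], [("e1", "e2")]))   # False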
References
Further reading
Topological graph theory
Planar graphs | Left-right planarity test | [
"Mathematics"
] | 529 | [
"Statements about planar graphs",
"Planar graphs",
"Graph theory",
"Topology",
"Mathematical relations",
"Planes (geometry)",
"Topological graph theory"
] |
12,057,519 | https://en.wikipedia.org/wiki/Internet%20of%20things | Internet of things (IoT) describes devices with sensors, processing ability, software and other technologies that connect and exchange data with other devices and systems over the Internet or other communication networks. The Internet of things encompasses electronics, communication, and computer science engineering. "Internet of things" has been considered a misnomer because devices do not need to be connected to the public internet; they only need to be connected to a network and be individually addressable.
The field has evolved due to the convergence of multiple technologies, including ubiquitous computing, commodity sensors, and increasingly powerful embedded systems, as well as machine learning. Older fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation), independently and collectively enable the Internet of things. In the consumer market, IoT technology is most synonymous with "smart home" products, including devices and appliances (lighting fixtures, thermostats, home security systems, cameras, and other home appliances) that support one or more common ecosystems and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. IoT is also used in healthcare systems.
There are a number of concerns about the risks in the growth of IoT technologies and products, especially in the areas of privacy and security, and consequently there have been industry and government moves to address these concerns, including the development of international and local standards, guidelines, and regulatory frameworks. Because of their interconnected nature, IoT devices are vulnerable to security breaches and privacy concerns. At the same time, the way these devices communicate wirelessly creates regulatory ambiguities, complicating jurisdictional boundaries of the data transfer.
Background
Around 1972, for use at its remote site, the Stanford Artificial Intelligence Laboratory developed a computer-controlled vending machine, adapted from a machine rented from Canteen Vending, which sold items for cash or, through a computer terminal (a Teletype Model 33 KSR), on credit. Products included, at least, beer, yogurt, and milk. It was called the Prancing Pony, after the name of the room it stood in, itself named after an inn in Tolkien's The Lord of the Rings, as each room at the Stanford Artificial Intelligence Laboratory was named after a place in Middle-earth. A successor version still operates in the Computer Science Department at Stanford, with both hardware and software having been updated.
History
In 1982, an early concept of a network-connected smart device was built as an Internet interface for sensors installed in the Carnegie Mellon University Computer Science Department's departmental Coca-Cola vending machine. Stocked by graduate student volunteers, the machine reported its temperature and inventory status, inspired by the computer-controlled vending machine in the Prancing Pony room at the Stanford Artificial Intelligence Laboratory. First accessible only on the CMU campus, it became the first ARPANET-connected appliance.
Mark Weiser's 1991 paper on ubiquitous computing, "The Computer of the 21st Century", as well as academic venues such as UbiComp and PerCom produced the contemporary vision of the IoT. In 1994, Reza Raji described the concept in IEEE Spectrum as "[moving] small packets of data to a large set of nodes, so as to integrate and automate everything from home appliances to entire factories". Between 1993 and 1997, several companies proposed solutions like Microsoft's at Work or Novell's NEST. The field gained momentum when Bill Joy envisioned device-to-device communication as a part of his "Six Webs" framework, presented at the World Economic Forum at Davos in 1999.
The concept of the "Internet of things" and the term itself first appeared in a speech by Peter T. Lewis to the Congressional Black Caucus Foundation Legislative Weekend in Washington, D.C., published in September 1985. According to Lewis, "The Internet of Things, or IoT, is the integration of people, processes and technology with connectable devices and sensors to enable remote monitoring, status, manipulation and evaluation of trends of such devices."
The term "Internet of things" was coined independently by Kevin Ashton of Procter & Gamble, later of MIT's Auto-ID Center, in 1999, though he prefers the phrase "Internet for things". At that point, he viewed radio-frequency identification (RFID) as essential to the Internet of things, which would allow computers to manage all individual things. The main theme of the Internet of things is to embed short-range mobile transceivers in various gadgets and daily necessities to enable new forms of communication between people and things, and between things themselves.
In 2004 Cornelius "Pete" Peterson, CEO of NetSilicon, predicted that, "The next era of information technology will be dominated by [IoT] devices, and networked devices will ultimately gain in popularity and significance to the extent that they will far exceed the number of networked computers and workstations." Peterson believed that medical devices and industrial controls would become dominant applications of the technology.
Defining the Internet of things as "simply the point in time when more 'things or objects' were connected to the Internet than people", Cisco Systems estimated that the IoT was "born" between 2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010.
Applications
The extensive set of applications for IoT devices is often divided into consumer, commercial, industrial, and infrastructure spaces.
Consumers
A growing portion of IoT devices is created for consumer use, including connected vehicles, home automation, wearable technology, connected health, and appliances with remote monitoring capabilities.
Home automation
IoT devices are a part of the larger concept of home automation, which can include lighting, heating and air conditioning, media and security systems and camera systems. Long-term benefits could include energy savings by automatically ensuring lights and electronics are turned off or by making the residents in the home aware of usage.
A smart home or automated home could be based on a platform or hubs that control smart devices and appliances. For instance, using Apple's HomeKit, manufacturers can have their home products and accessories controlled by an application in iOS devices such as the iPhone and the Apple Watch. This could be a dedicated app or iOS native applications such as Siri. This can be demonstrated in the case of Lenovo's Smart Home Essentials, which is a line of smart home devices that are controlled through Apple's Home app or Siri without the need for a Wi-Fi bridge. There are also dedicated smart home hubs that are offered as standalone platforms to connect different smart home products. These include the Amazon Echo, Google Home, Apple's HomePod, and Samsung's SmartThings Hub. In addition to the commercial systems, there are many non-proprietary, open source ecosystems, including Home Assistant, OpenHAB and Domoticz.
Elder care
One key application of a smart home is to assist the elderly and disabled. These home systems use assistive technology to accommodate an owner's specific disabilities. Voice control can assist users with sight and mobility limitations while alert systems can be connected directly to cochlear implants worn by hearing-impaired users. They can also be equipped with additional safety features, including sensors that monitor for medical emergencies such as falls or seizures. Smart home technology applied in this way can provide users with more freedom and a higher quality of life.
Organizations
The term "Enterprise IoT" refers to devices used in business and corporate settings.
Medical and healthcare
The Internet of Medical Things (IoMT) is an application of the IoT for medical and health-related purposes, data collection and analysis for research, and monitoring. The IoMT has been referenced as "Smart Healthcare", as the technology for creating a digitized healthcare system, connecting available medical resources and healthcare services.
IoT devices can be used to enable remote health monitoring and emergency notification systems. These health monitoring devices can range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants, such as pacemakers, Fitbit electronic wristbands, or advanced hearing aids. Some hospitals have begun implementing "smart beds" that can detect when they are occupied and when a patient is attempting to get up. A smart bed can also adjust itself to ensure appropriate pressure and support are applied to the patient without the manual interaction of nurses. A 2015 Goldman Sachs report indicated that healthcare IoT devices "can save the United States more than $300 billion in annual healthcare expenditures by increasing revenue and decreasing cost." Moreover, the use of mobile devices to support medical follow-up led to the creation of 'm-health', which uses analyzed health statistics.
Specialized sensors can also be equipped within living spaces to monitor the health and general well-being of senior citizens, while also ensuring that proper treatment is being administered and assisting people to regain lost mobility via therapy as well. These sensors create a network of intelligent sensors that are able to collect, process, transfer, and analyze valuable information in different environments, such as connecting in-home monitoring devices to hospital-based systems. Other consumer devices to encourage healthy living, such as connected scales or wearable heart monitors, are also a possibility with the IoT. End-to-end health monitoring IoT platforms are also available for antenatal and chronic patients, helping one manage health vitals and recurring medication requirements.
Advances in plastic and fabric electronics fabrication methods have enabled ultra-low cost, use-and-throw IoMT sensors. These sensors, along with the required RFID electronics, can be fabricated on paper or e-textiles for wireless powered disposable sensing devices. Applications have been established for point-of-care medical diagnostics, where portability and low system-complexity is essential.
IoMT was not only being applied in the clinical laboratory industry, but also in the healthcare and health insurance industries. IoMT in the healthcare industry is now permitting doctors, patients, and others, such as guardians of patients, nurses, families, and similar, to be part of a system, where patient records are saved in a database, allowing doctors and the rest of the medical staff to have access to patient information.
IoT devices within healthcare are structured in a multi-layered architecture. Initially, data is collected from the devices and sensors within the IoT. Subsequently, data is processed and stored within the institution's network. At this point, data becomes accessible for further internal and external procedures. Simultaneously, data is protected via various cybersecurity measures, such as the principle of least privilege (PoLP), data encryption for example, with the Advanced Encryption Standard (AES), intrusion detection systems (IDS), and intrusion prevention systems (IPS). Lastly, data can be accessed by users on workstations or portable devices through applications like patient management software (PMS).
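As a small illustration of the encryption step (a sketch only: the device name, associated data, and key handling are made up, and a real deployment would provision and store keys through a proper key-management system), a gateway might protect a sensor reading with AES-GCM before storing or forwarding it:

import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, provisioned and stored securely
aesgcm = AESGCM(key)

reading = json.dumps({"device": "bp-monitor-17", "systolic": 121, "diastolic": 78}).encode()
nonce = os.urandom(12)                      # 96-bit nonce, unique per message
ciphertext = aesgcm.encrypt(nonce, reading, b"ward-3")

# The receiving system decrypts with the same key, nonce, and associated data.
assert aesgcm.decrypt(nonce, ciphertext, b"ward-3") == reading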
IoMT in the insurance industry provides access to better and new types of dynamic information. This includes sensor-based solutions such as biosensors, wearables, connected health devices, and mobile apps to track customer behavior. This can lead to more accurate underwriting and new pricing models.
The application of the IoT in healthcare plays a fundamental role in managing chronic diseases and in disease prevention and control. Remote monitoring is made possible through the connection of powerful wireless solutions. The connectivity enables health practitioners to capture patient's data and apply complex algorithms in health data analysis.
Transportation
The IoT can assist in the integration of communications, control, and information processing across various transportation systems. Application of the IoT extends to all aspects of transportation systems (i.e., the vehicle, the infrastructure, and the driver or user). Dynamic interaction between these components of a transport system enables inter- and intra-vehicular communication, smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, safety, and road assistance.
V2X communications
In vehicular communication systems, vehicle-to-everything communication (V2X), consists of three main components: vehicle-to-vehicle communication (V2V), vehicle-to-infrastructure communication (V2I) and vehicle to pedestrian communications (V2P). V2X is the first step to autonomous driving and connected road infrastructure.
Home automation
IoT devices can be used to monitor and control the mechanical, electrical and electronic systems used in various types of buildings (e.g., public and private, industrial, institutions, or residential) in home automation and building automation systems. In this context, three main areas are being covered in literature:
The integration of the Internet with building energy management systems to create energy-efficient and IOT-driven "smart buildings".
The possible means of real-time monitoring for reducing energy consumption and monitoring occupant behaviors.
The integration of smart devices in the built environment and how they might be used in future applications.
Industrial
Also known as IIoT, industrial IoT devices acquire and analyze data from connected equipment, operational technology (OT), locations, and people. Combined with operational technology (OT) monitoring devices, IIoT helps regulate and monitor industrial systems. Also, the same implementation can be carried out for automated record updates of asset placement in industrial storage units as the size of the assets can vary from a small screw to the whole motor spare part, and misplacement of such assets can cause a loss of manpower time and money.
Manufacturing
The IoT can connect various manufacturing devices equipped with sensing, identification, processing, communication, actuation, and networking capabilities. Network control and management of manufacturing equipment, asset and situation management, or manufacturing process control allow IoT to be used for industrial applications and smart manufacturing. IoT intelligent systems enable rapid manufacturing and optimization of new products and rapid response to product demands.
Digital control systems to automate process controls, operator tools and service information systems to optimize plant safety and security are within the purview of the IIoT. IoT can also be applied to asset management via predictive maintenance, statistical evaluation, and measurements to maximize reliability. Industrial management systems can be integrated with smart grids, enabling energy optimization. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by networked sensors.
In addition to general manufacturing, IoT is also used for processes in the industrialization of construction.
Agriculture
There are numerous IoT applications in farming such as collecting data on temperature, rainfall, humidity, wind speed, pest infestation, and soil content. This data can be used to automate farming techniques, take informed decisions to improve quality and quantity, minimize risk and waste, and reduce the effort required to manage crops. For example, farmers can now monitor soil temperature and moisture from afar and even apply IoT-acquired data to precision fertilization programs. The overall goal is that data from sensors, coupled with the farmer's knowledge and intuition about his or her farm, can help increase farm productivity, and also help reduce costs.
In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using the Microsoft Azure application suite for IoT technologies related to water management. Developed in part by researchers from Kindai University, the water pump mechanisms use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide. The FarmBeats project from Microsoft Research that uses TV white space to connect farms is also a part of the Azure Marketplace now.
Maritime
IoT devices are in use to monitor the environments and systems of boats and yachts. Many pleasure boats are left unattended for days in summer, and months in winter so such devices provide valuable early alerts of boat flooding, fire, and deep discharge of batteries. The use of global Internet data networks such as Sigfox, combined with long-life batteries, and microelectronics allows the engine rooms, bilge, and batteries to be constantly monitored and reported to connected Android & Apple applications for example.
Infrastructure
Monitoring and controlling operations of sustainable urban and rural infrastructures like bridges, railway tracks and on- and offshore wind farms is a key application of the IoT. The IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk. The IoT can benefit the construction industry by cost-saving, time reduction, better quality workday, paperless workflow and increase in productivity. It can help in taking faster decisions and saving money in Real-Time Data Analytics. It can also be used for scheduling repair and maintenance activities efficiently, by coordinating tasks between different service providers and users of these facilities. IoT devices can also be used to control critical infrastructure like bridges to provide access to ships. The usage of IoT devices for monitoring and operating infrastructure is likely to improve incident management and emergency response coordination, and quality of service, up-times and reduce costs of operation in all infrastructure-related areas. Even areas such as waste management can benefit.
Metropolitan scale deployments
There are several planned or ongoing large-scale deployments of the IoT to enable better management of cities and systems. For example, Songdo, South Korea, the first of its kind fully equipped and wired smart city, is gradually being built, with approximately 70 percent of the business district completed.
In 2014, a similar deployment was under way in Santander, Spain. For this deployment, two approaches were adopted. This city of 180,000 inhabitants had already seen 18,000 downloads of its city smartphone app. The app is connected to 10,000 sensors that enable services like parking search and environmental monitoring. City context information is used in this deployment to benefit merchants through a spark-deals mechanism based on city behavior, aimed at maximizing the impact of each notification.
Other examples of large-scale deployments underway include the Sino-Singapore Guangzhou Knowledge City; work on improving air and water quality, reducing noise pollution, and increasing transportation efficiency in San Jose, California; and smart traffic management in western Singapore. Using its RPMA (Random Phase Multiple Access) technology, San Diego–based Ingenu has built a nationwide public network for low-bandwidth data transmissions using the same unlicensed 2.4 gigahertz spectrum as Wi-Fi. Ingenu's "Machine Network" covers more than a third of the US population across 35 major cities including San Diego and Dallas. French company, Sigfox, commenced building an Ultra Narrowband wireless data network in the San Francisco Bay Area in 2014, the first business to achieve such a deployment in the U.S. It subsequently announced it would set up a total of 4000 base stations to cover a total of 30 cities in the U.S. by the end of 2016, making it the largest IoT network coverage provider in the country thus far. Cisco also participates in smart cities projects. Cisco has deployed technologies for Smart Wi-Fi, Smart Safety & Security, Smart Lighting, Smart Parking, Smart Transports, Smart Bus Stops, Smart Kiosks, Remote Expert for Government Services (REGS) and Smart Education in the five km area in the city of Vijaywada, India.
Another example of a large deployment is the one completed by New York Waterways in New York City to connect all the city's vessels and be able to monitor them live 24/7. The network was designed and engineered by Fluidmesh Networks, a Chicago-based company developing wireless networks for critical applications. The NYWW network is currently providing coverage on the Hudson River, East River, and Upper New York Bay. With the wireless network in place, NY Waterway is able to take control of its fleet and passengers in a way that was not previously possible. New applications can include security, energy and fleet management, digital signage, public Wi-Fi, paperless ticketing and others.
Energy management
Significant numbers of energy-consuming devices (e.g. lamps, household appliances, motors, pumps, etc.) already integrate Internet connectivity, which can allow them to communicate with utilities not only to balance power generation but also to help optimize energy consumption as a whole. These devices allow for remote control by users, or central management via a cloud-based interface, and enable functions like scheduling (e.g., remotely powering on or off heating systems, controlling ovens, changing lighting conditions etc.). The smart grid is a utility-side IoT application; systems gather and act on energy and power-related information to improve the efficiency of the production and distribution of electricity. Using advanced metering infrastructure (AMI) Internet-connected devices, electric utilities not only collect data from end-users, but also manage distribution automation devices like transformers.
Environmental monitoring
Environmental monitoring applications of the IoT typically use sensors to assist in environmental protection by monitoring air or water quality, atmospheric or soil conditions, and can even include areas like monitoring the movements of wildlife and their habitats. Development of resource-constrained devices connected to the Internet also means that other applications like earthquake or tsunami early-warning systems can also be used by emergency services to provide more effective aid. IoT devices in this application typically span a large geographic area and can also be mobile. It has been argued that the standardization that IoT brings to wireless sensing will revolutionize this area.
Living Lab
Another example of integrating the IoT is the Living Lab, which integrates and combines research and innovation processes, established within a public-private-people partnership. Between 2006 and January 2024, there were over 440 Living Labs (though not all are currently active) that use the IoT to collaborate and share knowledge between stakeholders to co-create innovative and technological products. For companies to implement and develop IoT services for smart cities, they need to have incentives. Governments play key roles in smart city projects, as changes in policy help cities to implement the IoT, which provides effectiveness, efficiency, and accuracy in the resources that are being used. For instance, a government may provide tax incentives and cheap rent, improve public transport, and offer an environment where start-up companies, creative industries, and multinationals can co-create, share a common infrastructure and labor market, and take advantage of locally embedded technologies, production processes, and transaction costs.
Military
The Internet of Military Things (IoMT) is the application of IoT technologies in the military domain for the purposes of reconnaissance, surveillance, and other combat-related objectives. It is heavily influenced by the future prospects of warfare in an urban environment and involves the use of sensors, munitions, vehicles, robots, human-wearable biometrics, and other smart technology that is relevant on the battlefield.
One example of an IoT device used in the military is the Xaver 1000 system. The Xaver 1000 was developed by Israel's Camero Tech and is the latest in the company's line of "through wall imaging systems". The Xaver line uses millimeter wave (MMW) radar, or radar in the range of 30–300 gigahertz. It is equipped with an AI-based life-target tracking system as well as its own 3D 'sense-through-the-wall' technology.
Internet of Battlefield Things
The Internet of Battlefield Things (IoBT) is a project initiated and executed by the U.S. Army Research Laboratory (ARL) that focuses on the basic science related to the IoT that enhance the capabilities of Army soldiers. In 2017, ARL launched the Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA), establishing a working collaboration between industry, university, and Army researchers to advance the theoretical foundations of IoT technologies and their applications to Army operations.
Ocean of Things
The Ocean of Things project is a DARPA-led program designed to establish an Internet of things across large ocean areas for the purposes of collecting, monitoring, and analyzing environmental and vessel activity data. The project entails the deployment of about 50,000 floats that house a passive sensor suite that autonomously detect and track military and commercial vessels as part of a cloud-based network.
Product digitalization
There are several applications of smart or active packaging in which a QR code or NFC tag is affixed to a product or its packaging. The tag itself is passive; however, it contains a unique identifier (typically a URL) which enables a user to access digital content about the product via a smartphone. Strictly speaking, such passive items are not part of the Internet of things, but they can be seen as enablers of digital interactions. The term "Internet of Packaging" has been coined to describe applications in which unique identifiers are used to automate supply chains and are scanned on a large scale by consumers to access digital content. Authentication of the unique identifiers, and thereby of the product itself, is possible via a copy-sensitive digital watermark or copy detection pattern when scanning a QR code, while NFC tags can encrypt communication.
Trends and characteristics
The IoT's major significant trend in recent years is the growth of devices connected and controlled via the Internet. The wide range of applications for IoT technology mean that the specifics can be very different from one device to the next but there are basic characteristics shared by most.
The IoT creates opportunities for more direct integration of the physical world into computer-based systems, resulting in efficiency improvements, economic benefits, and reduced human exertions.
IoT Analytics reported there were 16.6 billion IoT devices connected in 2023. In 2020, the same firm projected there would be 30 billion devices connected by 2025. As of October 2024, there are around 17 billion.
Intelligence
Ambient intelligence and autonomous control are not part of the original concept of the Internet of things. Ambient intelligence and autonomous control do not necessarily require Internet structures, either. However, there is a shift in research (by companies such as Intel) to integrate the concepts of the IoT and autonomous control, with initial outcomes towards this direction considering objects as the driving force for autonomous IoT. An approach in this context is deep reinforcement learning, where most IoT systems provide a dynamic and interactive environment. Training an agent (i.e., an IoT device) to behave smartly in such an environment cannot be addressed by conventional machine learning algorithms such as supervised learning. With a reinforcement learning approach, a learning agent can sense the environment's state (e.g., the home temperature), perform actions (e.g., turning the HVAC on or off), and learn by maximizing the accumulated reward it receives in the long term.
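As a toy illustration of this idea (a minimal sketch with made-up temperatures, rewards, and room dynamics, not a production controller), tabular Q-learning can be used to learn when to switch an HVAC unit on or off so that a room stays near a comfort band:

import random

states = range(15, 31)             # discretised room temperature in degrees C
actions = ["hvac_off", "hvac_on"]  # heating example: "on" warms the room
q_table = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(temp, action):
    """Made-up room dynamics with a comfort reward centred on 21-23 degrees C."""
    next_temp = min(30, temp + 1) if action == "hvac_on" else max(15, temp - 1)
    reward = 1.0 if 21 <= next_temp <= 23 else -1.0
    return next_temp, reward

temp = 18
for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(actions)                       # explore
    else:
        action = max(actions, key=lambda a: q_table[(temp, a)])  # exploit
    next_temp, reward = step(temp, action)
    best_next = max(q_table[(next_temp, a)] for a in actions)
    q_table[(temp, action)] += alpha * (reward + gamma * best_next - q_table[(temp, action)])
    temp = next_temp

print(max(actions, key=lambda a: q_table[(18, a)]))   # typically "hvac_on" at 18 C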
IoT intelligence can be offered at three levels: IoT devices, Edge/Fog nodes, and cloud computing. The need for intelligent control and decision at each level depends on the time sensitiveness of the IoT application. For example, an autonomous vehicle's camera needs to make real-time obstacle detection to avoid an accident. This fast decision making would not be possible through transferring data from the vehicle to cloud instances and return the predictions back to the vehicle. Instead, all the operation should be performed locally in the vehicle. Integrating advanced machine learning algorithms including deep learning into IoT devices is an active research area to make smart objects closer to reality. Moreover, it is possible to get the most value out of IoT deployments through analyzing IoT data, extracting hidden information, and predicting control decisions. A wide variety of machine learning techniques have been used in IoT domain ranging from traditional methods such as regression, support vector machine, and random forest to advanced ones such as convolutional neural networks, LSTM, and variational autoencoder.
In the future, the Internet of things may be a non-deterministic and open network in which auto-organized or intelligent entities (web services, SOA components) and virtual objects (avatars) will be interoperable and able to act independently (pursuing their own objectives or shared ones) depending on the context, circumstances or environments. Autonomous behavior through the collection and reasoning of context information, as well as the object's ability to detect changes in the environment (faults affecting sensors) and introduce suitable mitigation measures, constitutes a major research trend, clearly needed to provide credibility to IoT technology. Modern IoT products and solutions in the marketplace use a variety of different technologies to support such context-aware automation, but more sophisticated forms of intelligence are required to permit sensor units and intelligent cyber-physical systems to be deployed in real environments.
Architecture
IoT system architecture, in its simplest view, consists of three tiers: Tier 1: Devices, Tier 2: the Edge Gateway, and Tier 3: the Cloud. Devices include networked things, such as the sensors and actuators found in IoT equipment, particularly those that use protocols such as Modbus, Bluetooth, Zigbee, or proprietary protocols, to connect to an Edge Gateway. The Edge Gateway layer consists of sensor data aggregation systems called Edge Gateways that provide functionality such as pre-processing of the data and securing connectivity to the cloud, using systems such as WebSockets and the event hub, and, in some cases, edge analytics or fog computing. The Edge Gateway layer is also required to give a common view of the devices to the upper layers to facilitate easier management. The final tier includes the cloud application built for IoT using the microservices architecture; such applications are usually polyglot and inherently secure, using HTTPS/OAuth. This tier includes various database systems that store sensor data, such as time series databases or asset stores using backend data storage systems (e.g. Cassandra, PostgreSQL). The cloud tier in most cloud-based IoT systems features an event queuing and messaging system that handles the communication that transpires across all tiers. Some experts classify the three tiers in the IoT system as edge, platform, and enterprise, connected by the proximity network, access network, and service network, respectively.
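The sketch below illustrates, in simplified form, the kind of pre-processing an Edge Gateway tier might perform before forwarding data to the cloud tier; the sensor readings, the outlier threshold, and the push_to_cloud stub are hypothetical placeholders rather than any specific product's API.

```python
from statistics import mean

# Hypothetical raw readings arriving from Tier 1 devices: (device_id, temperature in degrees C).
raw_readings = [
    ("sensor-a", 21.4), ("sensor-a", 21.6), ("sensor-a", 35.0),   # 35.0 is a likely outlier
    ("sensor-b", 19.9), ("sensor-b", 20.1), ("sensor-b", 20.0),
]

def aggregate(readings, outlier_threshold=30.0):
    """Tier 2 pre-processing: drop obvious outliers and average per device."""
    per_device = {}
    for device_id, value in readings:
        if value < outlier_threshold:          # crude edge-side filtering
            per_device.setdefault(device_id, []).append(value)
    return {device_id: round(mean(values), 2) for device_id, values in per_device.items()}

def push_to_cloud(batch):
    """Stand-in for a secured upload (e.g. HTTPS or a WebSocket) to the Tier 3 application."""
    print("uploading batch:", batch)

push_to_cloud(aggregate(raw_readings))
```

The design choice being illustrated is that the gateway reduces raw telemetry to a compact summary, so the cloud tier receives far less data than the devices produce.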
Building on the Internet of things, the web of things is an architecture for the application layer of the Internet of things, looking at the convergence of data from IoT devices into Web applications to create innovative use-cases. In order to program and control the flow of information in the Internet of things, one anticipated architectural direction is BPM Everywhere, a blending of traditional process management with process mining and special capabilities to automate the control of large numbers of coordinated devices.
Network architecture
The Internet of things requires huge scalability in the network space to handle the surge of devices. IETF 6LoWPAN can be used to connect devices to IP networks. With billions of devices being added to the Internet space, IPv6 will play a major role in handling the network layer scalability. IETF's Constrained Application Protocol, ZeroMQ, and MQTT can provide lightweight data transport. In practice, many groups of IoT devices are hidden behind gateway nodes and may not have unique addresses. Also, the vision of everything being interconnected is not needed for most applications, as it is mainly the data which needs interconnecting at a higher layer.
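As an illustration of lightweight data transport, the sketch below publishes a single sensor reading over MQTT. It assumes the paho-mqtt 1.x client API (newer 2.x releases changed the Client constructor) and a hypothetical broker at broker.example.com, so the hostname and topic name are placeholders rather than real endpoints.

```python
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x package is installed

BROKER = "broker.example.com"    # placeholder broker hostname
TOPIC = "home/livingroom/temperature"

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

# A small JSON payload keeps the message lightweight for constrained links.
payload = json.dumps({"device_id": "sensor-a", "celsius": 21.4})
client.publish(TOPIC, payload, qos=1)

client.loop_stop()
client.disconnect()
```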
Fog computing is a viable alternative to prevent such a large burst of data flow through the Internet. The computational power of edge devices to analyze and process data is extremely limited. Limited processing power is a key attribute of IoT devices, as their purpose is to supply data about physical objects while remaining autonomous. Heavy processing requirements use more battery power, harming the device's ability to operate. Scalability is easy because IoT devices simply supply data through the Internet to a server with sufficient processing power.
Decentralized IoT
Decentralized Internet of things, or decentralized IoT, is a modified IoT which utilizes fog computing to handle and balance requests of connected IoT devices in order to reduce loading on the cloud servers and improve responsiveness for latency-sensitive IoT applications like vital signs monitoring of patients, vehicle-to-vehicle communication of autonomous driving, and critical failure detection of industrial devices. Performance is improved, especially for huge IoT systems with millions of nodes.
Conventional IoT is connected via a mesh network and led by a major head node (a centralized controller). The head node decides how data is created, stored, and transmitted. In contrast, decentralized IoT attempts to divide IoT systems into smaller divisions. The head node delegates partial decision-making power to lower-level sub-nodes under a mutually agreed policy.
Some approaches to decentralized IoT attempt to address the limited bandwidth and hashing capacity of battery-powered or wireless IoT devices via blockchain.
Complexity
In semi-open or closed loops (i.e., value chains, whenever a global finality can be settled) the IoT will often be considered and studied as a complex system due to the huge number of different links, interactions between autonomous actors, and its capacity to integrate new actors. At the overall stage (full open loop) it will likely be seen as a chaotic environment (since systems always have finality).
As a practical approach, not all elements on the Internet of things run in a global, public space. Subsystems are often implemented to mitigate the risks of privacy, control and reliability. For example, domestic robotics (domotics) running inside a smart home might only share data within and be available via a local network. Managing and controlling a highly dynamic ad hoc network of IoT things/devices is a tough task with traditional network architectures; Software Defined Networking (SDN) provides an agile, dynamic solution that can cope with the special requirements of the diversity of innovative IoT applications.
Size considerations
The exact scale of the Internet of things is unknown, with figures of billions or trillions often quoted at the beginning of IoT articles. In 2015 there were 83 million smart devices in people's homes, a number that was expected to grow to 193 million devices by 2020.
The number of online-capable devices grew 31% from 2016 to 2017, reaching 8.4 billion.
Space considerations
In the Internet of things, the precise geographic location of a thing, and also its precise geographic dimensions, can be critical. To date, the Internet has mainly been used to manage information processed by people; facts about a thing, such as its location in time and space, have therefore been less critical to track, because the person processing the information can decide whether or not that information is important to the action being taken, and if so, add the missing information (or decide not to take the action). (Note that some things on the Internet of things will be sensors, and sensor location is usually important.) The GeoWeb and Digital Earth are applications that become possible when things can become organized and connected by location. However, the challenges that remain include the constraints of variable spatial scales, the need to handle massive amounts of data, and indexing for fast search and neighbour operations. On the Internet of things, if things are able to take actions on their own initiative, this human-centric mediation role is eliminated. Thus, the time-space context that we as humans take for granted must be given a central role in this information ecosystem. Just as standards play a key role on the Internet and the Web, geo-spatial standards will play a key role on the Internet of things.
A solution to "basket of remotes"
Many IoT devices have the potential to take a piece of this market. Jean-Louis Gassée (a member of Apple's initial alumni team and BeOS co-founder) has addressed this topic in an article on Monday Note, where he predicts that the most likely problem will be what he calls the "basket of remotes" problem, where we'll have hundreds of applications to interface with hundreds of devices that don't share protocols for speaking with one another. For improved user interaction, some technology leaders are joining forces to create standards for communication between devices to solve this problem. Others are turning to the concept of predictive interaction of devices, "where collected data is used to predict and trigger actions on the specific devices" while making them work together.
Social Internet of things
Social Internet of things (SIoT) is a new kind of IoT that focuses on the importance of social interaction and relationships between IoT devices. SIoT is a pattern in which cross-domain IoT devices enable application-to-application communication and collaboration without human intervention in order to serve their owners with autonomous services; this can only be realized with low-level architectural support from both IoT software and hardware engineering.
Social Network for IoT Devices (Not Humans)
IoT gives a device an identity, like a citizen in a community, and connects it to the Internet to provide services to its users. SIoT defines a social network in which IoT devices interact with each other for different goals in order to serve humans.
How is SIoT different from IoT?
SIoT is different from the original IoT in terms of its collaboration characteristics. IoT is passive; it was set up to serve dedicated purposes with existing IoT devices in a predetermined system. SIoT is active; it is programmed and managed by AI to serve unplanned purposes through a mix and match of potential IoT devices from different systems that benefit its users.
How does SIoT Work?
IoT devices built with sociability will broadcast their abilities or functionalities, and at the same time discover, share information with, monitor, navigate and group with other IoT devices in the same or a nearby network, realizing SIoT and facilitating useful service compositions in order to help their users proactively in everyday life, especially during emergencies.
Social IoT Examples
IoT-based smart home technology monitors the health data of patients or aging adults by analyzing their physiological parameters and prompts nearby health facilities when emergency medical services are needed. In an emergency, an ambulance from the nearest available hospital is called automatically with the pickup location provided and a ward assigned; the patient's health data is transmitted to the emergency department and displayed on the doctor's computer immediately for further action.
IoT sensors on vehicles, roads and traffic lights monitor the conditions of the vehicles and drivers, alert when attention is needed, and also coordinate themselves automatically to ensure autonomous driving is working normally. If an accident happens, an IoT camera informs the nearest hospital and police station for help.
Social IoT Challenges
The Internet of things is multifaceted and complicated. One of the main factors hindering people from adopting and using Internet of things (IoT) based products and services is its complexity. Installation and setup are a challenge for people; therefore, there is a need for IoT devices to mix, match and configure themselves automatically to provide different services in different situations.
System security is always a concern for any technology, and it is even more crucial for SIoT because not only must the security of the devices themselves be considered but also the mutual trust mechanism between collaborating IoT devices, which changes from time to time and from place to place.
Another critical challenge for SIoT is the accuracy and reliability of the sensors. In most circumstances, IoT sensors would need to respond in nanoseconds to avoid accidents, injury, and loss of life.
Enabling technologies
There are many technologies that enable the IoT. Crucial to the field is the network used to communicate between devices of an IoT installation, a role that several wireless or wired technologies may fulfill:
Addressability
The original idea of the Auto-ID Center is based on RFID-tags and distinct identification through the Electronic Product Code. This has evolved into objects having an IP address or URI. An alternative view, from the world of the Semantic Web focuses instead on making all things (not just those electronic, smart, or RFID-enabled) addressable by the existing naming protocols, such as URI. The objects themselves do not converse, but they may now be referred to by other agents, such as powerful centralised servers acting for their human owners. Integration with the Internet implies that devices will use an IP address as a distinct identifier. Due to the limited address space of IPv4 (which allows for 4.3 billion different addresses), objects in the IoT will have to use the next generation of the Internet protocol (IPv6) to scale to the extremely large address space required.
Internet-of-things devices additionally will benefit from the stateless address auto-configuration present in IPv6, as it reduces the configuration overhead on the hosts, and the IETF 6LoWPAN header compression. To a large extent, the future of the Internet of things will not be possible without the support of IPv6; and consequently, the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT in the future.
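The snippet below gives a feel for the scale of IPv6 and for stateless address auto-configuration; the 2001:db8::/64 prefix is the standard documentation prefix and the MAC address is an arbitrary example, so both are illustrative rather than drawn from any real deployment. The modified EUI-64 derivation shown (flip the universal/local bit, insert ff:fe) is the conventional scheme, sketched here in plain Python.

```python
import ipaddress

# Scale: a single standard /64 subnet already dwarfs the whole IPv4 address space.
subnet = ipaddress.IPv6Network("2001:db8::/64")   # documentation prefix
print(f"addresses in one /64 subnet: {subnet.num_addresses:e}")
print(f"addresses in all of IPv4:    {ipaddress.IPv4Network('0.0.0.0/0').num_addresses:e}")

def eui64_interface_id(mac: str) -> bytes:
    """Derive a modified EUI-64 interface identifier from a 48-bit MAC address."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    flipped = bytes([octets[0] ^ 0x02]) + octets[1:3]   # flip the universal/local bit
    return flipped + b"\xff\xfe" + octets[3:]            # insert ff:fe in the middle

# Example MAC address (illustrative only).
interface_id = eui64_interface_id("00:1a:2b:3c:4d:5e")
prefix_bytes = subnet.network_address.packed[:8]
address = ipaddress.IPv6Address(int.from_bytes(prefix_bytes + interface_id, "big"))
print("auto-configured address:", address)
```

The point of the sketch is that a device can form its own globally unique address from an advertised prefix plus its hardware identifier, with no per-device configuration.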
Application Layer
ADRC defines an application layer protocol and supporting framework for implementing IoT applications.
Short-range wireless
Bluetooth mesh networking – Specification providing a mesh networking variant to Bluetooth Low Energy (BLE) with an increased number of nodes and standardized application layer (Models).
Li-Fi (light fidelity) – Wireless communication technology similar to the Wi-Fi standard, but using visible-light communication for increased bandwidth.
Near-field communication (NFC) – Communication protocols enabling two electronic devices to communicate within a 4 cm range.
Radio-frequency identification (RFID) – Technology using electromagnetic fields to read data stored in tags embedded in other items.
Wi-Fi – Technology for local area networking based on the IEEE 802.11 standard, where devices may communicate through a shared access point or directly between individual devices.
Zigbee – Communication protocols for personal area networking based on the IEEE 802.15.4 standard, providing low power consumption, low data rate, low cost, and high throughput.
Z-Wave – Wireless communications protocol used primarily for home automation and security applications
Medium-range wireless
LTE-Advanced – High-speed communication specification for mobile networks. Provides enhancements to the LTE standard with extended coverage, higher throughput, and lower latency.
5G – 5G wireless networks can be used to achieve the high communication requirements of the IoT and connect a large number of IoT devices, even when they are on the move. There are three features of 5G that are each considered to be useful for supporting particular elements of IoT: enhanced mobile broadband (eMBB), massive machine type communications (mMTC) and ultra-reliable low latency communications (URLLC).
LoRa – Long-range, low-power protocol with a range of several kilometres in urban areas and ten kilometres or more in rural areas (line of sight).
DASH7 – Range of up to 2 km.
Long-range wireless
Low-power wide-area networking (LPWAN) – Wireless networks designed to allow long-range communication at a low data rate, reducing power and cost for transmission. Available LPWAN technologies and protocols: LoRaWAN, Sigfox, NB-IoT, Weightless, RPMA, MIoTy, IEEE 802.11ah.
Very-small-aperture terminal (VSAT) – Satellite communication technology using small dish antennas for narrowband and broadband data.
Wired
Ethernet – General purpose networking standard using twisted pair and fiber optic links in conjunction with hubs or switches.
Power-line communication (PLC) – Communication technology using electrical wiring to carry power and data. Specifications such as HomePlug or G.hn utilize PLC for networking IoT devices.
Comparison of technologies by layer
Different technologies have different roles in a protocol stack. Below is a simplified presentation of the roles of several popular communication technologies in IoT applications:
Standards and standards organizations
This is a list of technical standards for the IoT, most of which are open standards, and the standards organizations that aspire to successfully setting them.
Politics and civic engagement
Some scholars and activists argue that the IoT can be used to create new models of civic engagement if device networks can be open to user control and inter-operable platforms. Philip N. Howard, a professor and author, writes that political life in both democracies and authoritarian regimes will be shaped by the way the IoT will be used for civic engagement. For that to happen, he argues that any connected device should be able to divulge a list of the "ultimate beneficiaries" of its sensor data and that individual citizens should be able to add new organisations to the beneficiary list. In addition, he argues that civil society groups need to start developing their IoT strategy for making use of data and engaging with the public.
Government regulation
One of the key drivers of the IoT is data. The success of the idea of connecting devices to make them more efficient is dependent upon access to and storage & processing of data. For this purpose, companies working on the IoT collect data from multiple sources and store it in their cloud network for further processing. This leaves the door wide open for privacy and security dangers and single point vulnerability of multiple systems. The other issues pertain to consumer choice and ownership of data and how it is used. Though still in their infancy, regulations and governance regarding these issues of privacy, security, and data ownership continue to develop. IoT regulation depends on the country. Some examples of legislation that is relevant to privacy and data collection are: the US Privacy Act of 1974, OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data of 1980, and the EU Directive 95/46/EC of 1995.
Current regulatory environment:
A report published by the Federal Trade Commission (FTC) in January 2015 made the following three recommendations:
Data security – At the time of designing IoT companies should ensure that data collection, storage and processing would be secure at all times. Companies should adopt a "defense in depth" approach and encrypt data at each stage.
Data consent – users should have a choice as to what data they share with IoT companies and the users must be informed if their data gets exposed.
Data minimisation – IoT companies should collect only the data they need and retain the collected information only for a limited time.
However, the FTC stopped at just making recommendations for now. According to an FTC analysis, the existing framework, consisting of the FTC Act, the Fair Credit Reporting Act, and the Children's Online Privacy Protection Act, along with developing consumer education and business guidance, participation in multi-stakeholder efforts and advocacy to other agencies at the federal, state and local level, is sufficient to protect consumer rights.
A resolution passed by the Senate in March 2015 is already being considered by Congress. This resolution recognized the need for formulating a National Policy on IoT and the matter of privacy, security and spectrum. Furthermore, to provide an impetus to the IoT ecosystem, in March 2016, a bipartisan group of four Senators proposed a bill, The Developing Innovation and Growing the Internet of Things (DIGIT) Act, to direct the Federal Communications Commission to assess the need for more spectrum to connect IoT devices.
Approved on 28 September 2018, California Senate Bill No. 327 went into effect on 1 January 2020. The bill requires "a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure."
Several standards for the IoT industry are actually being established relating to automobiles because most concerns arising from use of connected cars apply to healthcare devices as well. In fact, the National Highway Traffic Safety Administration (NHTSA) is preparing cybersecurity guidelines and a database of best practices to make automotive computer systems more secure.
A recent report from the World Bank examines the challenges and opportunities in government adoption of IoT. These include –
Still early days for the IoT in government
Underdeveloped policy and regulatory frameworks
Unclear business models, despite strong value proposition
Clear institutional and capacity gap in government AND the private sector
Inconsistent data valuation and management
Infrastructure a major barrier
Government as an enabler
Most successful pilots share common characteristics (public-private partnership, local, leadership)
In early December 2021, the U.K. government introduced the Product Security and Telecommunications Infrastructure bill (PST), an effort to legislate IoT distributors, manufacturers, and importers to meet certain cybersecurity standards. The bill also seeks to improve the security credentials of consumer IoT devices.
Criticism, problems and controversies
Platform fragmentation
The IoT suffers from platform fragmentation, lack of interoperability and a lack of common technical standards, a situation where the variety of IoT devices, in terms of both hardware variations and differences in the software running on them, makes it hard to develop applications that work consistently across these inconsistent technology ecosystems. For example, wireless connectivity for IoT devices can be done using Bluetooth, Wi-Fi, Wi-Fi HaLow, Zigbee, Z-Wave, LoRa, NB-IoT, Cat M1 as well as completely custom proprietary radios – each with its own advantages, disadvantages, and unique support ecosystem.
The IoT's amorphous computing nature is also a problem for security, since patches to bugs found in the core operating system often do not reach users of older and lower-price devices. One set of researchers says that the failure of vendors to support older devices with patches and updates leaves more than 87% of active Android devices vulnerable.
Privacy, autonomy, and control
Philip N. Howard, a professor and author, writes that the Internet of things offers immense potential for empowering citizens, making government transparent, and broadening information access. Howard cautions, however, that privacy threats are enormous, as is the potential for social control and political manipulation.
Concerns about privacy have led many to consider the possibility that big data infrastructures such as the Internet of things and data mining are inherently incompatible with privacy. Key challenges of increased digitalization in the water, transport or energy sector are related to privacy and cybersecurity which necessitate an adequate response from research and policymakers alike.
Writer Adam Greenfield claims that IoT technologies are not only an invasion of public space but are also being used to perpetuate normative behavior, citing an instance of billboards with hidden cameras that tracked the demographics of passersby who stopped to read the advertisement.
The Internet of Things Council compared the increased prevalence of digital surveillance due to the Internet of things to the concept of the panopticon described by Jeremy Bentham in the 18th century. The assertion is supported by the works of French philosophers Michel Foucault and Gilles Deleuze. In Discipline and Punish: The Birth of the Prison, Foucault asserts that the panopticon was a central element of the discipline society developed during the Industrial Era. Foucault also argued that the discipline systems established in factories and schools reflected Bentham's vision of panopticism. In his 1992 paper "Postscript on the Societies of Control", Deleuze wrote that the discipline society had transitioned into a control society, with the computer replacing the panopticon as an instrument of discipline and control while still maintaining qualities similar to those of panopticism.
Peter-Paul Verbeek, a professor of philosophy of technology at the University of Twente, Netherlands, writes that technology already influences our moral decision making, which in turn affects human agency, privacy and autonomy. He cautions against viewing technology merely as a human tool and advocates instead to consider it as an active agent.
Justin Brookman, of the Center for Democracy and Technology, expressed concern regarding the impact of the IoT on consumer privacy, saying that "There are some people in the commercial space who say, 'Oh, big data – well, let's collect everything, keep it around forever, we'll pay for somebody to think about security later.' The question is whether we want to have some sort of policy framework in place to limit that."
Tim O'Reilly believes that the way companies sell the IoT devices on consumers are misplaced, disputing the notion that the IoT is about gaining efficiency from putting all kinds of devices online and postulating that the "IoT is really about human augmentation. The applications are profoundly different when you have sensors and data driving the decision-making."
Editorials at WIRED have also expressed concern, one stating "What you're about to lose is your privacy. Actually, it's worse than that. You aren't just going to lose your privacy, you're going to have to watch the very concept of privacy be rewritten under your nose."
The American Civil Liberties Union (ACLU) expressed concern regarding the ability of IoT to erode people's control over their own lives. The ACLU wrote that "There's simply no way to forecast how these immense powers – disproportionately accumulating in the hands of corporations seeking financial advantage and governments craving ever more control – will be used. Chances are big data and the Internet of Things will make it harder for us to control our own lives, as we grow increasingly transparent to powerful corporations and government institutions that are becoming more opaque to us."
In response to rising concerns about privacy and smart technology, in 2007 the British Government stated it would follow formal Privacy by Design principles when implementing their smart metering program. The program would lead to replacement of traditional power meters with smart power meters, which could track and manage energy usage more accurately. However the British Computer Society is doubtful these principles were ever actually implemented. In 2009 the Dutch Parliament rejected a similar smart metering program, basing their decision on privacy concerns. The Dutch program was later revised and passed in 2011.
Data storage
A challenge for producers of IoT applications is to clean, process and interpret the vast amount of data which is gathered by the sensors. One proposed solution for analyzing this information is the use of wireless sensor networks, in which sensor nodes share data that is then sent to a distributed system for analysis of the sensory data.
Another challenge is the storage of this bulk data. Depending on the application, there could be high data acquisition requirements, which in turn lead to high storage requirements. In 2013, the Internet was estimated to be responsible for consuming 5% of the total energy produced, and a "daunting challenge to power" IoT devices to collect and even store data still remains.
Data silos, although a common challenge of legacy systems, still commonly occur with the implementation of IoT devices, particularly within manufacturing. As there are many benefits to be gained from IoT and IIoT devices, the way in which the data is stored can present serious challenges if the principles of autonomy, transparency, and interoperability are not considered. The challenges do not arise from the devices themselves, but from the way in which databases and data warehouses are set up. These challenges were commonly identified in manufacturers and enterprises which have begun digital transformation, and are part of the digital foundation, indicating that in order to receive the optimal benefits from IoT devices and for decision making, enterprises will have to first re-align their data storing methods. These challenges were identified by Keller (2021) when investigating the IT and application landscape of I4.0 implementation within German M&E manufacturers.
Security
Security is the biggest concern in adopting Internet of things technology, with concerns that rapid development is happening without appropriate consideration of the profound security challenges involved and the regulatory changes that might be necessary. The rapid development of the Internet of Things (IoT) has allowed billions of devices to connect to the network. Due to the sheer number of connected devices and the limitations of communication security technology, various security issues have gradually appeared in the IoT.
Most of the technical security concerns are similar to those of conventional servers, workstations and smartphones. These concerns include using weak authentication, forgetting to change default credentials, unencrypted messages sent between devices, SQL injections, man-in-the-middle attacks, and poor handling of security updates. However, many IoT devices have severe operational limitations on the computational power available to them. These constraints often make them unable to directly use basic security measures such as implementing firewalls or using strong cryptosystems to encrypt their communications with other devices - and the low price and consumer focus of many devices makes a robust security patching system uncommon.
Rather than conventional security vulnerabilities, fault injection attacks are on the rise and targeting IoT devices. A fault injection attack is a physical attack on a device that purposefully introduces faults in the system to change its intended behavior. Faults might also happen unintentionally due to environmental noise and electromagnetic fields. There are approaches, stemming from control-flow integrity (CFI), to prevent fault injection attacks and to recover the system to a healthy state before the fault.
Internet of things devices also have access to new areas of data, and can often control physical devices, so that even by 2014 it was possible to say that many Internet-connected appliances could already "spy on people in their own homes" including televisions, kitchen appliances, cameras, and thermostats. Computer-controlled devices in automobiles such as brakes, engine, locks, hood and trunk releases, horn, heat, and dashboard have been shown to be vulnerable to attackers who have access to the on-board network. In some cases, vehicle computer systems are Internet-connected, allowing them to be exploited remotely. By 2008 security researchers had shown the ability to remotely control pacemakers without authority. Later hackers demonstrated remote control of insulin pumps and implantable cardioverter defibrillators.
Poorly secured Internet-accessible IoT devices can also be subverted to attack others. In 2016, a distributed denial of service attack powered by Internet of things devices running the Mirai malware took down a DNS provider and major web sites. The Mirai Botnet had infected roughly 65,000 IoT devices within the first 20 hours. Eventually the infections increased to around 200,000 to 300,000 infections. Brazil, Colombia and Vietnam made up 41.5% of the infections. The Mirai Botnet had singled out specific IoT devices that consisted of DVRs, IP cameras, routers and printers. Top vendors that contained the most infected devices were identified as Dahua, Huawei, ZTE, Cisco, ZyXEL and MikroTik. In May 2017, Junade Ali, a computer scientist at Cloudflare, noted that native DDoS vulnerabilities exist in IoT devices due to a poor implementation of the Publish–subscribe pattern. These sorts of attacks have caused security experts to view IoT as a real threat to Internet services.
The U.S. National Intelligence Council in an unclassified report maintains that it would be hard to deny "access to networks of sensors and remotely-controlled objects by enemies of the United States, criminals, and mischief makers... An open market for aggregated sensor data could serve the interests of commerce and security no less than it helps criminals and spies identify vulnerable targets. Thus, massively parallel sensor fusion may undermine social cohesion, if it proves to be fundamentally incompatible with Fourth-Amendment guarantees against unreasonable search." In general, the intelligence community views the Internet of things as a rich source of data.
On 31 January 2019, The Washington Post wrote an article regarding the security and ethical challenges that can occur with IoT doorbells and cameras: "Last month, Ring got caught allowing its team in Ukraine to view and annotate certain user videos; the company says it only looks at publicly shared videos and those from Ring owners who provide consent. Just last week, a California family's Nest camera let a hacker take over and broadcast fake audio warnings about a missile attack, not to mention peer in on them, when they used a weak password."
There have been a range of responses to concerns over security. The Internet of Things Security Foundation (IoTSF) was launched on 23 September 2015 with a mission to secure the Internet of things by promoting knowledge and best practice. Its founding board is made from technology providers and telecommunications companies. In addition, large IT companies are continually developing innovative solutions to ensure the security of IoT devices. In 2017, Mozilla launched Project Things, which allows routing IoT devices through a safe Web of Things gateway. As per the estimates from KBV Research, the overall IoT security market would grow at a 27.9% rate during 2016–2022 as a result of growing infrastructural concerns and diversified usage of Internet of things.
Governmental regulation is argued by some to be necessary to secure IoT devices and the wider Internet – as market incentives to secure IoT devices are insufficient. It was found that due to the nature of most IoT development boards, they generate predictable and weak keys, making them easy targets for man-in-the-middle attacks. However, various hardening approaches have been proposed by many researchers to resolve the issue of weak SSH implementations and weak keys.
IoT security within the field of manufacturing presents different challenges and varying perspectives. Within the EU and Germany, data protection is constantly referenced throughout manufacturing and digital policy, particularly that of I4.0. However, the attitude towards data security differs from the enterprise perspective, where there is less emphasis on data protection in the form of GDPR, as the data being collected from IoT devices in the manufacturing sector does not display personal details. Yet, research has indicated that manufacturing experts are concerned about "data security for protecting machine technology from international competitors with the ever-greater push for interconnectivity".
Safety
IoT systems are typically controlled by event-driven smart apps that take as input either sensed data, user inputs, or other external triggers (from the Internet) and command one or more actuators towards providing different forms of automation. Examples of sensors include smoke detectors, motion sensors, and contact sensors. Examples of actuators include smart locks, smart power outlets, and door controls. Popular control platforms on which third-party developers can build smart apps that interact wirelessly with these sensors and actuators include Samsung's SmartThings, Apple's HomeKit, and Amazon's Alexa, among others.
A problem specific to IoT systems is that buggy apps, unforeseen bad app interactions, or device/communication failures, can cause unsafe and dangerous physical states, e.g., "unlock the entrance door when no one is at home" or "turn off the heater when the temperature is below 0 degrees Celsius and people are sleeping at night". Detecting flaws that lead to such states, requires a holistic view of installed apps, component devices, their configurations, and more importantly, how they interact. Recently, researchers from the University of California Riverside have proposed IotSan, a novel practical system that uses model checking as a building block to reveal "interaction-level" flaws by identifying events that can lead the system to unsafe states. They have evaluated IotSan on the Samsung SmartThings platform. From 76 manually configured systems, IotSan detects 147 vulnerabilities (i.e., violations of safe physical states/properties).
Design
Given widespread recognition of the evolving nature of the design and management of the Internet of things, sustainable and secure deployment of IoT solutions must design for "anarchic scalability". Application of the concept of anarchic scalability can be extended to physical systems (i.e. controlled real-world objects), by virtue of those systems being designed to account for uncertain management futures. This hard anarchic scalability thus provides a pathway forward to fully realize the potential of Internet-of-things solutions by selectively constraining physical systems to allow for all management regimes without risking physical failure.
Brown University computer scientist Michael Littman has argued that successful execution of the Internet of things requires consideration of the interface's usability as well as the technology itself. These interfaces need to be not only more user-friendly but also better integrated: "If users need to learn different interfaces for their vacuums, their locks, their sprinklers, their lights, and their coffeemakers, it's tough to say that their lives have been made any easier."
Environmental sustainability impact
A concern regarding Internet-of-things technologies pertains to the environmental impacts of the manufacture, use, and eventual disposal of all these semiconductor-rich devices. Modern electronics are replete with a wide variety of heavy metals and rare-earth metals, as well as highly toxic synthetic chemicals. This makes them extremely difficult to properly recycle. Electronic components are often incinerated or placed in regular landfills. Furthermore, the human and environmental cost of mining the rare-earth metals that are integral to modern electronic components continues to grow. This leads to societal questions concerning the environmental impacts of IoT devices over their lifetime.
Intentional obsolescence of devices
The Electronic Frontier Foundation has raised concerns that companies can use the technologies necessary to support connected devices to intentionally disable or "brick" their customers' devices via a remote software update or by disabling a service necessary to the operation of the device. In one example, home automation devices sold with the promise of a "Lifetime Subscription" were rendered useless after Nest Labs acquired Revolv and made the decision to shut down the central servers the Revolv devices had used to operate. As Nest is a company owned by Alphabet (Google's parent company), the EFF argues this sets a "terrible precedent for a company with ambitions to sell self-driving cars, medical devices, and other high-end gadgets that may be essential to a person's livelihood or physical safety."
Owners should be free to point their devices to a different server or collaborate on improved software. But such action violates the United States DMCA section 1201, which only has an exemption for "local use". This forces tinkerers who want to keep using their own equipment into a legal grey area. EFF thinks buyers should refuse electronics and software that prioritize the manufacturer's wishes above their own.
Examples of post-sale manipulations include Google Nest Revolv, disabled privacy settings on Android, Sony disabling Linux on PlayStation 3, and enforced EULA on Wii U.
Confusing terminology
Kevin Lonergan at Information Age, a business technology magazine, has referred to the terms surrounding the IoT as a "terminology zoo". The lack of clear terminology is not "useful from a practical point of view" and a "source of confusion for the end user". A company operating in the IoT space could be working in anything related to sensor technology, networking, embedded systems, or analytics. According to Lonergan, the term IoT was coined before smart phones, tablets, and devices as we know them today existed, and there is a long list of terms with varying degrees of overlap and technological convergence: Internet of things, Internet of everything (IoE), Internet of goods (supply chain), industrial Internet, pervasive computing, pervasive sensing, ubiquitous computing, cyber-physical systems (CPS), wireless sensor networks (WSN), smart objects, digital twin, cyberobjects or avatars, cooperating objects, machine to machine (M2M), ambient intelligence (AmI), Operational technology (OT), and information technology (IT). Regarding IIoT, an industrial sub-field of IoT, the Industrial Internet Consortium's Vocabulary Task Group has created a "common and reusable vocabulary of terms" to ensure "consistent terminology" across publications issued by the Industrial Internet Consortium. IoT One has created an IoT Terms Database including a New Term Alert to be notified when a new term is published. The database aggregates 807 IoT-related terms, while keeping material "transparent and comprehensive".
Adoption barriers
Lack of interoperability and unclear value propositions
Despite a shared belief in the potential of the IoT, industry leaders and consumers are facing barriers to adopt IoT technology more widely. Mike Farley argued in Forbes that while IoT solutions appeal to early adopters, they either lack interoperability or a clear use case for end-users. A study by Ericsson regarding the adoption of IoT among Danish companies suggests that many struggle "to pinpoint exactly where the value of IoT lies for them".
Privacy and security concerns
As for IoT, especially in regards to consumer IoT, information about a user's daily routine is collected so that the "things" around the user can cooperate to provide better services that fulfill personal preferences. When the collected information which describes a user in detail travels through multiple hops in a network, due to a diverse integration of services, devices and networks, the information stored on a device is vulnerable to privacy violation through the compromise of nodes existing in the IoT network.
For example, on 21 October 2016, multiple distributed denial of service (DDoS) attacks targeted systems operated by the domain name system provider Dyn, which caused the inaccessibility of several websites, such as GitHub, Twitter, and others. This attack was executed through a botnet consisting of a large number of IoT devices including IP cameras, gateways, and even baby monitors.
Fundamentally there are 4 security objectives that the IoT system requires: (1) data confidentiality: unauthorised parties cannot have access to the transmitted and stored data; (2) data integrity: intentional and unintentional corruption of transmitted and stored data must be detected; (3) non-repudiation: the sender cannot deny having sent a given message; (4) data availability: the transmitted and stored data should be available to authorised parties even in the presence of denial-of-service (DoS) attacks.
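As a small illustration of the integrity objective above, the sketch below uses a keyed hash (HMAC) so that modification of a sensor message in transit can be detected; the shared key and message fields are made-up examples, and a complete design would also need key distribution, replay protection, and encryption for confidentiality.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-key-not-for-production"   # illustrative key only

def sign(message: dict) -> dict:
    """Attach an HMAC-SHA256 tag so tampering in transit can be detected."""
    body = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": message, "tag": tag}

def verify(envelope: dict) -> bool:
    """Recompute the tag and compare it in constant time."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

envelope = sign({"device_id": "sensor-a", "celsius": 21.4})
print(verify(envelope))                      # True: message intact

envelope["body"]["celsius"] = 99.9           # simulated tampering
print(verify(envelope))                      # False: integrity check fails
```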
Information privacy regulations also require organisations to practice "reasonable security". California's SB-327 Information privacy: connected devices "would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorised access, destruction, use, modification, or disclosure, as specified". As each organisation's environment is unique, it can prove challenging to demonstrate what "reasonable security" is and what potential risks could be involved for the business. Oregon's HB 2395 similarly requires a person that manufactures, sells or offers to sell a connected device to equip it with reasonable security features that protect the device and the information it collects, contains, stores or transmits from access, destruction, modification, use or disclosure that the consumer does not authorise.
According to antivirus provider Kaspersky, there were 639 million data breaches of IoT devices in 2020 and 1.5 billion breaches in the first six months of 2021.
Traditional governance structure
A study issued by Ericsson regarding the adoption of Internet of things among Danish companies identified a "clash between IoT and companies' traditional governance structures, as IoT still presents both uncertainties and a lack of historical precedence." Among the respondents interviewed, 60 percent stated that they "do not believe they have the organizational capabilities, and three of four do not believe they have the processes needed, to capture the IoT opportunity." This has led to a need to understand organizational culture in order to facilitate organizational design processes and to test new innovation management practices. A lack of digital leadership in the age of digital transformation has also stifled innovation and IoT adoption to a degree that many companies, in the face of uncertainty, "were waiting for the market dynamics to play out", or further action in regards to IoT "was pending competitor moves, customer pull, or regulatory requirements". Some of these companies risk being "kodaked" – "Kodak was a market leader until digital disruption eclipsed film photography with digital photos" – failing to "see the disruptive forces affecting their industry" and "to truly embrace the new business models the disruptive change opens up". Scott Anthony has written in Harvard Business Review that Kodak "created a digital camera, invested in the technology, and even understood that photos would be shared online" but ultimately failed to realize that "online photo sharing was the new business, not just a way to expand the printing business."
Business planning and project management
According to a 2018 study, 70–75% of IoT deployments were stuck in the pilot or prototype stage, unable to reach scale due in part to a lack of business planning.
Even though scientists, engineers, and managers across the world are continuously working to create and exploit the benefits of IoT products, there are some flaws in the governance, management and implementation of such projects. Despite tremendous forward momentum in the field of information and other underlying technologies, IoT still remains a complex area and the problem of how IoT projects are managed still needs to be addressed. IoT projects must be run differently than simple and traditional IT, manufacturing or construction projects. Because IoT projects have longer project timelines, a lack of skilled resources and several security/legal issues, there is a need for new and specifically designed project processes. The following management techniques should improve the success rate of IoT projects:
A separate research and development phase
A Proof-of-Concept/Prototype before the actual project begins
Project managers with interdisciplinary technical knowledge
Universally defined business and technical jargon
See also
Ambient IoT
Artificial intelligence of things
Automotive security
Cloud manufacturing
Data Distribution Service
Digital object memory
Electric Dreams (film)
Four-dimensional product
Fourth Industrial Revolution
Indoor positioning system
Internet of Musical Things
IoT security device
Matter
OpenWSN
Quantified self
Responsive computer-aided design
Notes
References
Bibliography
Ambient intelligence
Technology assessments
Computing and society
Digital technology
21st-century inventions | Internet of things | [
"Technology"
] | 15,108 | [
"Information and communications technology",
"Technology assessments",
"Digital technology",
"Computing and society",
"Ambient intelligence"
] |
12,059,291 | https://en.wikipedia.org/wiki/Homologation%20reaction | In organic chemistry, a homologation reaction, also known as homologization, is any chemical reaction that converts the reactant into the next member of the homologous series. A homologous series is a group of compounds that differ by a constant unit, generally a methylene (CH2) group. The reactants undergo homologation when the number of repeated structural units in the molecule is increased. The most common homologation reactions increase the number of methylene (CH2) units in a saturated chain within the molecule. For example, the reaction of aldehydes or ketones with diazomethane or methoxymethylenetriphenylphosphine gives the next homologue in the series.
Examples of homologation reactions include:
Kiliani-Fischer synthesis, where an aldose molecule is elongated through a three-step process consisting of:
Nucleophilic addition of cyanide to the carbonyl to form a cyanohydrin
Hydrolysis to form a lactone
Reduction to form the homologous aldose
Wittig reaction of an aldehyde with methoxymethylenetriphenylphosphine, which produces a homologous aldehyde.
Arndt–Eistert reaction is a series of chemical reactions designed to convert a carboxylic acid to a higher carboxylic acid homologue (i.e. contains one additional carbon atom)
Kowalski ester homologation, an alternative to the Arndt–Eistert synthesis. It has been used to prepare β-amino esters from α-amino esters via an ynolate intermediate.
Seyferth–Gilbert homologation in which an aldehyde is converted to a terminal alkyne and then hydrolyzed back to an aldehyde.
Some reactions increase the chain length by more than one unit. For example, the DeMayo reaction can be considered a two-carbon homologation reaction.
Chain reduction
Likewise the chain length can also be reduced:
In the Gallagher–Hollander degradation (1946) pyruvic acid is removed from a linear aliphatic carboxylic acid yielding a new acid with 2 carbon atoms less. The original publication concerns the conversion of bile acid in a series of reactions: acid chloride (2) formation with thionyl chloride, diazoketone formation (3) with diazomethane, chloromethyl ketone formation (4) with hydrochloric acid, organic reduction of chlorine to methylketone (5), ketone halogenation to 6, elimination reaction with pyridine to enone 7 and finally oxidation with chromium trioxide to bisnorcholanic acid 8.
In the Hooker reaction (1936) the alkyl chain of a certain naphthoquinone (a phenomenon first observed in the compound lapachol) is shortened by one methylene unit, released as carbon dioxide, in each potassium permanganate oxidation.
Mechanistically oxidation causes ring-cleavage at the alkene group, extrusion of carbon dioxide in decarboxylation with subsequent ring-closure.
See also
Homologous series
References
Carbon-carbon bond forming reactions | Homologation reaction | [
"Chemistry"
] | 658 | [
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
12,059,679 | https://en.wikipedia.org/wiki/Umbrella%20sampling | Umbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system (or different systems) where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977. It is a particular physical application of the more general importance sampling in statistics.
Systems in which an energy barrier separates two regions of configuration space may suffer from poor sampling. In Metropolis Monte Carlo runs, the low probability of overcoming the potential barrier can leave inaccessible configurations poorly sampled—or even entirely unsampled—by the simulation. An easily visualised example occurs with a solid at its melting point: considering the state of the system with an order parameter Q, both liquid (low Q) and solid (high Q) phases are low in energy, but are separated by a free-energy barrier at intermediate values of Q. This prevents the simulation from adequately sampling both phases.
Umbrella sampling is a means of "bridging the gap" in this situation. The standard Boltzmann weighting for Monte Carlo sampling is replaced by a potential chosen to cancel the influence of the energy barrier present. The Markov chain generated has a distribution given by

π(rN) ∝ w(rN) exp(−U(rN)/kBT),

with U the potential energy and w(rN) a function chosen to promote configurations that would otherwise be inaccessible to a Boltzmann-weighted Monte Carlo run. In the example above, w may be chosen such that w = w(Q), taking high values at intermediate Q and low values at low/high Q, facilitating barrier crossing.
Values for a thermodynamic property A deduced from a sampling run performed in this manner can be transformed into canonical-ensemble values by applying the formula

⟨A⟩ = ⟨A/w⟩π / ⟨1/w⟩π,

with the subscript π indicating values from the umbrella-sampled simulation.
The effect of introducing the weighting function w(rN) is equivalent to adding a biasing potential

V(rN) = −kBT ln w(rN)

to the potential energy of the system.
If the biasing potential is strictly a function of a reaction coordinate or order parameter ξ, then the (unbiased) free-energy profile along the reaction coordinate can be calculated by subtracting the biasing potential from the biased free-energy profile (up to an additive constant):

A(ξ) = Abiased(ξ) − V(ξ),

where A(ξ) is the free-energy profile of the unbiased system, and Abiased(ξ) is the free-energy profile calculated for the biased, umbrella-sampled system.
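The following is a minimal numerical sketch of that subtraction, assuming a one-dimensional double-well potential, a single harmonic umbrella window, and a simple Metropolis sampler; the potential, bias parameters, and histogram grid are illustrative choices rather than a production umbrella-sampling workflow, which would use many windows and WHAM.

```python
import math
import random

KT = 1.0                                   # thermal energy in reduced units

def potential(x):
    """Double-well potential with a barrier at x = 0."""
    return (x**2 - 1.0)**2

def bias(x, center=0.0, k=10.0):
    """Harmonic umbrella potential pulling samples toward the barrier region."""
    return 0.5 * k * (x - center)**2

# Metropolis sampling of the *biased* distribution exp(-(U + V)/kT).
samples, x = [], -1.0
for step in range(50_000):
    x_new = x + random.uniform(-0.2, 0.2)
    dE = (potential(x_new) + bias(x_new)) - (potential(x) + bias(x))
    if dE <= 0 or random.random() < math.exp(-dE / KT):
        x = x_new
    samples.append(x)

# Histogram the biased samples along the reaction coordinate.
bins = [(-1.5 + 0.1 * i, -1.4 + 0.1 * i) for i in range(30)]
counts = [sum(lo <= s < hi for s in samples) for lo, hi in bins]

# Unbiased profile: A(x) = -kT ln p_biased(x) - V(x), up to an additive constant.
for (lo, hi), count in zip(bins, counts):
    if count:
        center = 0.5 * (lo + hi)
        a_biased = -KT * math.log(count / len(samples))
        print(f"x = {center:5.2f}   A(x) ~ {a_biased - bias(center):6.2f}")
```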
Series of umbrella sampling simulations can be analyzed using the weighted histogram analysis method (WHAM) or its generalization. WHAM can be derived using the maximum likelihood method.
Subtleties exist in deciding the most computationally efficient way to apply the umbrella sampling method, as described in Frenkel and Smit's book Understanding Molecular Simulation.
Alternatives to umbrella sampling for computing potentials of mean force or reaction rates are free-energy perturbation and transition interface sampling. A further alternative, which functions in full non-equilibrium, is S-PRES.
References
Further reading
Daan Frenkel and Berend Smit: "Understanding Molecular Simulation: From Algorithms to Applications". Academic Press 2001,
Johannes Kästner: “Umbrella Sampling”, WIREs Computational Molecular Science 1, 932 (2011) doi:10.1002/wcms.66
Monte Carlo methods
Molecular dynamics
Computational chemistry
Computational physics
Theoretical chemistry | Umbrella sampling | [
"Physics",
"Chemistry"
] | 656 | [
"Molecular physics",
"Monte Carlo methods",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
"nan"
] |
12,060,033 | https://en.wikipedia.org/wiki/NOBIN | NOBIN (2-amino-2'-hydroxy-1,1'-binaphthyl) is an organic molecule used for asymmetric catalysis. NOBIN is related to BINOL and other analogs by both having a chiral axis and being a scaffold for certain chemical reactions. NOBIN is an excellent catalyst for the aldol reaction producing reliable products, good yields, and excellent diastereoselectivity.
Though rotation around the bond joining the rings is limited by the hydrogen atoms, enantiomerically pure NOBIN may racemize upon heating.
NOBIN is prepared by oxidative cross coupling of 2-naphthol and 2-naphthylamine. The oxidative source is metal ions in solution, such as Fe2+ or a Cu2+ amine complex. Once racemic NOBIN is produced, it needs to be resolved. One method for this is the use of camphorsulfonic acid, in which the basic group of NOBIN is used to form a diastereomeric salt of one enantiomer; the other enantiomer, however, stays in solution.
References
Kocovsky, Smrcina, Lorenc, Hanus, Synlett, 1991, 231.
Ding, Li, "Ten years of research on NOBIN chemistry", Current Organic Synthesis, 2005, 2, 499–545.
Catalysts
2-Naphthols
Naphthylamines | NOBIN | [
"Chemistry"
] | 301 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
12,060,085 | https://en.wikipedia.org/wiki/Digital%20signal%20controller | A digital signal controller (DSC) is a hybrid of microcontrollers and digital signal processors (DSPs). Like microcontrollers, DSCs have fast interrupt responses, offer control-oriented peripherals like PWMs and watchdog timers, and are usually programmed using the C programming language, although they can be programmed using the device's native assembly language. On the DSP side, they incorporate features found on most DSPs such as single-cycle multiply–accumulate (MAC) units, barrel shifters, and large accumulators. Not all vendors have adopted the term DSC. The term was first introduced by Microchip Technology in 2002 with the launch of their 6000 series DSCs and subsequently adopted by most, but not all DSC vendors. For example, Infineon and Renesas refer to their DSCs as microcontrollers.
DSCs are used in a wide range of applications, but the majority go into motor control, power conversion, and sensor processing applications. Currently, DSCs are being marketed as green technologies for their potential to reduce power consumption in electric motors and power supplies.
In order of market share, the top three DSC vendors are Texas Instruments, Freescale, and Microchip Technology, according to market research firm Forward Concepts (2007). These three companies dominate the DSC market, with other vendors such as Infineon and Renesas holding smaller shares.
DSC chips
NOTE: Data is from 2012 (Microchip and TI), and the table currently only includes offerings from the top three DSC vendors.
DSC software
DSCs, like microcontrollers and DSPs, require software support. A growing number of software packages offer the features required by both DSP applications and microcontroller applications. Because the set of requirements is broader, suitable software solutions are rarer: they must provide development tools, DSP libraries, optimization for DSP processing, fast interrupt handling, multi-threading, and a tiny footprint.
References
Microcontrollers
Digital signal processing
Digital signal processors
Integrated circuits | Digital signal controller | [
"Technology",
"Engineering"
] | 433 | [
"Computer engineering",
"Integrated circuits"
] |
12,062,017 | https://en.wikipedia.org/wiki/Osteoimmunology | Osteoimmunology (όστέον, osteon from Greek, "bone"; from Latin, "immunity"; and λόγος, logos, from Greek "study") is a field that emerged about 40 years ago that studies the interface between the skeletal system and the immune system, comprising the "osteo-immune system". Osteoimmunology also studies the shared components and mechanisms between the two systems in vertebrates, including ligands, receptors, signaling molecules and transcription factors. Over the past decade, osteoimmunology has been investigated clinically for the treatment of bone metastases, rheumatoid arthritis (RA), osteoporosis, osteopetrosis, and periodontitis. Studies in osteoimmunology reveal relationships between molecular communication among blood cells and structural pathologies in the body.
System similarities
The RANKL-RANK-OPG axis (OPG stands for osteoprotegerin) is an example of an important signaling system functioning both in bone and in immune cell communication. RANKL is expressed on osteoblasts and activated T cells, whereas RANK is expressed on osteoclasts and dendritic cells (DCs), both of which can be derived from myeloid progenitor cells. Surface RANKL on osteoblasts, as well as secreted RANKL, provides the necessary signals for osteoclast precursors to differentiate into osteoclasts. RANKL expression on activated T cells leads to DC activation through binding to RANK expressed on DCs. OPG, produced by DCs, is a soluble decoy receptor for RANKL that competitively inhibits RANKL binding to RANK.
Crosstalk
The bone marrow cavity is important for the proper development of the immune system and houses the stem cells that maintain it. Within this space, as well as outside of it, cytokines produced by immune cells also have important effects on regulating bone homeostasis. Important cytokines produced by the immune system, including RANKL, M-CSF, TNF-α, interleukins (ILs), and interferons (IFNs), affect the differentiation and activity of osteoclasts and thus bone resorption. Such inflammatory osteoclastogenesis and osteoclast activation can be seen in ex vivo primary cultures of cells from the inflamed synovial fluid of patients experiencing a flare of the autoimmune disease rheumatoid arthritis.
Clinical osteoimmunology
Clinical osteoimmunology is the study of treating or preventing bone-related diseases caused by disorders of the immune system. Aberrant and/or prolonged activation of the immune system leads to derangement of bone modeling and remodeling. Common diseases caused by disorders of the osteoimmune system are osteoporosis and the bone destruction that accompanies RA, which is characterized by a high infiltration of CD4+ T cells into rheumatoid joints. Two mechanisms are involved. The first operates through the rheumatoid synovial cells in the joints: the synovium contains osteoclast precursors and osteoclast-supporting cells, and synovial macrophages differentiate into osteoclasts with the help of RANKL released from the osteoclast-supporting cells.
The second is an indirect effect on osteoclast differentiation and activity through the secretion of inflammatory cytokines such as IL-1, IL-6, and TNF-α in the rheumatoid synovium, which increase RANKL signaling and ultimately drive bone destruction. One clinical approach to preventing RA-associated bone disease is treatment based on OPG and RANKL in arthritis. There is also some evidence that infections (e.g. respiratory virus infections) can reduce the number of osteoblasts in bone, the key cells involved in bone formation.
See also
Bone metabolism
Osteoimmunology and Osseointegration
HSC
Osteoarthritis
References
Physiology
Branches of immunology | Osteoimmunology | [
"Biology"
] | 855 | [
"Branches of immunology",
"Physiology"
] |