27,302,843
https://en.wikipedia.org/wiki/Common%20Programming%20Interface%20for%20Communications
Common Programming Interface for Communications (CPI-C) is an application programming interface (API) developed by IBM in 1987 to provide a platform-independent communications interface for networks based on IBM Systems Application Architecture (SAA), and to standardise programming access to SNA LU 6.2. CPI-C was part of SAA, an attempt to standardise APIs across all IBM platforms. It was adopted in 1992 by X/Open as an open systems standard, identified as standard C210 and documented in X/Open Developers Specification: CPI-C.

See also
IBM Advanced Program-to-Program Communication

References

External links
Distributed Transaction Processing: The XCPI-C Specification Version 2
CPIC Reference Manual
CPI-C for MVS
Chapter 21. Using CPIC-C for Java, IBM SecureWay Communications Server
Programming with the CPI-C API, John Lyons, 31 May 1997

IBM software Systems Network Architecture Network software
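For flavor, a schematic sketch of the classic CPI-C verb sequence for a client-side conversation follows. This is a hypothetical illustration, not taken from the specification text above: the verbs cminit, cmallc, cmsend, and cmdeal are the standard C bindings, but the header name and the symbolic destination name "SRVRDEST" are assumptions, the parameter lists are abbreviated from the X/Open binding and should be treated as assumptions, and the code needs a platform CPI-C library to compile and link.

/* Sketch: open an LU 6.2 conversation, send one record, and clean up.
 * Assumes a platform-specific CPI-C header (often cmc.h) supplying
 * CM_INT32, CM_OK, and the verb prototypes; error handling abbreviated. */
#include <string.h>
#include "cmc.h"   /* assumption: platform CPI-C header */

int send_one_record(void) {
    unsigned char conv_id[8];            /* conversation ID, set by cminit */
    unsigned char dest[8] = "SRVRDEST";  /* hypothetical symbolic destination */
    unsigned char data[] = "hello, partner";
    CM_INT32 len = (CM_INT32)strlen((char *)data);
    CM_INT32 rts_received, rc;

    cminit(conv_id, dest, &rc);          /* Initialize_Conversation */
    if (rc != CM_OK) return -1;
    cmallc(conv_id, &rc);                /* Allocate: start the conversation */
    if (rc != CM_OK) return -1;
    cmsend(conv_id, data, &len, &rts_received, &rc);  /* Send_Data */
    cmdeal(conv_id, &rc);                /* Deallocate: flush and end */
    return rc == CM_OK ? 0 : -1;
}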
Common Programming Interface for Communications
Engineering
191
43,833,681
https://en.wikipedia.org/wiki/HR%206875
HR 6875, previously known as Sigma Telescopii, is a single star in the constellation Corona Australis. It has a blue-white hue and is dimly visible to the naked eye with an apparent visual magnitude of 5.24. This object is located at a distance of approximately 550 light-years from the Sun based on parallax. It is listed as a member of the Sco OB2 association. This is a hot B-type main-sequence star with a stellar classification of B3 V. It is around 103 million years old and is spinning rapidly with a projected rotational velocity of 248 km/s or perhaps higher. The star has six times the mass of the Sun and about four times the Sun's radius. It is radiating more than a thousand times the luminosity of the Sun from its photosphere at an effective temperature of 20,350 K. A magnitude 10.13 visual companion lies at a position angle of 162°.

References

B-type main-sequence stars Scorpius–Centaurus association Corona Australis Telescopii, Sigma Durchmusterung objects 168905 090200 6875
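As a rough consistency check on the quoted radius, temperature, and luminosity, one can apply the Stefan–Boltzmann relation. This is a back-of-the-envelope sketch, not from the article itself; the solar effective temperature of roughly 5772 K is an assumed value:

\[
\frac{L}{L_\odot} \;=\; \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4} \;\approx\; 4^{2} \left(\frac{20\,350\ \mathrm{K}}{5772\ \mathrm{K}}\right)^{4} \;\approx\; 2.5 \times 10^{3},
\]

which agrees with the article's statement of "more than a thousand times the luminosity of the Sun".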
HR 6875
Astronomy
246
60,259,160
https://en.wikipedia.org/wiki/Arogyavani
Arogyavani is an initiative first conceptualized and implemented by the Government of Karnataka. It is a toll-free health helpline (dial 104) that operates 24/7 for the convenience of the general public. The Arogyavani Health Information Helpline provides medically validated advice, information on health-related schemes launched by various governments, and health counseling services, and also registers complaints against healthcare service providers such as doctors and hospitals, as well as complaints about corruption in the health sector.

Presence in other states of India
After its success in Karnataka in partnership with Piramal Swasthya, the Government of Punjab started a 104 Medical Helpline call center by entering into an agreement with Ziqitza Healthcare Limited in June 2014. In the state of Punjab, the services under the Arogyavani initiative are available in three languages: Punjabi, Hindi, and English. In its first year, the 104 medical helpline proved a boon for 47,476 people in the state of Punjab. In the states of Odisha and Chhattisgarh, Ziqitza Healthcare Limited operates the 104 health helpline through a tender process under a public–private partnership (PPP) with the National Health Mission (NHM). The other states where the 104 non-emergency medical helpline is active are Assam, Chhattisgarh, Jharkhand, Gujarat, Rajasthan, Tamil Nadu, Telangana, and Madhya Pradesh.

Objective
The main objective of the 104 Medical Helpline is to provide information and advice on health-related services to members of the public seeking answers or resolutions in the following areas:
Information directory for tracking healthcare providers/institutions, diagnostic services, hospitals, etc.
Complaint registration about a person or institution relating to deficiency of services, negligence, corruption, etc. in government healthcare institutions.
Advice on long-term conditions such as diabetes, heart disease, etc.
Response to health scares and other localized epidemics.
Counseling and advice (stress, depression, anxiety, post-trauma recovery, HIV, AIDS, RTI, STI, etc.)
Health and symptom checker (initial assessment, flu advice, pregnancy-related information, etc.)
First aid information and advice.
Any other health-related services/issues.

References

Government of Karnataka Telephone numbers Three-digit telephone numbers
Arogyavani
Mathematics
439
76,602,972
https://en.wikipedia.org/wiki/Wai-iti%20Dark%20Sky%20Park
The Wai-iti Dark Sky Park is an accredited International Dark Sky Park located near the township of Wakefield in the Tasman District of New Zealand. It covers an area of Tasman District Council land that includes Tunnicliff Forest and the Wai-iti Recreation Reserve. Wai-iti is the first International Dark Sky Park to be designated in New Zealand by DarkSky International. The park is located near Wakefield and Nelson, and is adjacent to the Wai-iti River. The application for dark sky park status was prepared by the Top of the South Dark Sky committee, a group associated with the Nelson Science Society Astronomy Section. Accreditation was announced in July 2020. The application reported that readings of night-sky luminance in the park taken over a period of 5 years have an average value of 21.52 mag/arcsec² (corresponding to Bortle scale 3), with a few individual readings of 21.84 mag/arcsec² (Bortle scale 1). In July 2023, the Top of the South Dark Sky Committee warned the District Council that the Dark Sky Park accreditation was at risk because there had been a 150% increase in light pollution in the park over a period of three years. Factors leading to the increase in light pollution were thought to include the expansion of residential and industrial subdivisions close to the park, increasing street lighting, and the use of 4000 K LED street lights. At the time of the application for accreditation, the District Council had agreed to a lighting management plan, but as of 2023 this had not been implemented. The advocates for the Dark Sky Park urged that luminaires in sensitive areas be refitted with 2200 K amber-phosphor LEDs.

References

External links
Official website

2020 establishments in New Zealand Dark-sky preserves in New Zealand Tasman District
Wai-iti Dark Sky Park
Astronomy
374
1,126,638
https://en.wikipedia.org/wiki/Invariant%20%28mathematics%29
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class. Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects.

Examples
A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting.

An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change.

The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication.

Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios are not invariant under non-uniform scaling (such as stretching). The sum of a triangle's interior angles (180°) is invariant under all the above operations. As another example, all circles are similar: they can be transformed into each other and the ratio of the circumference to the diameter is invariant (denoted by the Greek letter π (pi)).

Some more complicated examples:
The real part and the absolute value of a complex number are invariant under complex conjugation.
The tricolorability of knots.
The degree of a polynomial is invariant under a linear change of variables.
The dimension and homology groups of a topological object are invariant under homeomorphism.
The number of fixed points of a dynamical system is invariant under many mathematical operations.
Euclidean distance is invariant under orthogonal transformations.
Area is invariant under linear maps which have determinant ±1.
Some invariants of projective transformations include collinearity of three or more points, concurrency of three or more lines, conic sections, and the cross-ratio.
The determinant, trace, eigenvectors, and eigenvalues of a linear endomorphism are invariant under a change of basis. In other words, the spectrum of a matrix is invariant under a change of basis.
The principal invariants of tensors do not change with rotation of the coordinate system (see Invariants of tensors).
The singular values of a matrix are invariant under orthogonal transformations.
Lebesgue measure is invariant under translations.
The variance of a probability distribution is invariant under translations of the real line. Hence the variance of a random variable is unchanged after the addition of a constant.
The fixed points of a transformation are the elements in the domain that are invariant under the transformation. They may, depending on the application, be called symmetric with respect to that transformation. For example, objects with translational symmetry are invariant under certain translations.
The integral of the Gaussian curvature of a two-dimensional Riemannian manifold is invariant under changes of the Riemannian metric. This is the Gauss–Bonnet theorem.

MU puzzle
The MU puzzle is a good example of a logical problem where determining an invariant is of use for an impossibility proof. The puzzle asks one to start with the word MI and transform it into the word MU, using in each step one of the following transformation rules:
1. If a string ends with an I, a U may be appended (xI → xIU)
2. The string after the M may be completely duplicated (Mx → Mxx)
3. Any three consecutive I's (III) may be replaced with a single U (xIIIy → xUy)
4. Any two consecutive U's may be removed (xUUy → xy)

An example derivation (with the applied rule number shown in parentheses at each arrow) is

MI →(2) MII →(2) MIIII →(3) MUI →(2) MUIUI →(1) MUIUIU →(2) MUIUIUUIUIU →(4) MUIUIIUIU → ...

In light of this, one might wonder whether it is possible to convert MI into MU, using only these four transformation rules. One could spend many hours applying these transformation rules to strings. However, it might be quicker to find a property that is invariant to all rules (that is, not changed by any of them), and that demonstrates that getting to MU is impossible. By looking at the puzzle from a logical standpoint, one might realize that the only way to get rid of any I's is to have three consecutive I's in the string. This makes the following invariant interesting to consider:

The number of I's in the string is not a multiple of 3.

This is an invariant to the problem, if for each of the transformation rules the following holds: if the invariant held before applying the rule, it will also hold after applying it. Looking at the net effect of applying the rules on the number of I's and U's, one can see this actually is the case for all rules:

Rule | #I's | #U's | Effect on invariant
1    | +0   | +1   | Number of I's is unchanged. If the invariant held, it still does.
2    | ×2   | ×2   | If n is not a multiple of 3, then 2×n is not either. The invariant still holds.
3    | −3   | +1   | If n is not a multiple of 3, n−3 is not either. The invariant still holds.
4    | +0   | −2   | Number of I's is unchanged. If the invariant held, it still does.

The table above shows clearly that the invariant holds for each of the possible transformation rules, which means that whichever rule one picks, at whatever state, if the number of I's was not a multiple of three before applying the rule, then it will not be afterwards either.
Given that there is a single I in the starting string MI, and one is not a multiple of three, one can then conclude that it is impossible to go from MI to MU (as the number of I's will never be a multiple of three).

Invariant set
A subset S of the domain U of a mapping T: U → U is an invariant set under the mapping when x ∈ S implies T(x) ∈ S; that is, T(S) ⊆ S. Note that the elements of S are not fixed, even though the set S is fixed in the power set of U. (Some authors use the terminology setwise invariant, vs. pointwise invariant, to distinguish between these cases.) For example, a circle is an invariant subset of the plane under a rotation about the circle's center. Further, a conical surface is invariant as a set under a homothety of space. An invariant set of an operation T is also said to be stable under T. For example, the normal subgroups that are so important in group theory are those subgroups that are stable under the inner automorphisms of the ambient group. In linear algebra, if a linear transformation T has an eigenvector v, then the line through 0 and v is an invariant set under T, in which case the eigenvectors span an invariant subspace which is stable under T. When T is a screw displacement, the screw axis is an invariant line, though if the pitch is non-zero, T has no fixed points. In probability theory and ergodic theory, invariant sets are usually defined via the stronger property T⁻¹(S) = S. When the map is measurable, invariant sets form a sigma-algebra, the invariant sigma-algebra.

Formal statement
The notion of invariance is formalized in three different ways in mathematics: via group actions, presentations, and deformation.

Unchanged under group action
Firstly, if one has a group G acting on a mathematical object (or set of objects) X, then one may ask which points x are unchanged, "invariant" under the group action, or under an element g of the group. Frequently one will have a group acting on a set X, which leaves one to determine which objects in an associated set F(X) are invariant. For example, rotation in the plane about a point leaves the point about which it rotates invariant, while translation in the plane does not leave any points invariant, but does leave all lines parallel to the direction of translation invariant as lines. Formally, define the set of lines in the plane P as L(P); then a rigid motion of the plane takes lines to lines – the group of rigid motions acts on the set of lines – and one may ask which lines are unchanged by an action. More importantly, one may define a function on a set, such as "radius of a circle in the plane", and then ask if this function is invariant under a group action, such as rigid motions. Dual to the notion of invariants are coinvariants, also known as orbits, which formalizes the notion of congruence: objects which can be taken to each other by a group action. For example, under the group of rigid motions of the plane, the perimeter of a triangle is an invariant, while the set of triangles congruent to a given triangle is a coinvariant. These are connected as follows: invariants are constant on coinvariants (for example, congruent triangles have the same perimeter), while two objects which agree in the value of one invariant may or may not be congruent (for example, two triangles with the same perimeter need not be congruent). In classification problems, one might seek to find a complete set of invariants, such that if two objects have the same values for this set of invariants, then they are congruent.
For example, two triangles with all three corresponding sides equal are congruent under rigid motions, via SSS congruence, and thus the lengths of all three sides form a complete set of invariants for triangles. The three angle measures of a triangle are also invariant under rigid motions, but do not form a complete set, as incongruent triangles can share the same angle measures. However, if one allows scaling in addition to rigid motions, then the AAA similarity criterion shows that this is a complete set of invariants.

Independent of presentation
Secondly, a function may be defined in terms of some presentation or decomposition of a mathematical object; for instance, the Euler characteristic of a cell complex is defined as the alternating sum of the number of cells in each dimension. One may forget the cell complex structure and look only at the underlying topological space (the manifold) – as different cell complexes give the same underlying manifold, one may ask if the function is independent of the choice of presentation, in which case it is an intrinsically defined invariant. This is the case for the Euler characteristic, and a general method for defining and computing invariants is to define them for a given presentation, and then show that they are independent of the choice of presentation. Note that there is no notion of a group action in this sense. The most common examples are:
The presentation of a manifold in terms of coordinate charts – invariants must be unchanged under change of coordinates.
Various manifold decompositions, as discussed for Euler characteristic.
Invariants of a presentation of a group.

Unchanged under perturbation
Thirdly, if one is studying an object which varies in a family, as is common in algebraic geometry and differential geometry, one may ask if the property is unchanged under perturbation (for example, if an object is constant on families or invariant under change of metric).

Invariants in computer science
In computer science, an invariant is a logical assertion that is always held to be true during a certain phase of execution of a computer program. For example, a loop invariant is a condition that is true at the beginning and the end of every iteration of a loop. Invariants are especially useful when reasoning about the correctness of a computer program. The theory of optimizing compilers, the methodology of design by contract, and formal methods for determining program correctness all rely heavily on invariants. Programmers often use assertions in their code to make invariants explicit. Some object-oriented programming languages have a special syntax for specifying class invariants.

Automatic invariant detection in imperative programs
Abstract interpretation tools can compute simple invariants of given imperative computer programs. The kinds of properties that can be found depend on the abstract domains used. Typical example properties are single integer variable ranges like 0 <= x < 1024, relations between several variables like 0 <= i-j < 2*n-1, and modulus information like y % 4 == 0. Academic research prototypes also consider simple properties of pointer structures. More sophisticated invariants generally have to be provided manually. In particular, when verifying an imperative program using the Hoare calculus, a loop invariant has to be provided manually for each loop in the program, which is one of the reasons that this approach is generally impractical for most programs.
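To make the loop-invariant idea concrete, here is a minimal hypothetical sketch in C, not drawn from the article's sources: the asserted property holds on entry to every iteration, and together with the loop's exit condition it establishes the postcondition.

#include <assert.h>
#include <stdio.h>

/* Sums the integers 0..n. Loop invariant: sum == i*(i-1)/2, i.e. at the
 * top of every iteration sum equals the sum of the integers 0..i-1. */
int sum_first_n(int n) {
    int sum = 0;
    int i = 0;
    while (i <= n) {
        assert(sum == i * (i - 1) / 2);  /* invariant made explicit */
        sum += i;
        i++;
    }
    /* On exit i == n+1, so the invariant gives sum == n*(n+1)/2. */
    return sum;
}

int main(void) {
    printf("%d\n", sum_first_n(10));  /* prints 55 */
    return 0;
}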
In the context of the above MU puzzle example, there is currently no general automated tool that can detect that a derivation from MI to MU is impossible using only the rules 1–4. However, once the abstraction from the string to the number of its "I"s has been made by hand, leading, for example, to the following C program, an abstract interpretation tool will be able to detect that ICount % 3 cannot be 0, and hence the "while"-loop will never terminate.

void MUPuzzle(void) {
    volatile int RandomRule;
    int ICount = 1, UCount = 0;
    while (ICount % 3 != 0)  // non-terminating loop
        switch (RandomRule) {
            case 1: UCount += 1; break;
            case 2: ICount *= 2; UCount *= 2; break;
            case 3: ICount -= 3; UCount += 1; break;
            case 4: UCount -= 2; break;
        }
    // computed invariant: ICount % 3 == 1 || ICount % 3 == 2
}

See also
Erlangen program
Graph invariant
Invariant differential operator
Invariant estimator in statistics
Invariant measure
Invariant (physics)
Invariants of tensors
Invariant theory
Knot invariant
Mathematical constant
Mathematical constants and functions
Scale invariance
Symmetry in mathematics
Topological invariant
Young–Deruyts development

Notes

References
J.D. Fokker, H. Zantema, S.D. Swierstra (1991). "Iteratie en invariatie", Programmeren en Correctheid. Academic Service.

External links
"Applet: Visual Invariants in Sorting Algorithms" by William Braynen in 1997

Mathematical terminology
Invariant (mathematics)
Mathematics
3,333
5,863,585
https://en.wikipedia.org/wiki/Nabumetone
Nabumetone, sold under the brand name Relafen among others, is a nonsteroidal anti-inflammatory drug (NSAID). Nabumetone was developed by Beecham and first received regulatory approval in 1991. Nabumetone is a non-acidic NSAID prodrug that is rapidly metabolized in the liver to the active metabolite, 6-methoxy-2-naphthyl acetic acid. Nabumetone's active metabolite inhibits the cyclooxygenase enzyme and preferentially blocks COX-2 activity (which is indirectly responsible for the production of inflammation and pain during arthritis). The active metabolite of nabumetone is thought to be the compound primarily responsible for its therapeutic effect; by comparison, the parent drug is a poor inhibitor of COX-2 byproducts, particularly prostaglandins. It may be less nephrotoxic than indomethacin. There are two known polymorphs of the compound. Nabumetone has little effect on renal prostaglandin secretion and less of an association with heart failure than other traditional drugs of the class. Effects of nabumetone on blood pressure control in hypertensive patients on ACE inhibitors are also good, equivalent to those of paracetamol. In 2022, it was the 239th most commonly prescribed medication in the United States, with more than 1 million prescriptions.

Medical uses
Nabumetone is indicated for relief of the signs and symptoms of osteoarthritis and rheumatoid arthritis.

Side effects
Side effects include bloody or black, tarry stools; change in color, frequency, or amount of urine; chest pain; shortness of breath; coughing up blood; pale stools; numbness; weakness; flu-like symptoms; leg pain; vision problems; speech problems; problems walking; weight gain; stomach pain; cold sweat; skin rash; blisters; headache; swelling; bleeding; bruising; vomiting blood; jaundice; diarrhea; constipation; dizziness; indigestion; gas; nausea; and ringing in the ears. In October 2020, the US Food and Drug Administration (FDA) required the prescription drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. They recommend avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy.

Society and culture
Brand names
It is sold under many brand names, including Relafen, Relifex, and Gambaran.

References

Nonsteroidal anti-inflammatory drugs Prodrugs Naphthol ethers Ketones
Nabumetone
Chemistry
566
73,555,762
https://en.wikipedia.org/wiki/Plop%20Boot%20Manager
The Plop Boot Manager is a proprietary bootloader written by Elmar Hanlhofer. Plop Boot Manager can make computers boot from media that the original BIOS does not support, such as USB drives or IDE CD/DVDs. Optionally, Plop can be installed directly onto the hard disk of a computer.

References

External links

Boot loaders
Plop Boot Manager
Technology
74
77,727,630
https://en.wikipedia.org/wiki/List%20of%20Wolf-Rayet%20stars
This is a list of Wolf–Rayet stars, in order of their distance from Earth.

List
Milky Way Galaxy

Magellanic Clouds
The Large Magellanic Cloud (LMC) is around 163 kly distant and the Small Magellanic Cloud (SMC) is around 204 kly distant.

Andromeda Galaxy and Triangulum Galaxy
The Andromeda Galaxy (M31) is 2.5 Mly distant and the Triangulum Galaxy is around 3.2 Mly distant.

Other Galaxies

See also
List of luminous blue variable stars
List of O-type stars

References

Wolf–Rayet stars Lists of stars Star systems Lists by distance O-type stars B-type stars
List of Wolf-Rayet stars
Physics,Astronomy
140
2,137,376
https://en.wikipedia.org/wiki/HMGN
HMGN (High Mobility Group Nucleosome-binding) proteins are members of the broader class of high mobility group (HMG) chromosomal proteins that are involved in the regulation of transcription, replication, recombination, and DNA repair. HMGN1 and HMGN2 (initially designated HMG-14 and HMG-17 respectively) were discovered by E.W. Johns' research group in the early 1970s. HMGN3, HMGN4, and HMGN5 were discovered later and are less abundant. HMGNs are nucleosome-binding proteins that assist in transcription, replication, recombination, and DNA repair. They can also alter the chromatin epigenetic landscape, helping to stabilize cell identity. Relatively little is still known about their structure and function. HMGN proteins are found in all vertebrates, and play a role in chromatin structure and histone modification. HMGNs are long chains of amino acids, containing around 100 residues in HMGN1-4 and roughly 200 in HMGN5. Recent research on the HMGN family focuses on their effect on cell identity, and on how reduction of HMGNs relates to induced reprogramming of mouse embryonic fibroblasts (MEFs).

Function
Much of the research on HMGN proteins has been done in vitro, while relatively little is known about the in vivo function and roles of HMGN proteins. Because these proteins are predominantly found in higher eukaryotes, the use of microorganisms and other lower eukaryotes has been deemed insufficient to determine the in vivo roles of HMGN proteins. A study was done with knockout mice to see what effect, if any, HMGN proteins have at the whole-organism level. The mice showed increased sensitivity to UV radiation when they had less than normal levels of HMGN(2), indicating that HMGN might facilitate repair of UV damage. The same increase in sensitivity was observed in mice exposed to gamma radiation; however, the cellular processes that repair DNA in the two cases are drastically different, leaving it inconclusive whether HMGN proteins facilitate DNA repair in vivo. HMGN1 and HMGN2 do not co-localize within living cells, an indication that each HMGN may have a different role.

Family
HMGN proteins are part of a broader group of proteins referred to as High Mobility Group chromosomal (HMG) proteins. This larger group was named for its high electrophoretic mobility in polyacrylamide gels and is differentiated into three distinct but related groups, one of them being the HMGN proteins. The HMGN family can be further divided into the specific proteins HMGN1, HMGN2, HMGN3, HMGN4, and HMGN5. The overall sizes of the proteins vary, but HMGN1-4 average 100 amino acids, whereas the larger HMGN5 proteins are 300+ amino acids long in mice and roughly 200 in humans.

HMGN1 and HMGN2
HMGN1 and HMGN2 are among the most common of the HMGN proteins. Their main function is to reduce the compaction of cellular chromatin by nucleosome binding. NMR evidence shows that this reduction in compaction occurs when the proteins target the main elements responsible for the compaction of the chromatin. Their expression rates correlate with the differentiation state of the cells in which they are present: areas that have undergone differentiation show reduced expression levels in comparison to undifferentiated areas, where HMGN1 and HMGN2 are highly expressed.

HMGN3
HMGN3 has two variants, HMGN3a and HMGN3b. Unlike the HMGN1 and HMGN2 proteins, both forms of HMGN3 tend to be tissue- and development-specific.
They are only expressed in certain tissues at specific developmental stages. Neither of the two variants shows a preference for a particular tissue; either is equally likely to be present in a tissue where HMGN3 is highly expressed. The brain and the eyes in particular are areas where HMGN3 is heavily expressed, as are adult pancreatic islet cells. It has been shown that the loss of HMGN3 in mice leads to a mild onset of diabetes due to ineffective insulin secretion.

HMGN4
HMGN4 was identified during a GenBank database search as a "new HMGN2 like transcript", indicating that HMGN4 is closely related to HMGN2. There has been very little research done on HMGN4 proteins. The gene associated with the production of HMGN4 is located in a region of chromosome 6 associated with schizophrenia. While every other kind of HMGN has been identified across the vertebrates, HMGN4 has only been seen and identified in primates. Within humans, HMGN4 shows high levels of expression in the thyroid, thymus, and lymph nodes.

HMGN5
The most recent addition to the HMGN protein family is HMGN5. It is larger than the previous HMGNs, containing 300+ amino acids, due to a long C-terminal domain that varies between species, explaining why mice and humans have different sizes of HMGN5. Its biological function is unknown, but it is expressed during placental development. There have also been cases where HMGN5 was present in human tumors, including prostate cancer, breast cancer, and lung cancer. For this reason, it is thought that HMGN5 might have some link to cancer and might be a potential target for cancer therapy in the future.

Binding of HMGN proteins to chromatin
The location of HMGN during mitosis is the subject of several studies. It is very difficult to date their intra-nuclear organization during the various stages of the cell cycle. There is a superfamily of abundant and ubiquitous nuclear proteins that bind to chromatin without any known DNA sequence specificity, composed of the HMGA, HMGB, and HMGN families. HMGA is associated with chromatin throughout the cell cycle, located in the scaffold of the metaphase chromosome. Both HMGB and HMGN are associated with the mitotic chromosome. The interactions of all HMGs with chromatin are highly dynamic; the proteins move constantly throughout the nucleus. They sample nucleosomes for potential binding sites in a "stop and go" manner, with the "stop" step being longer than the "go" step. This was determined through immunofluorescence studies, live cell imaging, gel mobility shift assays, and bimolecular fluorescence complementation, and by comparing the chromatin binding properties of wild-type and mutant HMGN proteins. In conclusion, HMGNs can associate with mitotic chromatin. However, the binding of HMGN to mitotic chromatin does not depend on a functional HMGN nucleosomal binding domain, and is weaker than the binding to interphase nucleosomes, with which HMGNs form specific complexes.

H1 competition and chromatin remodeling
Nucleosomes serve as the protein core (made from 8 histones) for DNA to wrap around, functioning as a foundation for the larger and more condensed chromatin structures of chromosomes. HMGN proteins compete with histone H1 (a linker histone not part of the core nucleosome) for nucleosome binding sites. Once a site is occupied, one protein cannot displace the other.
However, neither protein is permanently associated with the nucleosomes; both can be removed via post-translational modifications. In the case of HMGN proteins, Protein kinase C (PKC) can phosphorylate the serine amino acids in the nucleosome binding domain present in all HMGN variants. This gives HMGNs a mobile character, as they are continuously able to bind and unbind nucleosomes depending on the intracellular environment and signaling. Active competition between HMGNs and H1 plays an active role in chromatin remodeling and, as a result, in the cell cycle and cellular differentiation, where chromatin compaction and de-compaction determine whether certain genes are expressed. Histone acetylation is usually associated with open chromatin, and histone methylation is usually associated with closed chromatin. With the use of ChIP-sequencing it is possible to study DNA paired with proteins to determine what kinds of histone modifications are present when the nucleosomes are bound to either H1 or HMGNs. Using this method it was found that H1 presence corresponded to high levels of H3K27me3 and H3K4me3, meaning that the H3 histone is heavily methylated, which suggests that the chromatin structure is closed. It was also found that HMGN presence corresponded to high levels of H3K27ac and H3K4me1, conversely meaning that H3 histone methylation is greatly reduced, suggesting that the chromatin structure is open.

Transcriptional activity and cellular differentiation
Functional compensation
While the roles of HMGNs are still being researched, it is clear that the absence of HMGNs in knock-out (KO) and knock-down (KD) studies results in a significant difference in a cell's total transcriptional activity. Several transcriptome studies have been conducted which show that various other genes are either up-regulated or down-regulated due to HMGN absence. Interestingly, in the case of HMGN1 and HMGN2, knocking out only HMGN1 or only HMGN2 results in changes for just a few genes, but knocking out both has a far more pronounced effect on gene activity. For example, in the mouse brain, when only HMGN1 was knocked out, only 1 gene was up-regulated; when only HMGN2 was knocked out, 19 genes were up-regulated and 29 down-regulated; but when both HMGN1 and HMGN2 were knocked out, 50 genes were up-regulated and 41 down-regulated. Simply tallying the totals for the HMGN1 and HMGN2 knock-outs does not give the same results as an HMGN1&2 DKO (double knock-out). This is described as functional compensation, since HMGN1 and HMGN2 differ only slightly in protein structure and essentially do the same thing. They have largely the same affinity for nucleosomal binding sites, which means that often if HMGN1 is absent, HMGN2 can fill in, and vice versa. Using ChIP-seq it was found that in mouse chromosomes there were 16.5K sites where both HMGN1 and HMGN2 could bind, 14.6K sites with an HMGN1 preference, and only 6.4K sites with an HMGN2 preference. Differences in HMGN1 and HMGN2 activity are pronounced in the brain, thymus, liver, and spleen, suggesting HMGN variants also have specialized roles in addition to their overlapping functionality.

Eye development
This overlapping functionality may seem redundant or even deleterious; however, these proteins are integral to various cellular processes, especially differentiation and embryogenesis, as they provide a means for dynamic chromatin modeling. For example, in the mouse embryo, HMGN1, HMGN2, and HMGN3 are all involved during ocular development. HMGN1 expression is elevated during the initial stages of eye development in progenitor cells, but is decreased in newly formed and fated cells, such as lens fiber cells. HMGN2, in contrast, stays elevated in both embryonic and adult eye cells. HMGN3 was found to be especially elevated at 2 weeks of age in the inner nuclear and ganglion cells. This shows there is an uneven distribution of HMGNs in pre-fated and adult cells.

Brain / CNS development
In human brain development, HMGNs have been shown to be a critical component of neural differentiation and are elevated in neural stem cells (neural progenitor cells). For example, in a knock-down study, loss of HMGN1, HMGN2, and HMGN3 resulted in a lower population of astrocyte cells and a higher population of neural progenitor cells. In oligodendrocyte differentiation HMGNs are critical: when HMGN1 and HMGN2 are both knocked out, the population of oligodendrocytes in spinal tissue is reduced by 65%. However, due to functional compensation, this effect is not observed when only HMGN1 or only HMGN2 is knocked out. This observation is not just correlation: ChIP-seq analysis shows that the chromatin at the OLIG1 and OLIG2 genes (transcription factors involved in oligodendrocyte differentiation) is in an open conformation and has HMGNs bound to the nucleosomes. It can be inferred that this redundancy is actually beneficial, as the presence of at least one HMGN variant vastly improves tissue differentiation and development.

See also
High mobility group

References

External links

Transcription factors
HMGN
Chemistry,Biology
2,697
11,851,594
https://en.wikipedia.org/wiki/Pseudocercospora%20kaki
Pseudocercospora kaki is a fungal plant pathogen that causes leaf spot of persimmon. It was originally found on leaves of Diospyros kaki (Oriental persimmon) in Taiwan. Some examples of other host species are Diospyros hispida, Diospyros lotus (date-plum, Caucasian persimmon), Diospyros texana (Texas persimmon, Mexican persimmon), and Diospyros melanoxylon (Coromandel ebony).

References

Fungal tree pathogens and diseases Fruit tree diseases kaki Fungus species
Pseudocercospora kaki
Biology
126
5,634,407
https://en.wikipedia.org/wiki/Shape%20correction%20function
The shape correction function is the ratio of the surface area of a growing organism to that of an isomorph, as a function of the volume. The shape of the isomorph is taken to be equal to that of the organism at a given reference volume, so for that particular volume the surface areas are also equal and the shape correction function has value one. For a volume $V$ and reference volume $V_d$, the shape correction function $\mathcal{M}(V)$ equals:

V0-morphs: $\mathcal{M}(V) = \left(\frac{V_d}{V}\right)^{2/3}$
V1-morphs: $\mathcal{M}(V) = \left(\frac{V}{V_d}\right)^{1/3}$
Isomorphs: $\mathcal{M}(V) = 1$

Static mixtures between a V0- and a V1-morph can be found as: $\mathcal{M}(V) = (1-\delta)\left(\frac{V_d}{V}\right)^{2/3} + \delta\left(\frac{V}{V_d}\right)^{1/3}$ for $0 \le \delta \le 1$.

The shape correction function is used in Dynamic Energy Budget theory to correct equations for isomorphs to organisms that change shape during growth. The conversion is necessary for accurately modelling food (substrate) acquisition and mobilization of reserve for use by metabolism.

References

Developmental biology Metabolism
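A minimal numerical sketch of the formulas above (a hypothetical helper, assuming the reconstructed expressions; delta = 0 gives a pure V0-morph, delta = 1 a pure V1-morph, and an isomorph corresponds to M(V) = 1):

#include <math.h>
#include <stdio.h>

/* Shape correction function M(V) for a static mixture of a V0- and a
 * V1-morph, relative to reference volume Vd. A V0-morph has constant
 * surface area; a V1-morph has surface area proportional to volume. */
double shape_correction(double V, double Vd, double delta) {
    double v0 = pow(Vd / V, 2.0 / 3.0);
    double v1 = pow(V / Vd, 1.0 / 3.0);
    return (1.0 - delta) * v0 + delta * v1;
}

int main(void) {
    /* At the reference volume every morph has M = 1. */
    printf("%f\n", shape_correction(1.0, 1.0, 0.5));  /* 1.000000 */
    printf("%f\n", shape_correction(8.0, 1.0, 1.0));  /* 2.000000 (V1-morph) */
    printf("%f\n", shape_correction(8.0, 1.0, 0.0));  /* 0.250000 (V0-morph) */
    return 0;
}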
Shape correction function
Chemistry,Biology
177
341,598
https://en.wikipedia.org/wiki/Cybercrime
Cybercrime encompasses a wide range of criminal activities carried out using digital devices and/or networks. These crimes involve the use of technology to commit fraud, identity theft, data breaches, the spread of computer viruses, scams, and other malicious acts. Cybercriminals exploit vulnerabilities in computer systems and networks to gain unauthorized access, steal sensitive information, disrupt services, and cause financial or reputational harm to individuals, organizations, and governments. In 2000, the tenth United Nations Congress on the Prevention of Crime and the Treatment of Offenders classified cyber crimes into five categories: unauthorized access, damage to computer data or programs, sabotage to hinder the functioning of a computer system or network, unauthorized interception of data within a system or network, and computer espionage. Internationally, both state and non-state actors engage in cybercrimes, including espionage, financial theft, and other cross-border crimes. Cybercrimes crossing international borders and involving the actions of at least one nation-state are sometimes referred to as cyberwarfare. Warren Buffett has described cybercrime as the "number one problem with mankind", saying that it "poses real risks to humanity". The World Economic Forum's (WEF) 2020 Global Risks Report highlighted that organized cybercrime groups are joining forces to commit criminal activities online, while estimating the likelihood of their detection and prosecution in the US to be less than 1 percent. There are also many privacy concerns surrounding cybercrime when confidential information is intercepted or disclosed, legally or otherwise. The World Economic Forum's 2023 Global Risks Report ranked cybercrime as one of the top 10 risks facing the world today and for the next 10 years. If viewed as a nation-state, cybercrime would count as the third-largest economy in the world. Cybercrime is predicted to cause over 9 trillion US dollars in damages worldwide in 2024.

Classifications
Computer crime encompasses a broad range of activities, including computer fraud, financial crimes, scams, cybersex trafficking, and ad-fraud.

Computer fraud
Computer fraud is the act of using a computer to take or alter electronic data, or to gain unlawful use of a computer or system. Computer fraud that involves the use of the internet is also called internet fraud. The legal definition of computer fraud varies by jurisdiction, but typically involves accessing a computer without permission or authorization. Forms of computer fraud include hacking into computers to alter information, distributing malicious code such as computer worms or viruses, installing malware or spyware to steal data, phishing, and advance-fee scams. Other forms of fraud may be committed using computer systems, including bank fraud, carding, identity theft, extortion, and theft of classified information. These types of crimes often result in the loss of personal or financial information.

Fraud factory
A fraud factory is a large fraud organization, usually combining cyber fraud with human trafficking operations.

Cyberterrorism
The term cyberterrorism refers to acts of terrorism committed through the use of cyberspace or computer resources. Acts of disruption of computer networks and personal computers through viruses, worms, phishing, malicious software, hardware, or programming scripts can all be forms of cyberterrorism.
Government officials and information technology (IT) security specialists have documented a significant increase in network problems and server scams since early 2001. In the United States there is increasing concern from agencies such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA).

Cyberextortion
Cyberextortion occurs when a website, e-mail server, or computer system is subjected to or threatened with attacks by malicious hackers, often through denial-of-service attacks. Cyberextortionists demand money in return for promising to stop the attacks and provide "protection". According to the FBI, cyberextortionists are increasingly attacking corporate websites and networks, crippling their ability to operate, and demanding payments to restore their service. More than 20 cases are reported each month to the FBI, and many go unreported in order to keep the victim's name out of the public eye. Perpetrators often use a distributed denial-of-service attack. However, other cyberextortion techniques exist, such as doxing and bug poaching. An example of cyberextortion was the Sony hack of 2014.

Ransomware
Ransomware is a type of malware used in cyberextortion to restrict access to files, sometimes threatening permanent data erasure unless a ransom is paid. Ransomware is a global issue, with more than 300 million attacks worldwide in 2021. According to the 2022 Unit 42 Ransomware Threat Report, in 2021 the average ransom demand in cases handled by Norton climbed 144 percent to $2.2 million, and there was an 85 percent increase in the number of victims who had their personal information shown on dark-web information dumps. A loss of nearly $400 million in 2021 and 2022 is just one of the statistics showing the impact of ransomware attacks on everyday people.

Cybersex trafficking
Cybersex trafficking is the transportation of victims for such purposes as coerced prostitution or the live streaming of coerced sexual acts or rape on webcam. Victims are abducted, threatened, or deceived and transferred to "cybersex dens". The dens can be in any location where the cybersex traffickers have a computer, tablet, or phone with an internet connection. Perpetrators use social media networks, video conferences, dating pages, online chat rooms, apps, dark-web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of cybersex incidents are sent to authorities annually. New legislation and police procedures are needed to combat this type of cybercrime. There are an estimated 6.3 million victims of cybersex trafficking, according to a recent report by the International Labour Organization. This number includes about 1.7 million child victims. An example of cybersex trafficking is the 2018–2020 Nth room case in South Korea.

Cyberwarfare
According to the U.S. Department of Defense, cyberspace has emerged as an arena for national-security threats through several recent events of geostrategic importance, including the attack on Estonia's infrastructure in 2007, allegedly by Russian hackers. In August 2008, Russia again allegedly conducted cyberattacks, this time against Georgia. Fearing that such attacks may become a normal part of future warfare among nation-states, military commanders see a need to develop cyberspace operations.

Computers as a tool
When an individual is the target of cybercrime, the computer is often the tool rather than the target.
These crimes, which typically exploit human weaknesses, usually do not require much technical expertise; they are the types of crimes which have existed for centuries in the offline world. Criminals have simply been given a tool that increases their pool of potential victims and makes them all the harder to trace and apprehend. Crimes that use computer networks or devices to advance other ends include:
Fraud and identity theft (although this increasingly uses malware, hacking or phishing, making it an example of "computer as target" as well as "computer as tool")
Information warfare
Phishing scams
Spam
Propagation of illegal, obscene, or offensive content, including harassment and threats

The unsolicited sending of bulk email for commercial purposes (spam) is unlawful in some jurisdictions. Phishing is mostly propagated via email. Phishing emails may contain links to other websites that are affected by malware, or they may contain links to fake online banking or other websites used to steal private account information.

Obscene or offensive content
The content of websites and other electronic communications may be distasteful, obscene, or offensive for a variety of reasons. In some instances, it may be illegal. What content is unlawful varies greatly between countries, and even within nations. It is a sensitive area in which the courts can become involved in arbitrating between groups with strong beliefs. One area of internet pornography that has been the target of the strongest efforts at curtailment is child pornography, which is illegal in most jurisdictions in the world.

Ad-fraud
Ad-frauds are particularly popular among cybercriminals, as such frauds are lucrative and unlikely to be prosecuted. Jean-Loup Richet, a professor at the Sorbonne Business School, classified the large variety of ad-frauds committed by cybercriminals into three categories: identity fraud, attribution fraud, and ad-fraud services. Identity fraud aims to impersonate real users and inflate audience numbers. The techniques used for identity fraud include traffic from bots (coming from a hosting company, a data center, or compromised devices); cookie stuffing; falsification of user characteristics, such as location and browser type; fake social traffic (misleading users on social networks into visiting the advertised website); and fake social media accounts that make a bot appear legitimate. Attribution fraud impersonates the activities of real users, such as clicks and conversions. Many ad-fraud techniques belong to this category: the use of hijacked and malware-infected devices as part of a botnet; click farms (companies where low-wage employees are paid to click or engage in conversations); incentivized browsing; video placement abuse (delivered in display banner slots); hidden ads (which will never be viewed by real users); domain spoofing (ads served on a fake website); and clickjacking, in which the user is forced to click on an ad. Ad-fraud services include all online infrastructure and hosting services that might be needed to undertake identity or attribution fraud. Services can involve the creation of spam websites (fake networks of websites that provide artificial backlinks); link building services; hosting services; or fake and scam pages impersonating a famous brand.

Online harassment
Whereas content may be offensive in a non-specific way, harassment directs obscenities and derogatory comments at specific individuals, often focusing on gender, race, religion, nationality, or sexual orientation.
Committing a crime using a computer can lead to an enhanced sentence. For example, in the case of United States v. Neil Scott Kramer, the defendant was given an enhanced sentence according to the U.S. Sentencing Guidelines Manual §2G1.3(b)(3) for his use of a cell phone to "persuade, induce, entice, coerce, or facilitate the travel of, the minor to engage in prohibited sexual conduct." Kramer appealed the sentence on the grounds that there was insufficient evidence to convict him under this statute, because his charge included persuading through a computer device and his cellular phone technically is not a computer. Although Kramer tried to argue this point, the U.S. Sentencing Guidelines Manual states that the term "computer" means "an electronic, magnetic, optical, electrochemical, or other high-speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device." In the United States, at least 41 states have passed laws and regulations that regard extreme online harassment as a criminal act. These acts can also be prosecuted on the federal level under US Code Title 18 Section 2261A, which states that using computers to threaten or harass can lead to a sentence of up to 20 years. Several countries besides the US have also created laws to combat online harassment. In China, a country with over 20 percent of the world's internet users, the Legislative Affairs Office of the State Council passed a strict law against cyberbullying in response to the Human Flesh Search Engine bullying incident. The United Kingdom passed the Malicious Communications Act, under which sending electronic messages or letters that the government deems "indecent or grossly offensive" and/or language intended to cause "distress and anxiety" can lead to a prison sentence of six months and a potentially large fine. Australia, while not directly addressing the issue of harassment, includes most forms of online harassment under the Criminal Code Act of 1995: using telecommunications to send threats, harass, or cause offense is a direct violation of this act. Although freedom of speech is protected by law in most democratic societies, it does not include all types of speech. Spoken or written threats can be criminalized because they harm or intimidate, and this applies to online or network-related threats as well. Cyberbullying has increased drastically with the growing popularity of online social networking. As of January 2020, 44 percent of adult internet users in the United States had "personally experienced online harassment". Online harassment of children often has negative and even life-threatening effects. According to a 2021 survey, 41 percent of children develop social anxiety, 37 percent develop depression, and 26 percent have suicidal thoughts. The United Arab Emirates was found to have purchased the NSO Group's mobile spyware Pegasus for mass surveillance and a campaign of harassment of prominent activists and journalists, including Ahmed Mansoor, Princess Latifa, Princess Haya, and others. Ghada Owais was one of the many high-profile female journalists and activists who were targeted. She filed a lawsuit against UAE ruler Mohamed bin Zayed Al Nahyan along with other defendants, accusing them of sharing her photos online.

Drug trafficking
Darknet markets are used to buy and sell recreational drugs online.
Some drug traffickers use encrypted messaging tools to communicate with drug mules or potential customers. The dark-web site Silk Road, which started operations in 2011, was the first major online marketplace for drugs. It was permanently shut down in October 2013 by the FBI and Europol. After Silk Road 2.0 went down, Silk Road 3 Reloaded emerged; however, it was just an older marketplace named Diabolus Market that adopted the Silk Road name in order to get more exposure from the brand's earlier success. Darknet markets have seen a rise in traffic in recent years for many reasons, such as anonymous purchases and a system of reviews by other buyers. There are many ways in which darknet markets can financially drain individuals. Vendors and customers alike go to great lengths to keep their identities secret while online. Commonly used tools for hiding their online presence include virtual private networks (VPNs), Tails, and the Tor Browser. Darknet markets entice customers by making them feel comfortable. Although people can easily gain access to a Tor browser, actually gaining access to an illicit market is not as simple as typing it into a search engine, as one would with Google. Darknet markets have special links that change frequently, ending in .onion as opposed to the typical .com, .net, and .org domain extensions. To add to privacy, the most prevalent currency on these markets is Bitcoin, which allows transactions to be anonymous. A problem that marketplace users sometimes face is exit scamming: a vendor with a high rating acts as if they are selling on the market, has users pay for products they never receive, then closes their account after receiving money from multiple buyers, never sending what was paid for. The vendors, all of whom are involved in illegal activities, have no reason not to engage in exit scamming when they no longer want to be vendors. In 2019, an entire market known as Wall Street Market allegedly exit scammed, stealing $30 million in bitcoin. The FBI has cracked down on these markets. In July 2017, the FBI seized one of the biggest markets, commonly called AlphaBay, which re-opened in August 2021 under the control of DeSnake, one of the original administrators. Investigators pose as buyers and order products from darknet vendors in the hope that the vendors leave a trail the investigators can follow. In one case an investigator posed as a firearms seller, and for six months people purchased from them and provided home addresses. The FBI was able to make over a dozen arrests during this six-month investigation. Another crackdown targeted vendors selling fentanyl and opiates. With thousands of people dying each year due to drug overdose, investigators have made internet drug sales a priority. Many vendors do not realize the extra criminal charges that go along with selling drugs online, such as money laundering and illegal use of the mail. In 2019, a vendor was sentenced to 10 years in prison after selling cocaine and methamphetamine under the name JetSetLife. But despite the large amount of time investigators spend tracking down people, in 2018 only 65 suspects who bought and sold illegal goods on some of the biggest markets were identified. Meanwhile, thousands of transactions take place daily on these markets.
Emerging trends in cybercrime
Through rapid technological advances, the tactics of cybercriminals are ever evolving, with artificial intelligence (AI) now being exploited for criminal activity. These trends highlight the dynamic nature of cybercrime and emphasize the need for evolving countermeasures to combat future online threats. AI has been used to replicate voices in order to impersonate people, fraudulently obtain money, and commit other financially related crimes. The dark web is seeing an increase in artificial chatbots specifically designed to aid hackers and assist with various phishing techniques. Cybercriminals can now use AI deepfakes to pose as individuals who may be connected to, or have authority over, the victim of the attack. Personal data will be more accessible than ever in the future, with a history of almost everything available on black markets, fueling issues such as identity theft, financial fraud, and targeted advertisements.

Notable incidents
One of the highest-profile banking computer crimes occurred over a course of three years beginning in 1970. The chief teller at the Park Avenue branch of New York's Union Dime Savings Bank embezzled over $1.5 million from hundreds of accounts. In 2014, the Sony Pictures Entertainment hack not only exposed sensitive company data but also led to extortion demands, marking one of the most publicized corporate cyberattacks to date. A hacking group called MOD (Masters of Deception) allegedly stole passwords and technical data from Pacific Bell, Nynex, and other telephone companies as well as several big credit agencies and two major universities. The damage caused was extensive; one company, Southwestern Bell, suffered losses of $370,000. In 1983, a 19-year-old UCLA student used his PC to break into a Defense Department international communications system. Between 1995 and 1998 the Newscorp satellite pay-to-view encrypted SKY-TV service was hacked several times during an ongoing technological arms race between a pan-European hacking group and Newscorp. The original motivation of the hackers was to watch Star Trek reruns in Germany, which was something Newscorp did not have the copyright permission to allow. On 26 March 1999, the Melissa worm infected a document on a victim's computer, then automatically emailed that document and a copy of the virus to other people. In February 2000, an individual going by the alias of MafiaBoy began a series of denial-of-service attacks against high-profile websites, including Yahoo!, Dell, Inc., E*TRADE, eBay, and CNN. About 50 computers at Stanford University, along with computers at the University of California at Santa Barbara, were among the zombie computers sending pings in the distributed denial-of-service attacks. On 3 August 2000, Canadian federal prosecutors charged MafiaBoy with 54 counts of illegal access to computers. The Stuxnet worm corrupted SCADA microprocessors, particularly the types used in Siemens centrifuge controllers. The Russian Business Network (RBN) was registered as an internet site in 2006. Initially, much of its activity was legitimate, but apparently the founders soon discovered that it was more profitable to host illegitimate activities and to offer its services to criminals. The RBN has been described by VeriSign as "the baddest of the bad".
It provides web hosting services and internet access to all kinds of criminal and objectionable activities that earn up to $150 million in one year. It specializes in personal identity theft for resale. It is the originator of MPack and an alleged operator of the now defunct Storm botnet. On 2 March 2010, Spanish investigators arrested three men suspected of infecting over 13 million computers around the world. The botnet of infected computers included PCs inside more than half of the Fortune 1000 companies and more than 40 major banks, according to investigators. In August 2010, the US Department of Homeland Security shut down the international pedophile ring Dreamboard. The website had approximately 600 members and may have distributed up to 123 terabytes of child pornography (roughly equivalent to 16,000 DVDs). To date this is the single largest US prosecution of an international child pornography ring; 52 arrests were made worldwide. In January 2012, Zappos.com experienced a security breach compromising the credit card numbers, personal information, and billing and shipping addresses of as many as 24 million customers. In June 2012, LinkedIn and eHarmony were attacked, and 65 million password hashes were compromised. Thirty thousand passwords were cracked, and 1.5 million eHarmony passwords were posted online. In December 2012, the Wells Fargo website experienced a denial-of-service attack that potentially compromised 70 million customers and 8.5 million active viewers. Other banks thought to be compromised included Bank of America, J. P. Morgan, U.S. Bank, and PNC Financial Services. On 23 April 2013, the Twitter account of the Associated Press was hacked. The hacker posted a hoax tweet about fictitious attacks on the White House that they claimed left then-President Obama injured. The hoax tweet resulted in a brief plunge of 130 points in the Dow Jones Industrial Average, the removal of $136 billion from the S&P 500 index, and the temporary suspension of AP's Twitter account. The Dow Jones later restored its session gains. In May 2017, 74 countries logged a ransomware cybercrime called "WannaCry". Illicit access to camera sensors, microphone sensors, phonebook contacts, all internet-enabled apps, and metadata of mobile telephones running Android and iOS was reportedly provided by Israeli spyware that was found to be in operation in at least 46 nation-states around the world. Journalists, royalty, and government officials were among the targets. Earlier accusations that Israeli weapons companies were meddling in international telephony and smartphones have been eclipsed by the 2018 Pegasus spyware revelations. In December 2019, US intelligence officials and The New York Times revealed that ToTok, a messaging application widely used in the United Arab Emirates, is a spying tool for the UAE. An investigation revealed that the Emirati government was attempting to track every conversation, movement, relationship, appointment, sound, and image of those who installed the app on their phones. Combating computer crime Due to cybercriminals using the internet for cross-border attacks and crimes, the process of prosecuting cybercriminals has been difficult. The number of vulnerabilities that a cybercriminal could use as points of opportunity to exploit has also increased over the years. From 2008 to 2014 alone, there has been a 17.75% increase in vulnerabilities across all online devices. 
The internet's expansive reach causes the damage inflicted to people to be magnified since many methods of cybercrime have the opportunity to reach many people. The availability of virtual spaces has allowed cybercrime to become an everyday occurrence. In 2018, the Internet Crime Complaint Center received 351,937 complaints of cybercrime, which led to $2.7 billion lost. Investigation In a criminal investigation, a computer can be a source of evidence (see digital forensics). Even when a computer is not directly used for criminal purposes, it may contain records of value to criminal investigators in the form of a logfile. In many countries, Internet Service Providers are required by law to keep their logfiles for a predetermined amount of time. There are many ways for cybercrime to take place, and investigations tend to start with an IP Address trace; however, that does not necessarily enable detectives to solve a case. Different types of high-tech crime may also include elements of low-tech crime, and vice versa, making cybercrime investigators an indispensable part of modern law enforcement. Methods of cybercrime detective work are dynamic and constantly improving, whether in closed police units or in the framework of international cooperation. In the United States, the FBI and the Department of Homeland Security (DHS) are government agencies that combat cybercrime. The FBI has trained agents and analysts in cybercrime placed in their field offices and headquarters. In the DHS, the Secret Service has a Cyber Intelligence Section that works to target financial cybercrimes. They combat international cybercrime and work to protect institutions such as banks from intrusions and information breaches. Based in Alabama, the Secret Service and the Alabama Office of Prosecution Services work together to train professionals in law enforcement at the National Computer Forensic Institute. The NCFI provides "state and local members of the law enforcement community with training in cyber incident response, investigation, and forensic examination in cyber incident response, investigation, and forensic examination." Investigating cyber crime within the United States and globally often requires partnerships. Within the United States, cyber crime may be investigated by law enforcement, the Department of Homeland Security, among other federal agencies. However, as the world becomes more dependent on technology, cyber attacks and cyber crime are going to expand as threat actors will continue to exploit weaknesses in protection and existing vulnerabilities to achieve their end goals, often being data theft or exfiltration. To combat cybercrime, the United States Secret Service maintains an Electronic Crimes Task Force which extends beyond the United States as it helps to locate threat actors that are located globally and performing cyber related crimes within the United States. The Secret Service is also responsible for the National Computer Forensic Institute which allows law enforcement and people of the court to receive cyber training and information on how to combat cyber crime. The United States Immigration and Customs Enforcement is responsible for the Cyber Crimes Center (C3) providing cyber crime related services for federal, state, local and international agencies. Finally, the United States also has resources relating to Law Enforcement Cyber Incident Reporting to allow local and state agencies to understand how, when, and what should be reported as a cyber incident to the federal government. 
Because cybercriminals commonly use encryption and other techniques to hide their identity and location, it can be difficult to trace a perpetrator after a crime is committed, so prevention measures are crucial. Prevention The Department of Homeland Security also instituted the Continuous Diagnostics and Mitigation (CDM) Program. The CDM Program monitors and secures government networks by tracking network risks and informing system personnel so that they can take action. In an attempt to catch intrusions before the damage is done, the DHS created the Enhanced Cybersecurity Services (ECS). The Cyber Security and Infrastructure Security Agency approves the private partners that provide intrusion detection and prevention services through the ECS. Cybersecurity professionals have been skeptical of prevention-focused strategies. The mode of use of cybersecurity products has also been called into question. Shuman Ghosemajumder has argued that individual companies using a combination of products for security is not a scalable approach and has advocated for the use of cybersecurity technology primarily at the platform level. On a personal level, there are some strategies available to defend against cybercrime: Keeping your software and operating system update to benefit from security patches Using anti-virus software that can detect and remove malicious threats Use strong passwords with a variety of characters that aren't easy to guess Refrain from opening attachments from spam emails Do not click on links from scam emails Do not give out personal information over the internet unless you can verify that the destination is safe Contact companies about suspicious requests of your information Legislation Because of weak laws, cybercriminals operating from developing countries can often evade detection and prosecution. In countries such as the Philippines, laws against cybercrime are weak or sometimes nonexistent. Cybercriminals can then strike from across international borders and remain undetected. Even when identified, these criminals can typically avoid being extradited to a country such as the US that has laws that allow for prosecution. For this reason, agencies such as the FBI have used deception and subterfuge to catch criminals. For example, two Russian hackers had been evading the FBI for some time. The FBI set up a fake computing company based in Seattle, Washington. They proceeded to lure the two Russian men into the United States by offering them work with this company. Upon completion of the interview, the suspects were arrested. Clever tricks like that are sometimes a necessary part of catching cybercriminals when weak laws and limited international cooperation make it impossible otherwise. The first cyber related law in the United States was the Privacy Act of 1974 which was only required for federal agencies to follow to ensure privacy and protection of personally identifiable information (PII). However, since 1974, in the United States other laws and regulations have been drafted and implemented, but there is still a gap in responding to current cyber related crime. The most recent cyber related law, according to NIST, was the NIST Small Business Cybersecurity Act, which came out in 2018, and provides guidelines to small businesses to ensure that cybersecurity risks are being identified and addressed accurately. During President Barack Obama's presidency three cybersecurity related bills were signed into order in December 2014. 
The first was the Federal Information Security Modernization Act of 2014, the second was the National Cybersecurity Protection Act of 2014, and the third was the Cybersecurity Enhancement Act of 2014. Although the Federal Information Security Modernization Act of 2014 was just an update of an older version of the act, it focused on the practices federal agencies were to abide by relating to cybersecurity. While the National Cybersecurity Protection Act of 2014 was aimed toward increasing the amount of information sharing that occurs across the federal and private sector to improve cybersecurity amongst the industries. Finally, the Cybersecurity Enhancement Act of 2014 relates to cybersecurity research and education. In April 2015, then-President Barack Obama released an executive order that allows the US to freeze the assets of convicted cybercriminals and block their economic activity within the United States. The European Union adopted cybercrime directive 2013/40/EU, which was elaborated upon in the Council of Europe's Convention on Cybercrime. It is not only the US and the European Union that have been introducing measures against cybercrime. On 31 May 2017, China announced that its new cybersecurity law was taking effect. In Australia, legislation to combat cybercrime includes the Criminal Code Act 1995, the Telecommunications Act 1997, and the Enhancing Online Safety Act 2015. Penalties Penalties for computer-related crimes in New York State can range from a fine and a short period of jail time for a Class A misdemeanor, such as unauthorized use of a computer, up to 3 to 15 years in prison for a Class C felony, such as computer tampering in the first degree. However, some former cybercriminals have been hired as information security experts by private companies due to their inside knowledge of computer crime, a phenomenon which theoretically could create perverse incentives. A possible counter to this is for courts to ban convicted hackers from using the internet or computers, even after they have been released from prisonthough as computers and the internet become more and more central to everyday life, this type of punishment becomes more and more draconian. Nuanced approaches have been developed that manage cyber offenders' behavior without resorting to total computer or internet bans. These approaches involve restricting individuals to specific devices which are subject to monitoring or searches by probation or parole officers. Awareness Cybercrime is becoming more of a threat in our society. According to Accenture's State of Cybersecurity, security attacks increased 31% from 2020 to 2021. The number of attacks per company increased from 206 to 270. Due to this rising threat, the importance of raising awareness about measures to protect information and the tactics criminals use to steal that information is paramount. However, despite cybercrime becoming a mounting problem, many people are not aware of the severity of this problem. This could be attributed to a lack of experience and knowledge of technological issues. There are 1.5 million cyber-attacks annually, which means that there are over 4,000 attacks a day, 170 attacks every hour, or nearly three attacks every minute, with studies showing that only 16 percent of victims had asked the people who were carrying out the attacks to stop. Comparitech's 2023 study shows that cybercrime victims have peaked to 71 million annually, which means there is a cyberattack every 39 seconds. 
Anybody who uses the internet for any reason can be a victim, which is why it is important to be aware of how to be protected while online. Intelligence As cybercrime proliferated, a professional ecosystem evolved to support individuals and groups seeking to profit from cybercrime activities. The ecosystem has become quite specialized, and includes malware developers, botnet operators, professional cybercrime groups, groups specializing in the sale of stolen content, and so forth. A few of the leading cybersecurity companies have the skills and resources to follow the activities of these individuals and groups. A wide variety of information that can be used for defensive purposes is available from these sources, for example, technical indicators such as hashes of infected files and malicious IPs/URLs, as well as strategic information profiling the goals and techniques of the profiled groups. Much of it is freely available, but consistent, ongoing access typically requires a subscription. Some in the corporate sector see a crucial role for artificial intelligence in the future development of cybersecurity. Interpol's Cyber Fusion Center began a collaboration with key cybersecurity players to distribute information on the latest online scams, cyber threats, and risks to internet users. Since 2017, reports on social engineering frauds, ransomware, phishing, and other attacks have been distributed to security agencies in over 150 countries. Spread of cybercrime The increasing prevalence of cybercrime has resulted in more attention to computer crime detection and prosecution. Hacking has become less complex as hacking communities disseminate their knowledge through the internet. Blogs and social networks have contributed substantially to information sharing, so that beginners can benefit from older hackers' knowledge and advice. Furthermore, hacking is cheaper than ever. Before the cloud computing era, in order to spam or scam, one needed a variety of resources, such as a dedicated server; skills in server management, network configuration, and network maintenance; and knowledge of internet service provider standards. By comparison, a software-as-a-service for mail is a scalable and inexpensive bulk e-mail-sending service for marketing purposes that could be easily set up for spam. Cloud computing could help cybercriminals leverage their attacks, whether brute-forcing a password, improving the reach of a botnet, or facilitating a spamming campaign. Agencies ASEAN Australian High Tech Crime Centre Cyber Crime Investigation Cell, a wing of Mumbai Police, India Cyber Crime Unit (Hellenic Police), established in Greece in 2004 EUROPOL INTERPOL National Cyber Crime Unit, in the United Kingdom National Security Agency, in the United States National Special Crime Unit, in Denmark. National White Collar Crime Center, in the United States Cyber Terror Response Center - Korea National Police Agency Cyber Police Department - Japan National Police Agency Siber suçlarla mücadele - Turkish Cyber Agency See also References Cyber Crime. (n.d.). [Folder]. Federal Bureau of Investigation. Retrieved April 24, 2024, from https://www.fbi.gov/investigate/cyber Herrero, J., Torres, A., Vivas, P., & Urueña, A. (2022). Smartphone Addiction, Social Support, and Cybercrime Victimization: A Discrete Survival and Growth Mixture Model: Psychosocial Intervention. Psychosocial Intervention, 31(1), 59–66. https://doi.org/10.5093/pi2022a3 Further reading Balkin, J., Grimmelmann, J., Katz, E., Kozlovski, N., Wagman, S. 
& Zarsky, T. (2006) (eds) Cybercrime: Digital Cops in a Networked Environment, New York University Press, New York. Bowker, Art (2012) "The Cybercrime Handbook for Community Corrections: Managing Risk in the 21st Century" Charles C. Thomas Publishers, Ltd. Springfield. Brenner, S. (2007) Law in an Era of Smart Technology, Oxford: Oxford University Press Broadhurst, R., and Chang, Lennon Y.C. (2013) "Cybercrime in Asia: trends and challenges", in B. Hebenton, SY Shou, & J. Liu (eds), Asian Handbook of Criminology (pp. 49–64). New York: Springer () Chang, L.Y. C. (2012) Cybercrime in the Greater China Region: Regulatory Responses and Crime Prevention across the Taiwan Strait. Cheltenham: Edward Elgar. () Chang, Lennon Y.C., & Grabosky, P. (2014) "Cybercrime and establishing a secure cyber world", in M. Gill (ed) Handbook of Security (pp. 321–339). NY: Palgrave. Csonka P. (2000) Internet Crime; the Draft council of Europe convention on cyber-crime: A response to the challenge of crime in the age of the internet? Computer Law & Security Report Vol.16 no.5. Easttom, C. (2010) Computer Crime Investigation and the Law Fafinski, S. (2009) Computer Misuse: Response, regulation and the law Cullompton: Willan Glenny, M. DarkMarket : cyberthieves, cybercops, and you, New York, NY : Alfred A. Knopf, 2011. Grabosky, P. (2006) Electronic Crime, New Jersey: Prentice Hall Halder, D., & Jaishankar, K. (2016). Cyber Crimes against Women in India. New Delhi: SAGE Publishing. . Jaishankar, K. (Ed.) (2011). Cyber Criminology: Exploring Internet Crimes and Criminal behavior. Boca Raton, FL, US: CRC Press, Taylor, and Francis Group. McQuade, S. (2006) Understanding and Managing Cybercrime, Boston: Allyn & Bacon. McQuade, S. (ed) (2009) The Encyclopedia of Cybercrime, Westport, CT: Greenwood Press. Parker D (1983) Fighting Computer Crime, U.S.: Charles Scribner's Sons. Pattavina, A. (ed) Information Technology and the Criminal Justice System, Thousand Oaks, CA: Sage. Richet, J.L. (2013) From Young Hackers to Crackers, International Journal of Technology and Human Interaction (IJTHI), 9(3), 53–62. Robertson, J. (2 March 2010). Authorities bust 3 in infection of 13m computers. Retrieved 26 March 2010, from Boston News: Boston.com Rolón, D. N. Control, vigilancia y respuesta penal en el ciberespacio, Latin American's New Security Thinking, Clacso, 2014, pp. 167/182 Walden, I. (2007) Computer Crimes and Digital Investigations, Oxford: Oxford University Press. Wall, D.S. (2007) Cybercrimes: The transformation of crime in the information age, Cambridge: Polity. Williams, M. (2006) Virtually Criminal: Crime, Deviance and Regulation Online, Routledge, London. Yar, M. (2006) Cybercrime and Society, London: Sage. External links International Journal of Cyber Criminology Common types of cyber attacks Countering ransomware attacks Government resources Cybercrime.gov from the United States Department of Justice National Institute of Justice Electronic Crime Program from the United States Department of Justice FBI Cyber Investigators home page US Secret Service Computer Fraud Australian High Tech Crime Centre UK National Cyber Crime Unit from the National Crime Agency Crime by type Computer security Organized crime activity Harassment and bullying
Cybercrime
Biology
8,411
32,045,633
https://en.wikipedia.org/wiki/Superdense%20carbon%20allotropes
Superdense carbon allotropes are proposed configurations of carbon atoms that result in a stable material with a higher density than diamond. Few hypothetical carbon allotropes denser than diamond are known. All these allotropes can be divided at two groups: the first are hypothetically stable at ambient conditions; the second are high-pressure carbon allotropes which become quasi-stable only at high pressure. Ambient conditions According to the SACADA database, the first group comprises the structures, called hP3, tI12, st12, r8, I41/a, P41212, m32, m32*, t32, t32*, H-carbon and uni. Among them, st12 carbon was proposed as far as 1987 in the work of R. Biswas et al. High-pressure carbon The following allotropes belong to the second group: MP8, OP8, SC4, BC-8 and (9,0). These are hypothetically quasi-stable at the high pressure. BC-8 carbon is not only a superdense allotrope but also one of the oldest hypothetical carbon structures; initially it was proposed in 1984 in the work R. Biswas et al. The MP8 structure proposed in the work J. Sun et al. is almost two times denser than diamond; its density is as high as 7.06 g/cm3 and it is the highest value reported so far. Band gaps All hypothetical superdense carbon allotropes have dissimilar band gaps compared to the others. For example, SC4 is supposed to be a metallic allotrope while st12, m32, m32*, t32, t32* have band gaps larger than 5.0 eV. Carbon tetrahedra These new materials would have structures based on carbon tetrahedra, and represent the densest of such structures. On the opposite end of the density spectrum is a recently theorized tetrahedral structure called T-carbon. This is obtained by replacing carbon atoms in diamond with carbon tetrahedra. In contrast to superdense allotropes, T-carbon would have very low density and hardness. References External links SACADA - Samara Carbon Allotrope Database Allotropes of carbon
Superdense carbon allotropes
Chemistry
478
22,711,153
https://en.wikipedia.org/wiki/Netzeitung
Netzeitung was a German online newspaper produced in Berlin from 2000 to 2009. On 4 January 2010 netzeitung.de had been converted into an automated portal displaying contents from nachrichten.de (an online news portal operated by Tomorrow Focus). Netzeitung had claimed to be the first German newspaper that was completely online, and to have been the most cited news source in Germany in 2005. The paper went online in November 2000 and was started by the same company that publishes the Norwegian online newspaper Nettavisen. In 2006, the paper employed some 60 journalists and reached, according to Michael Maier, then the chief editor, some 1.2 million households per month and was to earn €8 million. According to Google Ad Planner, the site ranked #25 in Germany in monthly visitors of news sites. Chief editor was Domenika Ahlrichs (2007-2009). After 2007 the paper was owned by a subsidiary of the Mecom Group, which in January 2009 sold its German division to M. DuMont Schauberg. References German-language newspapers European news websites German news websites
Netzeitung
Technology
226
513,418
https://en.wikipedia.org/wiki/Integrated%20circuit%20packaging
Integrated circuit packaging is the final stage of semiconductor device fabrication, in which the die is encapsulated in a supporting case that prevents physical damage and corrosion. The case, known as a "package", supports the electrical contacts which connect the device to a circuit board. The packaging stage is followed by testing of the integrated circuit. Design considerations Electrical The current-carrying traces that run out of the die, through the package, and into the printed circuit board (PCB) have very different electrical properties compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself. Therefore, it is important that the materials used as electrical contacts exhibit characteristics like low resistance, low capacitance and low inductance. Both the structure and materials must prioritize signal transmission properties, while minimizing any parasitic elements that could negatively affect the signal. Controlling these characteristics is becoming increasingly important as the rest of technology begins to speed up. Packaging delays have the potential to make up almost half of a high-performance computer's delay, and this bottleneck on speed is expected to increase. Mechanical and thermal The integrated circuit package must resist physical breakage, keep out moisture, and also provide effective heat dissipation from the chip. Moreover, for RF applications, the package is commonly required to shield electromagnetic interference, that may either degrade the circuit performance or adversely affect neighboring circuits. Finally, the package must permit interconnecting the chip to a PCB. The materials of the package are either plastic (thermoset or thermoplastic), metal (commonly Kovar) or ceramic. A common plastic used for this is epoxy-cresol-novolak (ECN). All three material types offer usable mechanical strength, moisture and heat resistance. Nevertheless, for higher-end devices, metallic and ceramic packages are commonly preferred due to their higher strength (which also supports higher pin-count designs), heat dissipation, hermetic performance, or other reasons. Generally, ceramic packages are more expensive than similar plastic packages. Some packages have metallic fins to enhance heat transfer, but these take up space. Larger packages also allow for more interconnecting pins. Economic Cost is a factor in selection of integrated circuit packaging. Typically, an inexpensive plastic package can dissipate heat up to 2W, which is sufficient for many simple applications, though a similar ceramic package can dissipate up to 50W in the same scenario. As the chips inside the package get smaller and faster, they also tend to get hotter. As the subsequent need for more effective heat dissipation increases, the cost of packaging rises along with it. Generally, the smaller and more complex the package needs to be, the more expensive it is to manufacture. Wire bonding can be used instead of techniques such as flip-chip to reduce costs. History Early integrated circuits were packaged in ceramic flat packs, which the military used for many years for their reliability and small size. The other type of packaging used in the 1970s, called the ICP (Integrated Circuit Package), was a ceramic package (sometimes round as the transistor package), with the leads on one side, co-axially with the package axis. 
Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s VLSI pin counts exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by small-outline integrated circuit—a carrier which occupies an area about 30–50% less than an equivalent DIP, with a typical thickness that is 70% less.The next big innovation was the area array package, which places the interconnection terminals throughout the surface area of the package, providing a greater number of connections than previous package types where only the outer perimeter is used. The first area array package was a ceramic pin grid array package. Not long after, the plastic ball grid array (BGA), another type of area array package, became one of the most commonly used packaging techniques. In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline packages (TSOP) replaced PGA packages as the most common for high pin count devices, though PGA packages are still often used for microprocessors. However, industry leaders Intel and AMD transitioned in the 2000s from PGA packages to land grid array (LGA) packages. Ball grid array (BGA) packages have existed since the 1970s, but evolved into flip-chip ball grid array (FCBGA) packages in the 1990s. FCBGA packages allow for much higher pin count than any existing package types. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. Ceramic subtrates for BGA were replaced with organic substrates to reduce costs and use existing PCB manufacturing techniques to produce more packages at a time by using larger PCB panels during manufacturing. Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself. Recent developments consist of stacking multiple dies in single package called SiP, for System In Package, or three-dimensional integrated circuit. Combining multiple dies on a small substrate, often ceramic, is called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes blurry. Common package types Through-hole technology Surface-mount technology Chip carrier Pin grid array Flat package Small Outline Integrated Circuit Chip-scale package Ball grid array Transistor, diode, small pin count IC packages Multi-chip packages Operations For traditional ICs, after wafer dicing, the die is picked from the diced wafer using a vacuum tip or suction cup and undergoes die attachment which is the step during which a die is mounted and fixed to the package or support structure (header). In high-powered applications, the die is usually eutectic bonded onto the package, using e.g. gold-tin or gold-silicon solder (for good heat conduction). 
For low-cost, low-powered applications, the die is often glued directly onto a substrate (such as a printed wiring board) using an epoxy adhesive. Alternatively dies can be attached using solder. These techniques are usually used when the die will be wire bonded; dies with flip chip technology do not use these attachment techniques. IC bonding is also known as die bonding, die attach, and die mount. The following operations are performed at the packaging stage, as broken down into bonding, encapsulation, and wafer bonding steps. Note that this list is not all-inclusive and not all of these operations are performed for every package, as the process is highly dependent on the package type. IC bonding Wire bonding Thermosonic Bonding Down bonding Tape automated bonding Flip chip Quilt packaging Film attaching Spacer attaching Sintering die attach IC encapsulation Baking Plating Lasermarking Trim and form Wafer bonding Sintering die attach is a process that involves placing the semiconductor die onto the substrate and then subjecting it to high temperature and pressure in a controlled environment. See also Advanced packaging (semiconductors) List of electronic component packaging types List of electronics package dimensions Gold–aluminium intermetallic "purple plague" Co-fired ceramic B-staging Potting (electronics) Quilt packaging Electronic packaging Decapping References Semiconductor device fabrication Chip carriers Packaging (microfabrication)
Integrated circuit packaging
Materials_science
1,666
11,517,213
https://en.wikipedia.org/wiki/Jaumea%20carnosa
Jaumea carnosa, known by the common names marsh jaumea, fleshy jaumea, or simply jaumea, is a halophytic salt marsh plant native to the wetlands, coastal sea cliffs and salt marshes of the western coast of North America. Description It is a perennial dicotyledon. It has succulent green leaves on soft pinkish-green stems, not unlike ice plant in appearance. Its stems are weak and long. Flowers are yellow and the peduncle is enlarged below the head. It spreads by an extensive rhizome system. Distribution Jaumea carnosa ranges from British Columbia to northern Baja California, and can be found in wetlands and salt marshes. Some populations are located on the Channel Islands of California. References External links Jepson Manual Treatment, University of California United States Department of Agriculture Plants Profile Calflora photo gallery, University of California Washington State University, intertidal organisms, Jaumea carnosa (Marsh jaumea) photo and commentary Paul Slichter, Members of the Sunflower Family Found West of the Cascade Mountains With Flower Heads Consisting of Both Disc and Ray Flowers, Fleshy Jaumea, Marsh Jaumea Jaumea carnosa photos Tageteae Flora of California Flora of Washington (state) Flora of Oregon Flora of British Columbia Flora of Baja California Plants described in 1831 Halophytes Salt marsh plants Flora without expected TNC conservation status
Jaumea carnosa
Chemistry
296
2,376,929
https://en.wikipedia.org/wiki/RS%20Ophiuchi
RS Ophiuchi (RS Oph) is a recurrent nova system approximately 5,000 light-years away in the constellation Ophiuchus. In its quiet phase it has an apparent magnitude of about 12.5. It has been observed to erupt in 1898, 1933, 1958, 1967, 1985, 2006 and 2021 and reached about magnitude 5 on average. A further two eruptions, in 1907 and 1945, have been inferred from archival data. The recurrent nova is produced by a white dwarf star and a red giant in a binary system. About every 15 years, enough material from the red giant builds up on the surface of the white dwarf to produce a thermonuclear explosion. The white dwarf orbits close to the red giant, with an accretion disc concentrating the overflowing atmosphere of the red giant onto the white dwarf. Properties RS Ophiuchi is a system consisting of a white dwarf with a red giant companion. The stars are in a binary system with an orbital period of around 454 days. Eruptive history The chart below shows when every recorded nova had occurred since the first confirmed one in the year of 1898. 1898 The 1898 eruption was, in fact, not discovered until several years after it happened. Williamina Fleming discovered a nova-like spectrum in the Henry Draper Memorial photographs and announced it as a potential nova in 1904. This diagnosis was affirmed by Edward Charles Pickering in 1905, after which Annie Jump Cannon determined that RS Ophiuchi had likely reached maximum in 1898. 1907 Though the 1907 eruption was not observed during outburst, measurements of a dip in brightness from archival observations suggests that RS Oph underwent an eruption in early 1907 during a time when it was obscured by the sun. 1933 The 1933 outburst was first detected by Eppe Loreta, from Bologna, Italy. Loreta had been observing Y Ophiuchi when he serendipitously noticed a bright object about 50 arcminutes southwest of Y Oph. The detection of this luminous star resulted in the second recorded outburst of RS Oph. An independent discovery of this activity was made several days later by Leslie Peltier (P) while making his routine check of the variable. 1945 The 1945 eruption was also inferred from archival data after the outburst as a result of obscuration from the sun during the peak brightness. This eruption is more certain than that in 1907, as the tail of the eruption was also observed. 1958 The 1958 outburst was detected by Cyrus Fernald, located in Longwood, Florida. Fernald's monthly report for July 1958, containing 345 observations, displays a note in which he comments "Not too good of a month outside of the RS Oph observations (19 in total). It was interesting to watch the change in color as the star faded. It was reddish-yellow the first night, then yellowish-red, and so on. The last observation was the reddest star that I have ever seen." The crimson color of which Fernald speaks is indicative of the strong H-alpha emission displayed in the several days following the outburst. 1967 The 1967 outburst was again detected by Cyrus Fernald (FE), however, Fernald was not given credit for the earliest observation of maximum. For on the same evening, Dr. Max Beyer (BY), located in Hamburg, Germany, observed the variable at 6th magnitude. Due to the 6-hour difference in time zones, Dr. Beyer was credited with the first report. 1985 In January 1985, Warren Morrison of Peterborough, Canada discovered RS Oph to again be in outburst, reaching a maximum brightness of magnitude 5.4. 2006 On 12 February 2006 a new outburst occurred, reaching magnitude 4.5. 
The opportunity was taken to observe it at different wavelengths. It was notably observed with the VLTI by Olivier Chesneau, who discovered an elongated fireball as early as 5.5 days after the explosion (see the figure below). Silicate dust and SiO emissions were observed after eruptuon. 2021 On 8 August 2021, the Brazilian amateur astronomer Alexandre Amorim, from Florianópolis, Brazil detected a new outburst of RS Oph at 21:55 UT and sent a notification to AAVSO. The outbust was confirmed by an independent observation of Keith Geary from Ireland at 22:20 UT. The Fermi Gamma Ray Space Telescope corroborated optical observations made by Amorim and Geary of a new outburst associated with RS Oph, with an estimated visual magnitude of 5.0. It reached a peak visual magnitude of approximately 4.6 the following day. References Bibliography External links Entry at Astronomy Picture of the Day Entry in the Variable Star Index AAVSO Recurrent novae Ophiuchus M-type giants Ophiuchi, RS 162214 Durchmusterung objects
RS Ophiuchi
Astronomy
988
77,426,376
https://en.wikipedia.org/wiki/Pixhawk
Pixhawk is a project responsible for creating open-source standards for the flight controller hardware that can be installed on various unmanned aerial vehicles. Additionally, any flight controller built to the open standards often includes "Pixhawk" in its name and may be referred to as such. Overview An unmanned vehicle's flight controller, also referred to as an FC, FCB (flight control board), FMU (flight management unit), or autopilot, is a combination of hardware and software that is responsible for interfacing with a variety of onboard sensors and control systems in order to facilitate remote control or provide fully autonomous control. Pixhawk-standardized flight controllers are being used for academic, professional, and amateur applications, and are supported by two mainstream autopilot firmware options: PX4 and ArduPilot. Both firmware options allow for a variety of vehicle types through the Pixhawk flight controller system, including configuration options for unmanned boats, rovers, helicopters, planes, VTOLs, and multirotors. Many manufacturers have adopted various iterations of the Pixhawk standard, including Holybro and CubePilot. Refer to the UAV-systems hardware chart for a full list of flight controllers that have fully or partially adopted the Pixhawk standard. Pixhawk flight controllers typically feature one or two microcontrollers. In the case of two microcontrollers, a main flight management processor handles all sensor readings, PID calculations, and other resource-heavy computations, while the other handles input/output operations to external motors, switches and radio control receivers. Onboard sensors include an IMU with a multi-axis accelerometer and gyroscope, magnetometer to use as a compass, and a GPS tracking unit to estimate the vehicle's location. Standards The Pixhawk standards dictate the hardware requirements for manufacturers who are building products to be compatible with the PX4 autopilot software stack. However, due to ArduPilot's adaptation of Pixhawk flight controllers, the standard is able to ensure compatibility with ArduPilot as well. The open standards consist of a main autopilot reference standard for each iteration of the Pixhawk FMU, as well as various other standards that apply to the general Pixhawk control ecosystem, such as a payload bus standard or a smart battery standard. Autopilot Reference Standard This is the main section of the Pixhawk open standards, containing all mechanical and electrical specification for each version of the flight management unit. Currently, versions 1, 2, 3, 4, 4X, 5, 5X, 6X, 6U, and 6C autopilots have been released. The mechanical design standard includes dimensional drawings of the FMU's PCB, the selected sensor types and their locations, and areas that need additional heat sinking. The electrical standard includes the pin-out of each pin in the main processing microcontroller, and which interface each pin is set to communicate with. Autopilot Bus Standard The autopilot bus standard is an extension of the autopilot reference standard specifically for providing more information about manufacturing the latest reference versions of Pixhawk FMU, such as the 5X and 6X. The main reason for this is that these are the first flight units featuring a system on module design, where the housing of the flight controller module takes the form of a compact prism with a set of extremely high-density, 100-pin connectors between the module and the baseboard (seen at the bottom of the image on the right). 
The baseboard allows users to plug the necessary peripheral devices (such as motors, servos, and radios) into the flight controller, while the system on module design results in an easily swappable flight computer. Additionally, this bus standard details PCB layout guidelines for the system on module along with a catalog of reference schematics for interfaces between the module and the baseboard. Connector Standard In the connector standard, the Pixhawk project specifies using the JST GH for the vast majority of all interfaces between the flight controller board and pluggable peripherals. Just as importantly, the standard defines a convention for user-facing pin-outs for telemetry, GPS, CAN bus, SPI, power, and debug ports. External pin-out information is critical for anyone developing a vehicle with an autopilot, as improperly plugging in peripherals results in a non-functional system at best, and a dangerous environment with broken hardware at worst. Although there is a great deal of variation within the Pixhawk family in terms of available ports and port types, the standardization of pin-outs for the most popular interfaces is immensely helpful to any user working with multiple generations of Pixhawk flight controllers. Other standards Payload Bus Standard Although this section serves as an accessory to the main Autopilot Reference Standard, it concisely details how the Pixhawk standards suggest making additional vehicle payloads that are compatible with a Pixhawk autopilot. Although it is not strictly enforced across all vehicle payload manufacturers, this facilitates the possibility for users to implement payloads and flight controllers from different manufacturers. Smart Battery Standard The smart battery standard has not been published yet, but it is set to define the interface between a smart battery and a Pixhawk FMU. Such a standard would define the communication protocols, connectors, and capabilities of a battery management system that would be used in a Pixhawk-operated vehicle. Radio Interface Standard Although there are a variety of radio solutions that can be interfaced with a Pixhawk flight controller, the project does have a short mechanical, electrical, and software definition for a Pixhawk-specific radio communication system. The standard anticipates connections between ground stations and radio modules to be over USB or Ethernet, while connections between local and remote radios could go over traditional radio-frequency links, or LTE. History In 2008, Lorenz Meier, a master's student at ETH Zurich, wanted to make an indoor drone that could use computer vision to autonomously traverse a space and avoid collisions with obstacles. However, such technology did not exist, let alone in a way that was accessible to a university student. Motivated by participating in the indoor autonomy category of a European Micro Air Vehicle competition, Lorenz leveraged the help of professor Marc Pollefeys and assembled a group of 14 teammates to spend nine tireless months creating custom flight controller hardware, firmware, and high-level software. The team, named "Pixhawk," won first place in their category in 2009, being the first competitors to successfully implement computer vision for obstacle avoidance. Revisiting the project in subsequent years, Lorenz realized that there were not a lot of existing industry tools that could be used to accomplish what he and his team did. As a result, the Pixhawk team made the entire project open source. 
The ground control software that allowed the team to interface with the drone while it was in flight, the MAVLink communication protocol that was custom developed for streaming telemetry back to the ground station, the PX4 autopilot software that was responsible for controlling the drone, and the Pixhawk flight controller hardware that the autopilot ran on were all released to the public for further development. Over time, the released project began to grow. MAVLink was picked up by the open-source ArduPilot autopilot software development project, and the ground control software QGroundControl was subsequently used to interface with MAVLink systems. After a couple codebase rewrites and hardware development cycles, Lorenz and a worldwide team of open-source maintainers were able to support a manufacturer that would build a flight controller to their standards. In 2013, 3D Robotics became the first manufacturer of commercial Pixhawk flight controllers, officially lowering the barrier to entry to autonomous flight for enthusiasts and corporations worldwide. Now, anyone could purchase an extremely capable autonomous flight control, flash it with free, open-source PX4 or ArduPilot firmware, and have a university research-level drone platform. Lorenz heavily credits the open-source community with the extensive success of the Pixhawk platform, as the combined development power seemed to be greater than that of a well-resourced company. In order to help standardize various developments across the project and ensure that it remained accessible and open-source, the Dronecode organization was founded in 2014. Dronecode is currently a non-profit organization under the Linux Foundation, and it has been responsible for facilitating conversations that define the Pixhawk standards. References External links Official repository on GitHub PX4 autopilot software home page ArduPilot autopilot software home page Avionics computers Embedded systems Flight control systems Open-source hardware Unmanned aerial vehicles Unmanned surface vehicles Unmanned underwater vehicles Robotics engineering
Pixhawk
Technology,Engineering
1,820
54,265,281
https://en.wikipedia.org/wiki/International%20Journal%20on%20Semantic%20Web%20and%20Information%20Systems
The International Journal on Semantic Web and Information Systems (IJSWIS) is a quarterly peer-reviewed academic journal covering the semantic web and information systems. It was established in 2005 and is published by IGI Global. The editor-in-chief is Brij B. Gupta, Who is a professor at Asia University, Taiwan, and Director of the International Center for AI and Cyber Security Research and Innovations. Brij B. Gupta is also serving as Member-in-Large, Board of Governors, IEEE Consumer Technology Society (2022–2024) and also included in the list of 2022 Highly Cited Researchers in Computer Science by Clarivate. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.1 References External links Academic journals established in 2005 English-language journals Quarterly journals Semantic Web and Information Systems, International Journal on Computer science journals Semantic Web Information systems journals
International Journal on Semantic Web and Information Systems
Technology
198
3,145,356
https://en.wikipedia.org/wiki/Pose%20%28computer%20vision%29
In the fields of computing and computer vision, pose (or spatial pose) represents the position and the orientation of an object, each usually in three dimensions. Poses are often stored internally as transformation matrices. The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale, whereas pose does not. In computer vision, the pose of an object is often estimated from camera input by the process of pose estimation. This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object based on its perceived position and orientation in the environment. Other applications include skeletal action recognition. Pose estimation The specific task of determining the pose of an object in an image (or stereo images, image sequence) is referred to as pose estimation. Pose estimation problems can be solved in different ways depending on the image sensor configuration, and choice of methodology. Three classes of methodologies can be distinguished: Analytic or geometric methods: Given that the image sensor (camera) is calibrated and the mapping from 3D points in the scene and 2D points in the image is known. If also the geometry of the object is known, it means that the projected image of the object on the camera image is a well-known function of the object's pose. Once a set of control points on the object, typically corners or other feature points, has been identified, it is then possible to solve the pose transformation from a set of equations which relate the 3D coordinates of the points with their 2D image coordinates. Algorithms that determine the pose of a point cloud with respect to another point cloud are known as point set registration algorithms, if the correspondences between points are not already known. Genetic algorithm methods: If the pose of an object does not have to be computed in real-time a genetic algorithm may be used. This approach is robust especially when the images are not perfectly calibrated. In this particular case, the pose represent the genetic representation and the error between the projection of the object control points with the image is the fitness function. Learning-based methods: These methods use artificial learning-based system which learn the mapping from 2D image features to pose transformation. In short, this means that a sufficiently large set of images of the object, in different poses, must be presented to the system during a learning phase. Once the learning phase is completed, the system should be able to present an estimate of the object's pose given an image of the object. Camera pose See also Gesture recognition Homography (computer vision) Camera calibration Structure from motion Essential matrix and Trifocal tensor (relative pose) References Computer vision Geometry in computer vision Robot control
Pose (computer vision)
Mathematics,Engineering
545
32,720,954
https://en.wikipedia.org/wiki/Osmunda%20wehrii
Osmunda wehrii is an extinct species of fern in the modern genus Osmunda of the family Osmundaceae. Osmunda wehrii is known from Langhian age Miocene fossils found in Central Washington. History and classification The species was described from specimens of silicified rhizomes and frond bases in blocks of chert. The cherts were recovered from sediments outcropping near the contact of the Roza Basalts and the overlying Priest Rapids Basalts, designated the type locality, near the town of Beverly, Washington by Fred Brinkman of Sunnyside, Washington. Further specimens of O. wehrii have been found at the "Ho ho" site, one of the "county line hole" fossil localities north of Interstate 82 in Yakima County, Washington. The "Ho ho" site works strata which is part of the Museum Flow Package within the interbeds of the Sentinel Bluffs Unit of the central Columbia Plateau N2 Grande Ronde Basalt, Columbia River Basalt Group. The Museum Flow Package interbeds are dated to the middle Miocene and are approximately 15.6 million years old. The holotype specimens, two pieces of the same chert specimen containing rhizomes and frond bases, are preserved in the Burke Museum of Natural History and Culture as specimen numbers "4772" and "4773". The specimens of chert were studied by paleobotanists Charles N. Miller jr of University of Montana. Miller published his 1982 type description for Osmunda wehrii in the American Journal of Botany volume 69 article "Osmunda wehrii, a New Species Based on Petrified Rhizomes from the Miocene of Washington". In his type description he noted the etymology for the specific epithet wehrii, in honor of Wesley C. Wehr who made the type specimens available to Miller for study. Description Wessiea possesses rhizomes which are approximately in diameter. The fossils have distinct stipular frond bases characteristic of the family Osmundaceae, while the interior of the fronds show distinct long fibers in the frond bases are both representative of the modern genus Osmunda. It is found in the chert blocks intertwined with the extinct genus Wessiea yakimaensis and anatomically preserved Woodwardia virginica, which still lives in the forests of eastern coastal North America. References Osmundales Prehistoric plants Plants described in 1982 Fossil taxa described in 1982 Miocene plants Extinct flora of North America †
Osmunda wehrii
Biology
515
15,062,976
https://en.wikipedia.org/wiki/OTX1
Homeobox protein OTX1 is a protein that in humans is encoded by the OTX1 gene. Function This gene encodes a member of the bicoid sub-family of homeodomain-containing transcription factors. The encoded protein acts as a transcription factor and may play a role in brain and sensory organ development. The Otx gene is active in the region of the first gill arch, which is related to the upper and lower jaw and two of the bones of the ear. A similar protein in mice is required for proper brain and sensory organ development and can cause epilepsy. References Further reading External links Transcription factors
OTX1
Chemistry,Biology
130
5,752,422
https://en.wikipedia.org/wiki/Helena%20Sheehan
Helena Sheehan is an academic philosopher, historian of science, philosophy, culture and politics. Sheehan is professor emeritus at Dublin City University, where she taught media studies and history of ideas in the School of Communications. She was a visiting professor at the University of Cape Town on several occasions. She has been active on the left since the 1960s. She has given many conference papers and public lectures in universities and other bodies in USA, USSR, GDR, Mexico, Canada, Ireland, UK, France, Germany, Czechoslovakia, Yugoslavia, Greece and South Africa. Biography Born in the United States in 1944, Sheehan describes her childhood as Catholic and conservative, She began her university studies and taught primary school as a nun. She left the convent in 1965 and became an agnostic and liberal, then an atheist and radical. Sheehan graduated with a BS in 1967 from St. Joseph's University in Philadelphia, followed by an MA in 1970 from Temple University in Philadelphia. She earned a PhD in 1980 from Trinity College (Dublin) in philosophy. She became active on the new left in the US in the 1960s. In Ireland, she joined Sinn Féin (Official) in 1972 and then the Communist Party of Ireland in 1975. She chaired the Trinity College Dublin Communist Society. She joined the Labour Party in 1981, where she was a founder of Labour Left. Since 2011, she has belonged to no party, but has remained active on the left. She organised Occupy University during Occupy Dame Street in 2011. She travelled often to Greece and became involved with Syriza, writing about this in The Syriza Wave published in 2017. An autobiographical work entitled Navigating the Zeitgeist was published in 2019 with a sequel Until We Fall published in 2023. As a philosopher and historian of science, Sheehan writes from a Marxist perspective. She argues that Marx and Engels shared fundamentally the same view on the philosophy of science and written critically of Lysenkoism and Stalin's impact on scientific development, while stressing the necessity of understanding such trends in full socio-historical context. She is a strong critic of both positivism and postmodernism. Sheehan has lectured at the Humanist Association of Ireland. In her personal life, Sheehan was the partner of the trade unionist Sam Nolan. Quotes On her life:"Sometimes I feel as if I have lived eons in a matter of decades. The wave of historical change, such as swept over centuries in the past, seem to have swept through my world several times over already. And who knows what I have yet to see? I am perhaps only halfway through the time I may expect my life to be." [1988] On Marxism:"Whatever Marxism is, it is systemic analysis and historical perspective. It is a totalising (not totalised) philosophy of history. It is the only mode of thought able to give a coherent, comprehensive, and credible account of the complexity of contemporary experience. It is the only coherent analysis of the capitalist mode of production and how it structurally generates, not only the maximum expropriation of surplus value, but maximum dissolution of social bonds, involving decreasing access to totality and increasing atomisation of thought processes. It is the only credible analysis of an alternative mode of production, proposing socialism, not only as a radical restructuring of the relations of production, but as a fundamental transformation of patterns of thought and forms of social organisation." 
On Marxism and science:"Marxism has made the strongest claims of any intellectual tradition before or since about the socio-historical character of science, yet always affirmed its cognitive achievements. Science was seen as inextricably enmeshed with economic systems, technological developments, political movements, philosophical theories, cultural trends, ethical norms, ideological positions, indeed all that was human. It was also a path of access to the natural world." On Lysenkoism:"What went wrong was that the proper procedures for coming to terms with such complex issues were short-circuited by grasping for easy slogans and simplistic solutions and imposing them by administrative fiat." On the fall of communism:"These are the days of our defeat, we ought not to pretend otherwise, but defeat is not death." On the death of communism:"The socialist experiment has been portrayed as having played itself out and finally thrown up leaders who have seen the superiority of the capitalist way and decided to go for it. The world is 'going our way', the leaders of 'the free world' have declared. The iron curtain has come tumbling down. The Kremlin has been conquered without a single marine opening fire, without a single ICBM being launched. It unravels before me like a nightmare. No more the red flags flying. No more the heads held high and the fists clenched and the voices raised to the strains of The International. No more the larger-than-life murals of workers and soldiers and peasants marching into the future shaping the world with the labour of their hands and hearts and minds. Now it is to be Mickey Mouse and Coca Cola and Michael Jackson and Sacchi & Sacchi." Works Books Books by Sheehan include: Until We Fall: Long Distance Life on the Left, Monthly Review Press, 2023 Navigating the Zeitgeist: A Story of the Cold War, the New Left, Irish Republicanism, and International Communism, Monthly Review Press, 2019 The Syriza Wave: Surging and Crashing with the Greek Left, Monthly Review Press, 2017 Marxism and the Philosophy of Science: A Critical History, Humanities Press, 1985, 1993, Verso Books, 2017 European Socialism: A Blind Alley or a Long and Winding Road?, MSF, 1992 Has the Red Flag Fallen?, Attic Press, 1989 Irish Television Drama: A Society and Its Stories, Radio Telefís Éireann, 1987, Four Courts Press, 2004 The Continuing Story of Irish Television Drama: Tracking the Tiger, Four Courts Press, 2004 Articles In academic journals (peer-reviewed): Is history a coherent story? Critical Legal Thinking February 2012 The Wire and the world: Narrative and metanarrative Jump Cut 51, 2009 Contradictory transformations: Observations on the intellectual dynamics of South African universities Journal of Critical Education Policy Studies 7, 1, 2009 Marxism and science studies: A sweep through the decades International Studies in the Philosophy of Science 21, 2, 2007 JD Bernal: Politics, philosophy and the science of science Journal of Physics 57, 2007 Fair City. Journal of Irish Studies January 2006 Grand narratives then and now: Can we still conceptualise history? Socialism and Democracy 12, 1998 On public service broadcasting: Against the tide Irish Communications Review 2, 1992 The parameters of the permissible: How Scrap Saturday got away with it Irish Communications Review 2, 1992 Writing and the zeitgeist. 
Irish University Review 21, 1991 In political journals: Totality and Decades of Debate and the Return of Nature Monthly Review 76, 4, 2023 The Disinformation Wars: An Epistemological, Political and Socio-Historical Interrogation Monthly Review 75, 4, 2023 Return of the Dialectics of Nature Debate Monthly Review 74, 4, 2022 Marxism, Science and Science Studies: From Marx and Engels to Covid-19 and Cop-26 Monthly Review 74, 1, 2022 When the old world unravelled Jacobin 29, 2018 As the world turned upside down Monthly Review 69, 3, 2017 'Centenary of Christopher Caudwell'. Communist Review 50 Spring 2008 "IRELAND: Don't forget Dublin!". Green Left Weekly, February 2003 Book reviews The Synthesising Impulse Monthly Review, October 2021 Between Science and Society Monthly Review, March 2018 Closed Rooms and Class War Jacobin, July 2017 South Africa Pushed to the Limit Monthly Review, November 2011 Religion and the Human Prospect Science & Society, October 2009 Popular Television Drama: Critical Perspectives European Journal of Communication, June 2006 'The Drama of the Science Wars: What is the Plot?' Public Understanding of Science, April 2001 'Ecological Roots: Which Go Deepest?' Monthly Review, October 2000 Questioning Ireland Irish Political Studies 2000 Ideological Analysis and the Alternatives Irish Communications Review, 5, 1995 Miscellaneous Introductions: Bukharin, Nikolai. Philosophical Arabesques. New York: Monthly Review Press, 2005 Pamphlets and articles: 'Communism and the Emancipation of Women'. (Communist Party of Ireland, 1976) 'The centenary of Christopher Caudwell and the philosophical landscape of the century' (2007) See also List of Dublin City University people References External links Sheehan's Home Page at the website of Dublin City University DORAS DCU open access repository (browse by author) Facebook profile Twitter feed "Lysenko and Lysenkoism" extract from Marxism and the Philosophy of Science: A Critical History at the Marxists Internet Archive Alumni of Trinity College Dublin Temple University alumni Academics of Dublin City University Historians of science American anti-capitalists American emigrants to Ireland Irish anti-capitalists Irish communists Irish feminists Labour Party (Ireland) politicians Irish political writers Women science writers Irish women writers Marxist humanists Marxist writers Communist women writers Former Roman Catholic religious sisters and nuns Former Roman Catholics Irish former Christians Irish atheists Year of birth missing (living people) Living people American Marxist historians Critics of postmodernism Irish Marxists Socialist feminists Marxist feminists Marxist theorists Media studies writers Philosophers of science Scholars of Marxism Irish philosophers
Helena Sheehan
Technology
1,896
49,139
https://en.wikipedia.org/wiki/Decentralization
Decentralization or decentralisation is the process by which the activities of an organization, particularly those related to planning and decision-making, are distributed or delegated away from a central, authoritative location or group and given to smaller factions within it. Concepts of decentralization have been applied to group dynamics and management science in private businesses and organizations, political science, law and public administration, technology, economics and money. History The word "centralisation" came into use in France in 1794 as the post-Revolution French Directory leadership created a new government structure. The word "décentralisation" came into usage in the 1820s. "Centralization" entered written English in the first third of the 1800s; mentions of decentralization also first appear during those years. In the mid-1800s Tocqueville would write that the French Revolution began with "a push towards decentralization" but became, "in the end, an extension of centralization." In 1863, retired French bureaucrat Maurice Block wrote an article called "Decentralization" for a French journal that reviewed the dynamics of government and bureaucratic centralization and recent French efforts at decentralization of government functions. Ideas of liberty and decentralization were carried to their logical conclusions during the 19th and 20th centuries by anti-state political activists calling themselves "anarchists", "libertarians", and even decentralists. Tocqueville was an advocate, writing: "Decentralization has, not only an administrative value but also a civic dimension since it increases the opportunities for citizens to take interest in public affairs; it makes them get accustomed to using freedom. And from the accumulation of these local, active, persnickety freedoms, is born the most efficient counterweight against the claims of the central government, even if it were supported by an impersonal, collective will." Pierre-Joseph Proudhon (1809–1865), the influential anarchist theorist, wrote: "All my economic ideas as developed over twenty-five years can be summed up in the words: agricultural-industrial federation. All my political ideas boil down to a similar formula: political federation or decentralization." In the early 20th century, America's response to the centralization of economic wealth and political power was a decentralist movement. It blamed large-scale industrial production for destroying middle-class shopkeepers and small manufacturers and promoted increased property ownership and a return to small-scale living. The decentralist movement attracted Southern Agrarians like Robert Penn Warren, as well as journalist Herbert Agar. New Left and libertarian individuals who identified with social, economic, and often political decentralism through the ensuing years included Ralph Borsodi, Wendell Berry, Paul Goodman, Carl Oglesby, Karl Hess, Donald Livingston, Kirkpatrick Sale (author of Human Scale), Murray Bookchin, Dorothy Day, Senator Mark O. Hatfield, Mildred J. Loomis and Bill Kauffman. Leopold Kohr, author of the 1957 book The Breakdown of Nations – known for its statement "Whenever something is wrong, something is too big" – was a major influence on E. F. Schumacher, author of the 1973 bestseller Small Is Beautiful: A Study of Economics As If People Mattered. In the next few years a number of best-selling books promoted decentralization. 
Daniel Bell's The Coming of Post-Industrial Society discussed the need for decentralization and a "comprehensive overhaul of government structure to find the appropriate size and scope of units", as well as the need to detach functions from current state boundaries, creating regions based on functions like water, transport, education and economics which might have "different 'overlays' on the map." Alvin Toffler published Future Shock (1970) and The Third Wave (1980). Discussing the books in a later interview, Toffler said that industrial-style, centralized, top-down bureaucratic planning would be replaced by a more open, democratic, decentralized style which he called "anticipatory democracy". Futurist John Naisbitt's 1982 book "Megatrends" was on The New York Times Best Seller list for more than two years and sold 14 million copies. Naisbitt's book outlines 10 "megatrends", the fifth of which is from centralization to decentralization. In 1992 David Osborne and Ted Gaebler published the best-selling book Reinventing Government, proposing decentralist public administration theories which became labeled the "New Public Management". Stephen Cummings wrote that decentralization became a "revolutionary megatrend" in the 1980s. In 1983 Diana Conyers asked if decentralization was the "latest fashion" in development administration. Cornell University's project on Restructuring Local Government states that decentralization refers to the "global trend" of devolving responsibilities to regional or local governments. Robert J. Bennett's Decentralization, Intergovernmental Relations and Markets: Towards a Post-Welfare Agenda describes how after World War II governments pursued a centralized "welfarist" policy of entitlements which now has become a "post-welfare" policy of intergovernmental and market-based decentralization. In 1983, "Decentralization" was identified as one of the "Ten Key Values" of the Green Movement in the United States. A 1999 United Nations Development Programme report stated: Overview Systems approach Those studying the goals and processes of implementing decentralization often use a systems theory approach, which according to the United Nations Development Programme report applies to the topic of decentralization "a whole systems perspective, including levels, spheres, sectors and functions and seeing the community level as the entry point at which holistic definitions of development goals are from the people themselves and where it is most practical to support them. It involves seeing multi-level frameworks and continuous, synergistic processes of interaction and iteration of cycles as critical for achieving wholeness in a decentralized system and for sustaining its development." Decentralization itself has, however, also been seen as part of a broader systems approach. Norman Johnson of Los Alamos National Laboratory wrote in a 1999 paper: "A decentralized system is where some decisions by the agents are made without centralized control or processing. An important property of agent systems is the degree of connectivity or connectedness between the agents, a measure of the global flow of information or influence. If each agent is connected (exchanging states or influence) to all other agents, then the system is highly connected." (A toy computation of this connectivity measure is sketched below.) University of California, Irvine's Institute for Software Research's "PACE" project is creating an "architectural style for trust management in decentralized applications." 
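Johnson's "degree of connectivity" can be read as simple directed-graph density. The following minimal Python sketch is purely illustrative (the agent graph is invented for the example and is not drawn from Johnson's paper or from the PACE project):

```python
# Degree of connectivity of an agent system, read as directed-graph density:
# the fraction of possible agent-to-agent links that actually carry state or
# influence. Density 1.0 is Johnson's "highly connected" case.

def connectivity(agents: set, links: set) -> float:
    """Realized links divided by all possible ordered pairs of distinct agents."""
    n = len(agents)
    possible = n * (n - 1)
    return len(links) / possible if possible else 0.0

agents = {"a", "b", "c", "d"}
links = {("a", "b"), ("b", "c"), ("c", "a"), ("d", "a")}
print(connectivity(agents, links))  # 4 of 12 possible links -> 0.333...

fully_connected = {(i, j) for i in agents for j in agents if i != j}
print(connectivity(agents, fully_connected))  # 1.0
```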
The PACE project adopted Rohit Khare's definition of decentralization: "A decentralized system is one which requires multiple parties to make their own independent decisions" and applies it to peer-to-peer software creation, writing: Goals Decentralization in any area is a response to the problems of centralized systems. Decentralization in government, the topic most studied, has been seen as a solution to problems like economic decline, government inability to fund services and their general decline in performance of overloaded services, the demands of minorities for a greater say in local governance, the general weakening legitimacy of the public sector and global and international pressure on countries with inefficient, undemocratic, overly centralized systems. The following four goals or objectives are frequently stated in various analyses of decentralization. Participation In decentralization, the principle of subsidiarity is often invoked. It holds that the lowest or least centralized authority that is capable of addressing an issue effectively should do so. According to one definition: "Decentralization, or decentralizing governance, refers to the restructuring or reorganization of authority so that there is a system of co-responsibility between institutions of governance at the central, regional and local levels according to the principle of subsidiarity, thus increasing the overall quality and effectiveness of the system of governance while increasing the authority and capacities of sub-national levels." Decentralization is often linked to concepts of participation in decision-making, democracy, equality and liberty from a higher authority. Decentralization enhances the democratic voice. Theorists believe that local representative authorities with actual discretionary powers are the basis of decentralization that can lead to local efficiency, equity and development. Columbia University's Earth Institute identified one of three major trends relating to decentralization: "increased involvement of local jurisdictions and civil society in the management of their affairs, with new forms of participation, consultation, and partnerships." Decentralization has been described as a "counterpoint to globalization [which] removes decisions from the local and national stage to the global sphere of multi-national or non-national interests. Decentralization brings decision-making back to the sub-national levels". Decentralization strategies must account for the interrelations of global, regional, national, sub-national, and local levels. Diversity Norman L. Johnson writes that diversity plays an important role in decentralized systems like ecosystems, social groups, large organizations, and political systems. "Diversity is defined to be unique properties of entities, agents, or individuals that are not shared by the larger group, population, structure. Decentralized is defined as a property of a system where the agents have some ability to operate "locally." Both decentralization and diversity are necessary attributes to achieve the self-organizing properties of interest." Advocates of political decentralization hold that greater participation by better-informed diverse interests in society will lead to more relevant decisions than those made only by authorities on the national level. Decentralization has been described as a response to demands for diversity. Efficiency In business, decentralization leads to a management-by-results philosophy which focuses on definite objectives to be achieved by unit results. 
Decentralization of government programs is said to increase efficiency – and effectiveness – due to reduction of congestion in communications, quicker reaction to unanticipated problems, improved ability to deliver services, improved information about local conditions, and more support from beneficiaries of programs. Firms may prefer decentralization because it ensures efficiency by making sure that managers closest to the local information make decisions, and do so in a more timely fashion; that their taking responsibility frees upper management for long-term strategy rather than day-to-day decision-making; that managers get hands-on training to prepare them to move up the management hierarchy; that managers are motivated by having the freedom to exercise their own initiative and creativity; and that managers and divisions are encouraged to prove that they are profitable, instead of allowing their failures to be masked by the overall profitability of the company. The same principles can be applied to the government. Decentralization promises to enhance efficiency through both inter-governmental competition with market features and fiscal discipline which assigns tax and expenditure authority to the lowest level of government possible. It works best where members of the subnational government have strong traditions of democracy, accountability, and professionalism. Conflict resolution Economic and/or political decentralization can help prevent or reduce conflict because it reduces actual or perceived inequities between various regions or between a region and the central government. Dawn Brancati finds that political decentralization reduces intrastate conflict unless politicians create political parties that mobilize minority and even extremist groups to demand more resources and power within national governments. However, the likelihood this will be done depends on factors like how democratic transitions happen and features like a regional party's proportion of legislative seats, a country's number of regional legislatures, electoral procedures, and the order in which national and regional elections occur. Brancati holds that decentralization can promote peace if it encourages statewide parties to incorporate regional demands and limit the power of regional parties. Processes Initiation The processes by which entities move from a more to a less centralized state vary. They can be initiated from the centers of authority ("top-down") or from individuals, localities or regions ("bottom-up"), or from a "mutually desired" combination of authorities and localities working together. Bottom-up decentralization usually stresses political values like local responsiveness and increased participation and tends to increase political stability. Top-down decentralization may be motivated by the desire to "shift deficits downwards" and find more resources to pay for services or pay off government debt. Some hold that decentralization should not be imposed, but done in a respectful manner. Appropriate size Gauging the appropriate size or scale of decentralized units has been studied in relation to the size of sub-units of hospitals and schools, road networks, administrative units in business and public administration, and especially town and city governmental areas and decision-making bodies. In creating planned communities ("new towns"), it is important to determine the appropriate population and geographical size. 
While in earlier years small towns were considered appropriate, by the 1960s, 60,000 inhabitants was considered the size necessary to support a diversified job market and an adequate shopping center and array of services and entertainment. The appropriate size of governmental units for revenue raising is also a consideration. Even in bioregionalism, which seeks to reorder many functions and even the boundaries of governments according to physical and environmental features, including watershed boundaries and soil and terrain characteristics, appropriate size must be considered. The unit may be larger than many decentralist-bioregionalists prefer. Inadvertent or silent Decentralization ideally happens as a careful, rational, and orderly process, but it often takes place during times of economic and political crisis, the fall of a regime and the resultant power struggles. Even when it happens slowly, there is a need for experimentation, testing, adjusting, and replicating successful experiments in other contexts. There is no one blueprint for decentralization since it depends on the initial state of a country and the power and views of political interests and whether they support or oppose decentralization. Decentralization usually is a conscious process based on explicit policies. However, it may occur as "silent decentralization" in the absence of reforms, as changes in networks, policy emphases and resource availability lead inevitably to a more decentralized system. Asymmetry Decentralization may be uneven and "asymmetric" given any one country's population, political, ethnic and other forms of diversity. In many countries, political, economic and administrative responsibilities may be decentralized to the larger urban areas, while rural areas are administered by the central government. Decentralization of responsibilities to provinces may be limited only to those provinces or states which want or are capable of handling responsibility. Some privatization may be more appropriate to an urban than a rural area; some types of privatization may be more appropriate for some states and provinces but not others. Determinants The academic literature frequently mentions the following factors as determinants of decentralization: "The number of major ethnic groups" "The degree of territorial concentration of those groups" "The existence of ethnic networks and communities across the border of the state" "The country's dependence on natural resources and the degree to which those resources are concentrated in the region's territory" "The country's per capita income relative to that in other regions" The presence of self-determination movements In government policy Historians have described the history of governments and empires in terms of centralization and decentralization. In his 1910 The History of Nations, Henry Cabot Lodge wrote that Persian king Darius I (550–486 BC) was a master of organization and "for the first time in history centralization becomes a political fact." He also noted that this contrasted with the decentralization of Ancient Greece. Since the 1980s a number of scholars have written about cycles of centralization and decentralization. Stephen K. Sanderson wrote that over the last 4000 years chiefdoms and actual states have gone through sequences of centralization and decentralization of economic, political and social power. Yildiz Atasoy writes that this process has been going on "since the Stone Age" through not just chiefdoms and states, but empires and today's "hegemonic core states". 
Christopher K. Chase-Dunn and Thomas D. Hall review other works that detail these cycles, including works which analyze the concept of core elites which compete with state accumulation of wealth and how their "intra-ruling-class competition accounts for the rise and fall of states" and their phases of centralization and decentralization. Rising government expenditures, poor economic performance and the rise of free market-influenced ideas have convinced governments to decentralize their operations, to induce competition within their services, to contract out to private firms operating in the market, and to privatize some functions and services entirely. Government decentralization has both political and administrative aspects. Its decentralization may be territorial, moving power from a central city to other localities, and it may be functional, moving decision-making from the top administrator of any branch of government to lower level officials, or divesting of the function entirely through privatization. It has been called the "new public management" which has been described as decentralization, management by objectives, contracting out, competition within government and consumer orientation. Political Political decentralization signifies a reduction in the authority of national governments over policy-making. This process is accomplished by the institution of reforms that either delegate a certain degree of meaningful decision-making autonomy to sub-national tiers of government, or grant citizens the right to elect lower-level officials, like local or regional representatives. Depending on the country, this may require constitutional or statutory reforms, the development of new political parties, increased power for legislatures, the creation of local political units, and encouragement of advocacy groups. A national government may decide to decentralize its authority and responsibilities for a variety of reasons. Decentralization reforms may occur for administrative reasons, when government officials decide that certain responsibilities and decisions would be handled best at the regional or local level. In democracies, traditionally conservative parties include political decentralization as a directive in their platforms because rightist parties tend to advocate for a decrease in the role of central government. There is also strong evidence to support the idea that government stability increases the probability of political decentralization, since instability brought on by gridlock between opposing parties in legislatures often impedes a government's overall ability to enact sweeping reforms. The rise of regional ethnic parties in the national politics of parliamentary democracies is also heavily associated with the implementation of decentralization reforms. Ethnic parties may endeavor to transfer more autonomy to their respective regions, and as a partisan strategy, ruling parties within the central government may cooperate by establishing regional assemblies in order to curb the rise of ethnic parties in national elections. This phenomenon famously occurred in 1999, when the United Kingdom's Labour Party appealed to Scottish constituents by creating a semi-autonomous Scottish Parliament in order to neutralize the threat from the increasingly popular Scottish National Party at the national level. In addition to increasing the administrative efficacy of government and endowing citizens with more power, there are many projected advantages to political decentralization. 
Individuals who take advantage of their right to elect local and regional authorities have been shown to have more positive attitudes toward politics, and increased opportunities for civic decision-making through participatory democracy mechanisms like public consultations and participatory budgeting are believed to help legitimize government institutions in the eyes of marginalized groups. Moreover, political decentralization is perceived as a valid means of protecting marginalized communities at a local level from the detrimental aspects of development and globalization driven by the state, like the degradation of local customs, codes, and beliefs. In his 2013 book, Democracy and Political Ignorance, George Mason University law professor Ilya Somin argued that political decentralization in a federal democracy confronts the widespread issue of political ignorance by allowing citizens to engage in foot voting, or moving to other jurisdictions with more favorable laws. He cites the mass migration of over one million southern-born African Americans to the North or the West to evade discriminatory Jim Crow laws in the late 19th century and early 20th century. The European Union follows the principle of subsidiarity, which holds that decision-making should be made by the most local competent authority. The EU should decide only on enumerated issues that a local or member state authority cannot address themselves. Furthermore, enforcement is exclusively the domain of member states. In Finland, the Centre Party explicitly supports decentralization. For example, government departments have been moved from the capital Helsinki to the provinces. The centre supports substantial subsidies that limit potential economic and political centralization to Helsinki. Political decentralization does not come without its drawbacks. A study by Fan concludes that there is an increase in corruption and rent-seeking when there are more vertical tiers in the government, as well as when there are higher levels of subnational government employment. Other studies warn of high-level politicians that may intentionally deprive regional and local authorities of power and resources when conflicts arise. In order to combat these negative forces, experts believe that political decentralization should be supplemented with other conflict management mechanisms like power-sharing, particularly in regions with ethnic tensions. Administrative Four major forms of administrative decentralization have been described. Deconcentration, the weakest form of decentralization, shifts responsibility for decision-making, finance and implementation of certain public functions from officials of central governments to those in existing districts or, if necessary, new ones under direct control of the central government. Delegation passes down responsibility for decision-making, finance and implementation. It involves the creation of public-private enterprises or corporations, or of "authorities", special projects or service districts. All of them will have a great deal of decision-making discretion and they may be exempt from civil service requirements and may be permitted to charge users for services. Devolution transfers responsibility for decision-making, finance and implementation of certain public functions to the sub-national level, such as a regional, local, or state government. Divestment, also called privatization, may mean merely contracting out services to private companies. 
Or it may mean relinquishing entirely all responsibility for decision-making, finance and implementation of certain public functions. Facilities will be sold off, workers transferred or fired, and private companies or not-for-profit organizations allowed to provide the services. Many of these functions originally were done by private individuals, companies, or associations and later taken over by the government, either directly, or by regulating out of business the entities which competed with newly created government programs. Fiscal Fiscal decentralization means decentralizing revenue raising and/or expenditure of moneys to a lower level of government while maintaining financial responsibility. While this process usually is called fiscal federalism, it may be relevant to unitary, federal, or confederal governments. Fiscal federalism also concerns the "vertical imbalances" where the central government gives too much or too little money to the lower levels. It actually can be a way of increasing central government control of lower levels of government, if it is not linked to other kinds of responsibilities and authority. Fiscal decentralization can be achieved through user fees, user participation through monetary or labor contributions, expansion of local property or sales taxes, intergovernmental transfers of central government tax monies to local governments through transfer payments or grants, and authorization of municipal borrowing with national government loan guarantees. Transfers of money may be given conditionally with instructions or unconditionally without them. Market Market decentralization can be done through privatization of publicly owned functions and businesses, as described briefly above. But it also is done through deregulation, the abolition of restrictions on businesses competing with government services, for example, postal services, schools, garbage collection. Even as private companies and corporations have worked to have such services contracted out to or privatized by them, others have worked to have these turned over to non-profit organizations or associations. From the 1970s to the 1990s, there was deregulation of some industries, like banking, trucking, airlines and telecommunications, which resulted generally in more competition and lower prices. According to the Cato Institute, an American libertarian think-tank, in some cases deregulation in some aspects of an industry was offset by increased regulation in other aspects, the electricity industry being a prime example. For example, in banking, the Cato Institute believes some deregulation allowed banks to compete across state lines, increasing consumer choice, while an actual increase in regulators and regulations forced banks to make loans to individuals incapable of repaying them, leading eventually to the financial crisis of 2007–2008. One example of economic decentralization, which is based on a libertarian socialist model, is decentralized economic planning. Decentralized planning is a type of economic system in which decision-making is distributed amongst various economic agents or localized within production agents. An example of this method in practice is in Kerala, India, which experimented in 1996 with the People's Plan campaign. Emmanuelle Auriol and Michel Benaim write about the "comparative benefits" of decentralization versus government regulation in the setting of standards. 
They find that while there may be a need for public regulation if public safety is at stake, private creation of standards usually is better because "regulators or 'experts' might misrepresent consumers' tastes and needs." As long as companies are averse to incompatible standards, standards will be created that satisfy the needs of a modern economy. Environmental Central governments themselves may own large tracts of land and control the forest, water, mineral, wildlife and other resources they contain. They may manage them through government operations or by leasing them to private businesses; or they may neglect them to be exploited by individuals or groups who defy non-enforced laws against exploitation. A central government may also control most private land through land-use, zoning, environmental and other regulations. Selling off or leasing lands can be profitable for governments willing to relinquish control, but such programs can face public scrutiny because of fear of a loss of heritage or of environmental damage. Devolution of control to regional or local governments has been found to be an effective way of dealing with these concerns. Such decentralization has happened in India and other developing nations. In economic ideology Libertarian socialism Libertarian socialism is a political philosophy that promotes a non-hierarchical, non-bureaucratic society without private ownership in the means of production. Libertarian socialists believe in converting present-day private productive property into common or public goods. It promotes free association and non-coercive forms of social organization in place of government, and opposes the various social relations of capitalism, such as wage slavery. The term libertarian socialism is used by some socialists to differentiate their philosophy from state socialism, and by some as a synonym for left anarchism. Accordingly, libertarian socialists believe that "the exercise of power in any institutionalized form – whether economic, political, religious, or sexual – brutalizes both the wielder of power and the one over whom it is exercised". Libertarian socialists generally place their hopes in decentralized means of direct democracy such as libertarian municipalism, citizens' assemblies, or workers' councils. Libertarian socialists are strongly critical of coercive institutions, which often leads them to reject the legitimacy of the state in favor of anarchism. Adherents propose achieving this through decentralization of political and economic power, usually involving the socialization of most large-scale private property and enterprise (while retaining respect for personal property). Libertarian socialism tends to deny the legitimacy of most forms of economically significant private property, viewing capitalist property relations as forms of domination that are antagonistic to individual freedom. Free market Free market ideas popular in the 19th century, such as those of Adam Smith, returned to prominence in the 1970s and 1980s. Austrian School economist Friedrich von Hayek argued that free markets themselves are decentralized systems where outcomes are produced without explicit agreement or coordination, by individuals who use prices as their guide. Eleanor Doyle writes that "[e]conomic decision-making in free markets is decentralized across all the individuals dispersed in each market and is synchronized or coordinated by the price system," and holds that an individual right to property is part of this decentralized system. 
Criticizing central government control, Hayek wrote in The Road to Serfdom: According to Bruce M. Owen, this does not mean that all firms themselves have to be equally decentralized. He writes: "markets allocate resources through arms-length transactions among decentralized actors. Much of the time, markets work very efficiently, but there is a variety of conditions under which firms do better. Hence, goods and services are produced and sold by firms with various degrees of horizontal and vertical integration." Additionally, he writes that the "economic incentive to expand horizontally or vertically is usually, but not always, compatible with the social interest in maximizing long-run consumer welfare." It is often claimed that free markets and private property generate centralized monopolies and other ills; free market advocates counter with the argument that government is the source of monopoly. Historian Gabriel Kolko in his book The Triumph of Conservatism argued that in the first decade of the 20th century businesses were highly decentralized and competitive, with new businesses constantly entering existing industries. In his view, there was no trend towards concentration and monopolization. While there was a wave of mergers by companies trying to corner markets, they found there was too much competition to do so. According to Kolko, this was also true in banking and finance, which saw decentralization as leading to instability as state and local banks competed with the big New York City firms. He argues that, as a result, the largest firms turned to the power of the state and worked with leaders like United States Presidents Theodore Roosevelt, William H. Taft and Woodrow Wilson to pass centralizing laws presented as "progressive reforms": the Federal Reserve Act of 1913, which gave control of the monetary system to the wealthiest bankers; the formation of monopoly "public utilities" that made competition with those monopolies illegal; federal inspection of meat packers biased against small companies; the extension of the Interstate Commerce Commission to the regulation of telephone companies, keeping rates high to benefit AT&T; and the use of the Sherman Antitrust Act against companies which might combine to threaten larger or monopoly companies. Author and activist Jane Jacobs's influential 1961 book The Death and Life of Great American Cities criticized large-scale redevelopment projects which were part of government-planned decentralization of population and businesses to suburbs. She believed it destroyed cities' economies and impoverished remaining residents. Her 1980 book The Question of Separatism: Quebec and the Struggle over Sovereignty supported secession of Quebec from Canada. Her 1984 book Cities and the Wealth of Nations proposed a solution to the problems faced by cities whose economies were being ruined by centralized national governments: decentralization through the "multiplication of sovereignties", meaning an acceptance of the right of cities to secede from the larger nation states that were greatly limiting their ability to produce wealth. In the organizational structure of a firm In response to incentive and information conflicts, a firm can either centralize its organizational structure by concentrating decision-making in upper management, or decentralize its organizational structure by delegating authority throughout the organization. 
The delegation of authority comes with a basic trade-off: while it can increase efficiency and information flow, the central authority consequently suffers a loss of control. However, by creating an environment of trust and allocating authority formally in the firm, coupled with a stronger rule of law in the geographical location of the firm, the negative consequences of the trade-off can be minimized. In having a decentralized organizational structure, a firm can remain agile in the face of external shocks and competing trends. Decision-making in a centralized organization can face information-flow inefficiencies and barriers to effective communication, which decrease the speed and accuracy with which decisions are made. A decentralized firm is said to hold greater flexibility given the efficiency with which it can analyze information and implement relevant outcomes. Additionally, having decision-making power spread across different areas allows for local knowledge to inform decisions, increasing their relevancy and implementational effectiveness. In the process of developing new products or services, decentralization enables the firm to meet particular divisions' needs more closely. Decentralization also impacts human resource management. The high level of individual agency that workers experience within a decentralized firm can create job enrichment. Studies have shown this enhances the development of new ideas and innovations, given the sense of involvement that comes from responsibility. The impacts of decentralization on innovation are furthered by the ease of information flow that comes from this organizational structure. With increased knowledge sharing, workers are more able to use relevant information to inform decision-making. These benefits are enhanced in firms with skill-intensive environments: skilled workers are more able to analyze information, they pose less risk of information duplication given increased communication abilities, and the productivity cost of multi-tasking is lower. These outcomes of decentralization make it a particularly effective organizational structure for entrepreneurial and competitive firm environments, such as start-up companies. The flexibility, efficiency of information flow and higher worker autonomy complement the rapid growth and innovation seen in successful start-up companies. In technology and the Internet Technological decentralization can be defined as a shift from concentrated to distributed modes of production and consumption of goods and services. Generally, such shifts are accompanied by transformations in technology, and different technologies are applied for either system. Technology includes tools, materials, skills, techniques and processes by which goals are accomplished in the public and private spheres. Concepts of decentralization of technology are used throughout all types of technology, including especially information technology and appropriate technology. Technologies often mentioned as best implemented in a decentralized manner include water purification, delivery and waste water disposal, agricultural technology and energy technology. Advances in technology may create opportunities for decentralized and privatized replacements for what had traditionally been public services or utilities, such as power, water, mail, telecommunications, consumer product safety, banking, medical licensure, parking meters, and auto emissions. 
However, in terms of technology, a clear distinction between fully centralized and fully decentralized technical solutions is often not possible, and therefore finding an optimal degree of centralization is difficult from an infrastructure-planning perspective. Information technology Information technology encompasses computers and computer networks, as well as information distribution technologies such as television and telephones. The whole computer industry (computer hardware, software, electronics, Internet, telecommunications equipment, e-commerce and computer services) is included. Executives and managers face a constant tension between centralizing and decentralizing information technology for their organizations. They must find the right balance between centralizing, which lowers costs and allows more control by upper management, and decentralizing, which allows sub-units and users more control. This will depend on analysis of the specific situation. Decentralization is particularly applicable to business or management units which have a high level of independence, complicated products and customers, and technology less relevant to other units. Information technology applied to government communications with citizens, often called e-Government, is supposed to support decentralization and democratization. Various forms have been instituted in most nations worldwide. The Internet is an example of an extremely decentralized network, having no owners at all (although some have argued that this is less the case in recent years). "No one is in charge of internet, and everyone is." As long as they follow a certain minimal number of rules, anyone can be a service provider or a user. Voluntary boards establish protocols, but cannot stop anyone from developing new ones. Other examples of open-source or decentralized movements are wikis, which allow users to add, modify, or delete content via the internet. Wikipedia has been described as decentralized (although it is a centralized web site, with a single entity operating the servers). Smartphones have been described as being an important part of the decentralizing effects of smaller and cheaper computers worldwide. Decentralization continues throughout the industry, for example as the decentralized architecture of wireless routers installed in homes and offices supplements and even replaces phone companies' relatively centralized long-range cell towers. Inspired by system and cybernetics theorists like Norbert Wiener, Marshall McLuhan and Buckminster Fuller, in the 1960s Stewart Brand started the Whole Earth Catalog and later computer networking efforts to bring Silicon Valley computer technologists and entrepreneurs together with countercultural ideas. This resulted in ideas like personal computing, virtual communities and the vision of an "electronic frontier" which would be a more decentralized, egalitarian and free-market libertarian society. Related ideas coming out of Silicon Valley included the free software and creative commons movements, which produced visions of a "networked information economy". Because human interactions in cyberspace transcend physical geography, there is a necessity for new theories in legal and other rule-making systems to deal with decentralized decision-making processes in such systems. For example, what rules should apply to conduct on the global digital network, and who should set them? 
The laws of which nations govern issues of Internet transactions (like seller disclosure requirements or definitions of "fraud"), copyright and trademark? Decentralized computing Centralization and re-decentralization of the Internet The New Yorker reports that although the Internet was originally decentralized, by 2013 it had become less so: "a staggering percentage of communications flow through a small set of corporations – and thus, under the profound influence of those companies and other institutions [...] One solution, espoused by some programmers, is to make the Internet more like it used to be – less centralized and more distributed." Examples of projects that attempt to contribute to the re-decentralization of the Internet include ArkOS, Diaspora, FreedomBox, IndieWeb, Namecoin, SAFE Network, twtxt and ZeroNet as well as advocacy group Redecentralize.org, which provides support for projects that aim to make the Web less centralized. In an interview with BBC Radio 5 Live one of the co-founders of Redecentralize.org explained that: Blockchain technology In blockchain, decentralization refers to the transfer of control and decision-making from a centralized entity (individual, organization, or group thereof) to a distributed network. Decentralized networks strive to reduce the level of trust that participants must place in one another, and deter their ability to exert authority or control over one another in ways that degrade the functionality of the network. Decentralized protocols, applications, and ledgers (used in Web3) could be more difficult for governments to regulate, similar to difficulties regulating BitTorrent (which is not a blockchain technology). Criticism Factors hindering decentralization include weak local administrative or technical capacity, which may result in inefficient or ineffective services; inadequate financial resources available to perform new local responsibilities, especially in the start-up phase when they are most needed; or inequitable distribution of resources. Decentralization can make national policy coordination too complex; it may allow local elites to capture functions; local cooperation may be undermined by any distrust between private and public sectors; decentralization may result in higher enforcement costs and conflict for resources if there is no higher level of authority. Additionally, decentralization may not be as efficient for standardized, routine, network-based services, as opposed to those that need more complicated inputs. If there is a loss of economies of scale in procurement of labor or resources, the expense of decentralization can rise, even as central governments lose control over financial resources. It has been noted that while decentralization may increase "productive efficiency" it may undermine "allocative efficiency" by making redistribution of wealth more difficult. Decentralization will cause greater disparities between rich and poor regions, especially during times of crisis when the national government may not be able to help regions needing it. See also Centralization Federalism Subsidiarity References Further reading Aucoin, Peter, and Herman Bakvis. The Centralization-Decentralization Conundrum: Organization and Management in the Canadian Government (IRPP, 1988). Campbell, Tim. Quiet Revolution: Decentralization and the Rise of Political Participation in Latin American Cities (University of Pittsburgh Press, 2003). Faguet, Jean-Paul. 
Decentralization and Popular Democracy: Governance from Below in Bolivia (University of Michigan Press, 2012). Fisman, Raymond and Roberta Gatti (2000). Decentralization and Corruption: Evidence Across Countries, Journal of Public Economics, Vol. 83, No. 3, pp. 325–45. Frischmann, Eva. Decentralization and Corruption. A Cross-Country Analysis (Grin Verlag, 2010). Miller, Michelle Ann, ed. Autonomy and Armed Separatism in South and Southeast Asia (Singapore: ISEAS, 2012). Miller, Michelle Ann. Rebellion and Reform in Indonesia. Jakarta's Security and Autonomy Policies in Aceh (London and New York: Routledge, 2009). Rosen, Harvey S., ed. Fiscal Federalism: Quantitative Studies (NBER Project Report, University of Chicago Press, 2008). Taylor, Jeff. Politics on a Human Scale: The American Tradition of Decentralism (Lanham, Md.: Lexington Books, 2013). Richard M. Burton, Børge Obel, Design Models for Hierarchical Organizations: Computation, Information, and Decentralization, Springer, 1995. Merilee Serrill Grindle, Going Local: Decentralization, Democratization, And The Promise of Good Governance, Princeton University Press, 2007. Daniel Treisman, The Architecture of Government: Rethinking Political Decentralization, Cambridge University Press, 2007. Ryan McMaken, Breaking Away: The Case for Secession, Radical Decentralization, and Smaller Polities, Ludwig von Mises Institute, 2022. Schakel, Arjan H. (2008), Validation of the Regional Authority Index, Regional and Federal Studies, Routledge, Vol. 18 (2). Decentralization, article at the "Restructuring local government project" of Dr. Mildred Warner, Cornell University, includes a number of articles on decentralization trends and theories. Robert J. Bennett, ed., Decentralization, Intergovernmental Relations and Markets: Towards a Post-Welfare Agenda, Clarendon, 1990, pp. 1–26. External links Organization design Cyberpunk themes Military tactics
Decentralization
Engineering
8,675
34,372,981
https://en.wikipedia.org/wiki/Bradsher%20cycloaddition
The Bradsher cycloaddition reaction, also known as the Bradsher cyclization reaction, is a form of the Diels–Alder reaction which involves the [4+2] addition of a common dienophile to a cationic aromatic azadiene such as acridizinium or isoquinolinium. The Bradsher cycloaddition was first reported by C. K. Bradsher and T. W. G. Solomons in 1958. References Name reactions Cycloadditions
Bradsher cycloaddition
Chemistry
108
39,259,185
https://en.wikipedia.org/wiki/VICAR%20file%20format
VICAR (Video Image Communication And Retrieval) is an image file format developed by NASA's Jet Propulsion Laboratory. It is used to transport images from a variety of space missions, including Cassini–Huygens and the Viking Orbiter. References External links Collection of images from the Cassini orbiter VICAR2PNG, a tool that converts VICAR images to PNG. Jet Propulsion Laboratory Computer file formats
VICAR file format
Technology
72
1,881,728
https://en.wikipedia.org/wiki/Grating%20light%20valve
The grating light valve (GLV) is a "micro projection" technology that operates using a dynamically adjustable diffraction grating. It competes with other light valve technologies such as Digital Light Processing (DLP) and liquid crystal on silicon (LCoS) for implementation in video projector devices such as rear-projection televisions. The use of microelectromechanical systems (MEMS) in optical applications, known as optical MEMS or micro-opto-electro-mechanical structures (MOEMS), has made it possible to combine mechanical, electrical, and optical components at a tiny scale. Silicon Light Machines (SLM), in Sunnyvale, CA, markets and licenses GLV technology under the capitalised trademarks "Grated Light Valve" and GLV, previously Grating Light Valve. The valve diffracts laser light using an array of tiny movable ribbons mounted on a silicon base. The GLV uses six ribbons as each pixel's diffraction grating. Electronic signals alter the alignment of the gratings, and this displacement controls the intensity of the diffracted light in a very smooth gradation. Brief history The light valve was initially developed at Stanford University, in California, by electrical engineering professor David M. Bloom, along with William C. Banyai, Raj Apte, Francisco Sandejas, and Olav Solgaard, professor in the Stanford Department of Electrical Engineering. In 1994, the start-up company Silicon Light Machines was founded by Bloom to develop and commercialize the technology. Cypress Semiconductor acquired Silicon Light Machines in 2000 and sold the company to Dainippon Screen. Before the acquisition by Dainippon Screen, several marketing articles were published in EETimes, EETimes China, EETimes Taiwan, Electronica Olgi, and Fibre Systems Europe, highlighting Cypress Semiconductor's new MEMS manufacturing capabilities. The company is now wholly owned by Dainippon Screen Manufacturing Co., Ltd. In July 2000, Sony announced the signing of a technology licensing agreement with SLM for the implementation of GLV technology in laser projectors for large venues, but by 2004 Sony announced the SRX-R110 front projector using its LCoS-based technology SXRD. SLM then partnered with Evans & Sutherland (E&S). Using GLV technology, E&S developed the E&S Laser Projector, designed for use in domes and planetariums. The E&S Laser Projector was incorporated into the Digistar 3 dome projection system. Technology The GLV device is built on a silicon wafer and consists of parallel rows of "highly reflective micro-ribbons" – ribbons a few μm in size with a top layer of aluminium – suspended above an air gap that is configured such that alternate ribbons (active ribbons are interlaced with static ribbons) can be dynamically actuated. Individual electrical connections to each active ribbon electrode provide for independent actuation. The ribbons and the substrate are electrically conductive so that the deflection of the ribbon can be controlled in an analog manner: When the voltage of the active ribbons is set to ground potential, all ribbons are undeflected, and the device acts as a mirror so the incident light returns along the same path. When a voltage is applied between the ribbon and base conductor, an electrical field is generated and deflects the active ribbon downward toward the substrate. This deflection can be as large as one-quarter of a wavelength, creating diffraction effects on incident light, which is then reflected at an angle different from that of the incident light. 
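The quarter-wavelength deflection and the resulting diffraction geometry can be illustrated with the standard grating equation. The following minimal Python sketch assumes a 532 nm source and a 2 μm ribbon pitch purely for illustration; neither figure is a published GLV specification:

```python
import math

WAVELENGTH_UM = 0.532   # assumed green laser line, for illustration only
RIBBON_PITCH_UM = 2.0   # illustrative ribbon pitch, not a published SLM figure

# A quarter-wavelength ribbon deflection gives a half-wavelength round-trip
# path difference, which maximizes the diffracted orders and nulls the
# specular (mirror) reflection.
deflection_nm = WAVELENGTH_UM * 1000 / 4
print(f"required ribbon deflection: {deflection_nm:.0f} nm")

# Alternate ribbons move, so one grating period spans two ribbon pitches.
period_um = 2 * RIBBON_PITCH_UM

# Grating equation at normal incidence: sin(theta_m) = m * wavelength / period
for m in (1, 2):
    s = m * WAVELENGTH_UM / period_um
    if s <= 1:
        print(f"order {m} diffracted at {math.degrees(math.asin(s)):.1f} degrees")
```

With these assumed numbers the sketch gives a 133 nm deflection and first- and second-order angles of about 7.6 and 15.4 degrees; the departure angle is fixed entirely by the wavelength and the grating period defined by the ribbon pitch.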
The wavelength to diffract is determined by the spatial frequency of the ribbons. As this spatial frequency is determined by the photolithographic mask used to form the GLV device in the CMOS fabrication process, the departure angles can be controlled very accurately, which is useful for optical switching applications. Switching from undeflected to maximum ribbon deflection can occur in 20 nanoseconds, which is a million times faster than conventional LCD display devices and about 1000 times faster than TI's DMD technology. This high speed is achieved thanks to the small size, small mass, and small excursion (a few hundred nanometers) of the ribbons. In addition, there is no physical contact between moving elements, which gives the GLV a lifetime of as long as 15 years of continuous operation (over 210 billion switching cycles). Applications The GLV technology has been applied to various products, from laser-based HDTV sets to computer-to-plate offset printing presses to DWDM components used for wavelength management. Applications of the GLV device in maskless photolithography have also been extensively investigated. Displays To build a display system using the GLV device, several approaches can be followed, ranging from a simple monochrome system using a single GLV device with a white light source, to a more complex solution using three GLV devices, one for each of the RGB primary sources, whose diffracted beams require additional optical filters to direct the light onto the screen, or an intermediate solution using a single white source with one GLV device. The light can be diffracted by the GLV device into an eyepiece for a virtual retinal display, or into an optical system for image projection onto a screen (front projector and rear projector). See also DLP Liquid crystal on silicon References External links Silicon Light Machines Dainippon Screen Manufacturing Co., Ltd. Sony Evans & Sutherland MEKO-European Display Data and Market Research HDTVExpert Defence Research and Development Canada Display technology Optoelectronics
Grating light valve
Engineering
1,190
77,031,966
https://en.wikipedia.org/wiki/Partial%20information%20decomposition
Partial information decomposition is an extension of information theory that aims to generalize the pairwise relations described by information theory to the interaction of multiple variables. Motivation Information theory can quantify the amount of information a single source variable X1 has about a target variable Y via the mutual information I(X1; Y). If we now consider a second source variable X2, classical information theory can only describe the mutual information of the joint variable (X1, X2) with Y, given by I(X1, X2; Y). In general, however, it would be interesting to know how exactly the individual variables X1 and X2 and their interactions relate to Y. Consider that we are given two independent fair binary source variables X1 and X2, and a target variable Y = XOR(X1, X2). In this case the total mutual information I(X1, X2; Y) = 1 bit, while the individual mutual informations I(X1; Y) = I(X2; Y) = 0 bit. That is, there is synergistic information arising from the interaction of X1 and X2 about Y, which cannot be easily captured with classical information theoretic quantities (a numerical check of this example is sketched below). Definition Partial information decomposition further decomposes the mutual information between the source variables and the target variable Y as I(X1, X2; Y) = Unq(X1; Y) + Unq(X2; Y) + Syn(X1, X2; Y) + Red(X1, X2; Y). Here the individual information atoms are defined as follows: Unq(X1; Y) is the unique information that X1 has about Y, which is not in X2 (and analogously for Unq(X2; Y)); Syn(X1, X2; Y) is the synergistic information about Y that is in the interaction of X1 and X2; Red(X1, X2; Y) is the redundant information about Y that is in both X1 and X2. There is, thus far, no universal agreement on how these terms should be defined, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature. Applications Despite the lack of universal agreement, partial information decomposition has been applied to diverse fields, including climatology, neuroscience, sociology, and machine learning. Partial information decomposition has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems, and may be relevant to formal theories of consciousness. See also Mutual information Total correlation Dual total correlation Interaction information References Information theory
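The XOR example above can be checked numerically. The following is a minimal sketch, assuming two independent fair bits and base-2 logarithms (so results are in bits); it computes only classical mutual information, not any particular partial-information-decomposition measure:

```python
from collections import Counter
import math

# The four equally likely joint outcomes of two fair bits and their XOR.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]

def mutual_information(pairs):
    """I(A; B) in bits, for a uniform distribution over the given (a, b) pairs."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
        for (a, b), c in p_ab.items()
    )

# Each source alone tells us nothing about Y, yet jointly they determine it:
print(mutual_information([(x1, y) for x1, _, y in samples]))         # 0.0
print(mutual_information([(x2, y) for _, x2, y in samples]))         # 0.0
print(mutual_information([((x1, x2), y) for x1, x2, y in samples]))  # 1.0
```

Under the decomposition above, this 1 bit would be attributed entirely to the synergy term, since neither source carries unique or redundant information here.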
Partial information decomposition
Mathematics,Technology,Engineering
346
166,202
https://en.wikipedia.org/wiki/White%20people
White is a racial classification of people generally used for those of predominantly European ancestry. It is also a skin color specifier, although the definition can vary depending on context, nationality, ethnicity and point of view. Description of populations as "White" in reference to their skin color is occasionally found in Greco-Roman ethnography and other ancient or medieval sources, but these societies did not have any notion of a White race or pan-European identity. The term "White race" or "White people", defined by their light skin among other physical characteristics, entered the major European languages in the later seventeenth century, when the concept of a "unified White" achieved greater acceptance in Europe, in the context of racialized slavery and social status in the European colonies. Scholarship on race distinguishes the modern concept from pre-modern descriptions, which focused on physical complexion rather than the idea of race. Prior to the modern era, no European peoples regarded themselves as "White", but rather defined their identity in terms of their religion, ancestry, ethnicity, or nationality. Contemporary anthropologists and other scientists, while recognizing the reality of biological variation between different human populations, regard the concept of a unified, distinguishable "White race" as a social construct with no scientific basis. Physical descriptions in antiquity According to anthropologist Nina Jablonski: The Ancient Egyptian (New Kingdom) funerary text known as the Book of Gates distinguishes "four groups" in a procession. These are the Egyptians, the Levantine and Canaanite peoples or "Asiatics", the "Nubians" and the "fair-skinned Libyans". The Egyptians are depicted as considerably darker-skinned than the Levantines (persons from what is now Lebanon, Israel, Palestine and Jordan) and Libyans, but considerably lighter than the Nubians (modern Sudan). The assignment of positive and negative connotations of White and Black to certain persons dates to a very early period in a number of Indo-European languages, but these differences were not necessarily applied to skin colors. Religious conversion was sometimes described figuratively as a change in skin color. Similarly, the Rigveda uses "black skin" as a metaphor for irreligiosity. Ancient Egyptians, Mycenaean Greeks and Minoans generally depicted women as having pale or white skin, while men were depicted as dark brown or tanned. As a result, men with pale or light skin, leukochrōs (λευκόχρως, "white-skinned"), could be considered weak and effeminate by Ancient Greek writers such as Plato and Aristotle. According to Aristotle, "Those whose skin is too dark are cowardly: witness Egyptians and the Ethiopians. Those whose skin is too light are equally cowardly: witness women. The skin color typical of the courageous should be halfway between the two." Similarly, Xenophon of Athens describes Persian prisoners of war as "white-skinned because they were never without their clothing, and soft and unused to toil because they always rode in carriages", and states that Greek soldiers as a result believed "that the war would be in no way different from having to fight with women." Classicist James H. Dee states that "the Greeks do not describe themselves as 'White people' or as anything else because they had no regular word in their color vocabulary for themselves." People's skin color did not carry useful meaning; what mattered was where they lived.
Herodotus described the Scythian Budini as having deep blue eyes and bright red hair, and the Egyptians – quite like the Colchians – as melanchroes ("dark-skinned") and curly-haired. He also gives possibly the first reference to the common Greek name of the tribes living south of Egypt, otherwise known as Nubians, which was Aithiopes ("burned-faced"). Later Xenophanes of Colophon described the Aethiopians as black and the Thracians as having red hair and blue eyes. In his description of the Scythians, Hippocrates states that the cold weather "burns their white skin and turns it ruddy." Modern racial hierarchies The term "White race" or "White people" entered the major European languages in the later seventeenth century, originating with the racialization of slavery at the time, in the context of the Atlantic slave trade and the enslavement of indigenous peoples in the Spanish Empire. It has repeatedly been ascribed to strains of blood, ancestry, and physical traits, and was eventually made into a subject of pseudoscientific research that culminated in scientific racism, which was later widely repudiated by the scientific community. According to historian Irene Silverblatt, "Race thinking… made social categories into racial truths." Bruce David Baum, citing the work of Ruth Frankenberg, states, "the history of modern racist domination has been bound up with the history of how European peoples defined themselves (and sometimes some other peoples) as members of a superior 'white race'." Alastair Bonnett argues that "white identity", as it is presently conceived, is an American project, reflecting American interpretations of race and history. According to Gregory Jay, a professor of English at the University of Wisconsin–Milwaukee: In the sixteenth and seventeenth centuries, "East Asian peoples were almost uniformly described as White, never as yellow." Michael Keevak's history Becoming Yellow finds that East Asians were redesignated as being yellow-skinned because "yellow had become a racial designation," and that the replacement of White with yellow as a description came through pseudoscientific discourse. A social category formed by colonialism A three-part racial scheme in color terms was used in seventeenth-century Latin America under Spanish rule. Irene Silverblatt traces "race thinking" in South America to the social categories of colonialism and state formation: "White, black, and brown are abridged, abstracted versions of colonizer, slave, and colonized." By the mid-seventeenth century, the novel term español ("Spaniard") was being equated in written documents with blanco, or "White". In Spain's American colonies, Black African, Indigenous (indio), Jewish, or morisco ancestry formally excluded individuals from the "purity of blood" (limpieza de sangre) requirements for holding any public office under the Royal Pragmatic of 1501. Similar restrictions applied in the military, some religious orders, colleges, and universities, leading to a nearly all-White priesthood and professional stratum. Blacks and mulattos were subject to tribute obligations and forbidden to bear arms, and black and mulatto women were forbidden to wear jewels, silk, or precious metals in early colonial Mexico and Peru. Those with dark skin and those of mixed African and European ancestry who had resources largely sought to evade these restrictions by passing as White. A brief royal offer allowing the privileges of Whiteness to be bought for a substantial sum of money attracted fifteen applicants before pressure from White elites ended the practice.
In the British colonies in North America and the Caribbean, the designation English or Christian was initially used in contrast to Native Americans or Africans. Early appearances of White race or White people in the Oxford English Dictionary begin in the seventeenth century. Historian Winthrop Jordan reports that, "throughout the [thirteen] colonies the terms Christian, free, English, and white were ... employed indiscriminately" in the seventeenth century as proxies for one another. In 1680, Morgan Godwyn "found it necessary to explain" to English readers that "in Barbados, 'white' was 'the general name for Europeans.'" Several historians report a shift towards greater use of White as a legal category alongside a hardening of restrictions on free or Christian blacks. White remained a more familiar term in the American colonies than in Britain well into the 1700s, according to historian Theodore W. Allen. Scientific racism Western studies of race and ethnicity in the eighteenth and nineteenth centuries developed into what would later be termed scientific racism. Prominent European pseudoscientists writing about human and natural difference included a White or West Eurasian race among a small set of human races and imputed physical, mental, or aesthetic superiority to this White category. These ideas were discredited by twentieth-century scientists. Eighteenth century beginnings In 1758, Carl Linnaeus proposed what he considered to be natural taxonomic categories of the human species. He distinguished between Homo sapiens and Homo sapiens europaeus, and he later added four geographical subdivisions of humans: white Europeans, red Americans, yellow Asians and black Africans. Although Linnaeus intended them as objective classifications, his descriptions of these groups included cultural patterns and derogatory stereotypes. In 1775, the naturalist Johann Friedrich Blumenbach asserted that "The white color holds the first place, such as is that of most European peoples. The redness of the cheeks in this variety is almost peculiar to it: at all events it is but seldom to be seen in the rest". In the various editions of his On the Natural Variety of Mankind, he categorized humans into four or five races, largely built on Linnaeus' classifications. But while, in 1775, he had grouped into his "first and most important" race "Europe, Asia this side of the Ganges, and all the country situated to the north of the Amoor, together with that part of North America, which is nearest both in position and character of the inhabitants", he somewhat narrows his "Caucasian variety" in the third edition of his text, of 1795: "To this first variety belong the inhabitants of Europe (except the Lapps and the remaining descendants of the Finns) and those of Eastern Asia, as far as the river Obi, the Caspian Sea and the Ganges; and lastly, those of Northern Africa." Blumenbach quotes various other systems by his contemporaries, ranging from two to seven races, authored by the authorities of that time, including, besides Linnæus, Georges-Louis Leclerc, Comte de Buffon, Christoph Meiners and Immanuel Kant. On the question of color, he conducts a rather thorough inquiry, considering also factors of diet and health, but ultimately believes that "climate, and the influence of the soil and the temperature, together with the mode of life, have the greatest influence". Blumenbach's conclusion, however, was that all races belonged to one single human species.
Blumenbach argued that physical characteristics like skin color, cranial profile, etc., depended on environmental factors, such as solarization and diet. Like other monogenists, Blumenbach held to the "degenerative hypothesis" of racial origins. He claimed that Adam and Eve were Caucasian inhabitants of Asia, and that other races came about by degeneration from environmental factors such as the sun and poor diet. He consistently believed that the degeneration could be reversed through proper environmental control and that all contemporary forms of man could revert to the original Caucasian race. Nineteenth and twentieth century: the "Caucasian race" Between the mid-nineteenth and mid-twentieth centuries, race scientists, including most physical anthropologists, classified the world's populations into three, four, or five races, which, depending on the authority consulted, were further divided into various sub-races. During this period the Caucasian race, named after people of the Caucasus Mountains but extending to all Europeans, figured as one of these races and was incorporated as a formal category of both pseudoscientific research and, in countries including the United States, social classification. There was never any scholarly consensus on the delineation between the Caucasian race, including the populations of Europe, and the Mongoloid one, including the populations of East Asia. Thus, Carleton S. Coon (1939) included the populations native to all of Central and Northern Asia under the Caucasian label, while Thomas Henry Huxley (1870) classified the same populations as Mongoloid, and Lothrop Stoddard (1920) classified as "brown" most of the populations of the Middle East, North Africa and Central Asia, and counted as "White" only the European peoples and their descendants, as well as some populations in parts of Anatolia and the northern areas of Morocco, Algeria and Tunisia. Some authorities, following Huxley (1870), distinguished the Xanthochroi or "light Whites" of Northern Europe from the Melanochroi or "dark Whites" of the Mediterranean. Although modern neo-Nazis often invoke Nazi iconography on behalf of White nationalism, Nazi Germany repudiated the idea of a unified White race, instead promoting Nordicism. In Nazi propaganda, Eastern European Slavs were often referred to as Untermensch ("subhuman"), and the relatively under-developed economic status of Eastern European countries such as Poland and the USSR was attributed to the racial inferiority of their inhabitants. Fascist Italy took the same view, and both of these nations justified their colonial ambitions in Eastern Europe on racist, anti-Slavic grounds. These nations were not alone in their view; during the long nineteenth century and interwar period, there were numerous cases, regardless of a person's position in the political spectrum, where European ethnic groups and nations labeled or treated other Europeans as members of another, somehow "inferior race". Between the Enlightenment era and the interwar period, racist worldviews fit comfortably within the liberal worldview and were nearly universal among liberal thinkers and politicians. Census and social definitions in different regions Definitions of White have changed over the years, including the official definitions used in many countries, such as the United States and Brazil.
Through the mid to late twentieth century, numerous countries had formal legal standards or procedures defining racial categories (see cleanliness of blood, casta, apartheid in South Africa, hypodescent). Some countries do not ask questions about race or colour at all in their census. Africa South Africa White Dutch people first arrived in South Africa around 1652. By the beginning of the eighteenth century, some 2,000 Europeans and their descendants were established in the region. Although these early Afrikaners represented various nationalities, including German peasants and French Huguenots, the community retained a thoroughly Dutch character. The Kingdom of Great Britain captured Cape Town in 1795 during the Napoleonic Wars and permanently acquired the Cape from the Netherlands in 1814. The first British immigrants numbered about 4,000 and were introduced in 1820. They represented groups from England, Ireland, Scotland, and Wales and were typically more literate than the Dutch. The discovery of diamonds and gold led to a greater influx of English speakers, who were able to develop the mining industry with capital unavailable to Afrikaners. They have been joined in subsequent decades by former colonials from elsewhere, such as Zambia and Kenya, and poorer British nationals looking to escape famine at home. Both Afrikaners and English have been politically dominant in South Africa in the past; due to the controversial racial order under apartheid, the nation's predominantly Afrikaner government became a target of condemnation by other African states and the site of considerable dissension between 1948 and 1991. There were 4.6 million Whites in South Africa in 2011, down from an all-time high of 5.2 million in 1995, following a wave of emigration commencing in the late twentieth century. However, many returned over time. Asia Hong Kong In the 2021 census of Hong Kong, 61,582 people identified as white, representing 0.8% of the total population. Philippines According to Spanish colonial statistics, 5% of the Philippine population in the 1700s had partial Spanish ancestry. Australia and Oceania Australia The 2021 Australian census does not use the term "white" on its census form; instead, results showed 54.7% of the population identifying with a European ancestry. From 1788, when the first British colony in Australia was founded, until the early nineteenth century, most immigrants to Australia were English, Scottish, Welsh and Irish convicts. These were augmented by small numbers of free settlers from the British Isles and other European countries. However, until the mid-nineteenth century, there were few restrictions on immigration, although members of ethnic minorities tended to be assimilated into the Anglo-Celtic populations. People of many nationalities, including many non-White people, emigrated to Australia during the gold rushes of the 1850s. However, the vast majority was still White, and the gold rushes inspired the first racist activism and policy, directed mainly at Chinese immigrants. From the late nineteenth century, the Colonial/State and later federal governments of Australia restricted all permanent immigration to the country by non-Europeans. These policies became known as the "White Australia policy", which was consolidated and enabled by the Immigration Restriction Act 1901, but was never universally applied.
Immigration inspectors were empowered to ask immigrants to take dictation in any European language as a test for admittance, a test used in practice to exclude people from Asia, Africa, and some European and South American countries, depending on the political climate. Although they were not the prime targets of the policy, it was not until after World War II that large numbers of southern European and eastern European immigrants were admitted for the first time. Following this, the White Australia Policy was relaxed in stages: non-European nationals who could demonstrate European descent were admitted (e.g., descendants of European colonizers and settlers from Latin America or Africa), as were autochthonous inhabitants (such as Maronites, Assyrians and Mandeans) of various nations from the Middle East, most significantly from Lebanon and to a lesser degree Iraq, Syria and Iran. In 1973, all immigration restrictions based on race and geographic origin were officially terminated. Australia enumerated its population by race between 1911 and 1966, by racial origin in 1971 and 1976, and by self-declared ancestry alone since 1981, meaning no attempt is now made to classify people according to skin color. As of the 2016 census, it was estimated by the Australian Human Rights Commission that around 58% of the Australian population were Anglo-Celtic Australians, with 18% being of other European origins, a total of 76% for European ancestries as a whole. New Zealand According to the 2023 New Zealand census, 67.8% of the population, or 3,383,742 people, identified with a European ethnic origin, down from 70.2% in 2018 and 90.6% in 1966. In 1926, 95.0% of the population was of European descent. The establishment of British colonies in Australia from 1788 and the boom in whaling and sealing in the Southern Ocean brought many Europeans to the vicinity of New Zealand. Whalers and sealers were often itinerant, and the first real settlers were missionaries and traders in the Bay of Islands area from 1809. Early visitors to New Zealand included whalers, sealers, missionaries, mariners, and merchants, attracted to natural resources in abundance. They came from the Australian colonies, Great Britain and Ireland, Germany (forming the next biggest immigrant group after the British and Irish), France, Portugal, the Netherlands, Denmark, the United States, and Canada. In the 1860s, the discovery of gold started a gold rush in Otago. By 1860 more than 100,000 British and Irish settlers lived throughout New Zealand. The Otago Association actively recruited settlers from Scotland, creating a definite Scottish influence in that region, while the Canterbury Association recruited settlers from the south of England, creating a definite English influence over that region. In the 1870s, MP Julius Vogel borrowed millions of pounds from Britain to help fund capital development such as a nationwide rail system, lighthouses, ports, and bridges, and encouraged mass migration from Britain. By 1870 the non-Māori population reached over 250,000. Other smaller groups of settlers came from Germany, Scandinavia, and other parts of Europe as well as from China and India, but British and Irish settlers made up the vast majority and did so for the next 150 years. Other Oceania Europe France White people in France are a broad racial-based, or skin color-based, social category in French society.
In statistical terms, the French government banned the collection of racial or ethnic information in 1978, and the National Institute of Statistics and Economic Studies (INSEE), therefore, does not provide census data on White residents or citizens in France. French courts have, however, heard cases and issued rulings that identify White people as a demographic group within the country. White people in France are defined, or discussed, as a racial or social grouping from a diverse and often conflicting range of political and cultural perspectives: in anti-racism activism in France, in right-wing political dialogue or propaganda, and in other sources. Background Whites in France have been studied with regard to the group's historical involvement in French colonialism; how "whites in France have played a major international role in colonizing areas of the globe such as the African continent." They have been described as a privileged social class within the country, comparatively sheltered from racism and poverty. It has been reported that "most white people in France only know the banlieues as a kind of caricature". Banlieues, outer-city regions across the country that are increasingly identified with minority groups, often have residents who are disproportionately affected by unemployment and poverty. The lack of census data collected by the INED and INSEE for Whites in France has been analyzed, from some academic perspectives, as masking racial issues within the country, or as a form of false racial color blindness. Writing for Al Jazeera, French journalist Rokhaya Diallo suggests that "a large portion of White people in France are not used to having frank conversations about race and racism." According to political sociologist Eduardo Bonilla-Silva, "whites in France lie to themselves and the world by proclaiming that they do not have institutional racism in their nation." Sociologist Crystal Marie Fleming has written: "While many whites in France refuse to acknowledge institutionalized racism and white supremacy, there is widespread belief in the specter of 'anti-white racism'". Use in right-wing politics Accusations of anti-White racism, suggestions of the displacement of, or lack of representation for, the group, and rhetoric surrounding Whites in France experiencing poverty have been, at times, utilised by various right-wing political elements in the country. University of Lyon political scientist Angéline Escafré-Dublet has written that "the equivalent to a White backlash in France can be traced through the debate over the purported neglect of the 'poor Whites' in France". In 2006, French politician Jean-Marie Le Pen suggested there were too many "players of colour" in the France national football team, pointing out that 7 of the 23-player squad were White. In 2020, French politician Nadine Morano stated that French actress Aïssa Maïga, who was born in Senegal, should "go back to Africa" if she "was not happy with seeing so many white people in France". Republic of Ireland According to the 2022 Irish census, 4,444,145 people, or 87.4% of the total population, declared their race as "White Irish" or Other White; this was a decline from 92.4% in 2016 and 94.24% in 2011. People who identified as "White Irish" in 2022 numbered 3,893,056, or 76.5% of the total population, a decline from 87.4% in 2006. Malta As of the 2021 census, 89.1% self-identified as being of Caucasian racial origin. Maltese-born natives make up the majority of the island, with 386,280 people out of a total population of 519,562.
However, there are minorities, the largest of which by European birthplace were the United Kingdom (15,082), Italy (13,361) and Serbia (5,935). Among the non-Maltese population, 58.1% identified as Caucasian. United Kingdom Historical White identities Before the Industrial Revolution in Europe, whiteness may have been associated with social status. Aristocrats may have had less exposure to the sun, and therefore a pale complexion may have been associated with status and wealth. This may be the origin of "blue blood" as a description of royalty, the skin being so lightly pigmented that the blueness of the veins could be clearly seen. The change in the meaning of White that occurred in the colonies (see above) to distinguish Europeans from non-Europeans did not apply to the 'home land' countries (England, Ireland, Scotland and Wales). Whiteness therefore retained a meaning associated with social status for the time being, and, during the nineteenth century, when the British Empire was at its peak, many of the bourgeoisie and aristocracy developed extremely negative attitudes to those of lower social rank. Edward Lhuyd discovered that Welsh, Gaelic, Cornish and Breton are all part of the same language family, which he termed the "Celtic family" and which was distinct from Germanic English; this can be seen in the context of the emerging romantic nationalism, which was also prevalent among those of Celtic descent. Just as race reified whiteness in America, Africa, and Asia, capitalism without social welfare reified whiteness with regard to social class in nineteenth-century Britain and Ireland; this social distinction of whiteness became, over time, associated with racial differences. For example, George Sims in his 1883 book How the Poor Live wrote of "a dark continent that is within easy reach of the General Post Office ... the wild races who inhabit it will, I trust, gain public sympathy as easily as [other] savage tribes". Modern and official use From the early 1700s, Britain received a small-scale immigration of black people due to the transatlantic slave trade. The oldest Chinese community in Britain (as well as in Europe) dates from the nineteenth century. Since the end of World War II, substantial immigration from the African, Caribbean and South Asian (namely the British Raj) colonies changed the picture more radically, while accession to the European Union brought with it heightened immigration from Central and Eastern Europe. Today the Office for National Statistics uses the term White as an ethnic category. The terms White British, White Irish, White Scottish and White Other are used. These classifications rely on individuals' self-identification, since it is recognised that ethnic identity is not an objective category. Socially, in the UK White usually refers only to people of native British, Irish and European origin. As a result of the 2011 census, the White population stood at 85.5% in England (White British: 79.8%), at 96% in Scotland (White British: 91.8%), and at 95.6% in Wales (White British: 93.2%), while in Northern Ireland 98.28% identified themselves as White, amounting to a total of 87.2% White population (or White British and Irish). North America Bermuda (U.K.) At the 2016 census, the number of Bermudians who identified as white was 19,466, or 31 percent of the total population.
White settlers made up the entirety of Bermuda's population from settlement (which began accidentally in 1609 with the wreck of the Sea Venture) until the middle of the seventeenth century, other than a black slave and an Indian slave brought in for a very short-lived pearl fishery in 1616, and remained the majority until some point in the eighteenth century. In 2010, census data found that White Bermudians accounted for 31%, including 10% native Bermudians and 21% foreign-born. Canada Of the over 36 million Canadians enumerated in 2021, approximately 25 million reported being "White", representing 69.8 percent of the population. In the 1995 Employment Equity Act, "'members of visible minorities' means persons, other than Aboriginal peoples, who are non-Caucasian in race or non-white in colour". In the 2001 Census, persons who selected Chinese, South Asian, African, Filipino, Latin American, Southeast Asian, Arab, West Asian, Middle Eastern, Japanese, or Korean were included in the visible minority population. A separate census question on "cultural or ethnic origin" (question 17) does not refer to skin color. Costa Rica The 2022 census counted a total population of 5,044,197 people. The 2022 census also recorded ethnic or racial identity for all groups separately for the first time in more than ninety-five years, since the 1927 census. Section IV, question 7 offered the options indigenous, Black or Afro-descendant, Mulatto, Chinese, Mestizo, White, and other. White people (including mestizo) make up 94%, 3% are black people, 1% are Amerindians, and 1% are Chinese. White Costa Ricans are mostly of Spanish ancestry, but there are also significant numbers of Costa Ricans descended from British, Italian, German, English, Dutch, French, Irish, Portuguese and Polish families, as well as a sizable Jewish (namely Ashkenazi and Sephardic) community. Cuba White people in Cuba make up 64.1% of the total population according to the 2012 census, with the majority being of diverse Spanish descent. However, after the mass exodus resulting from the Cuban Revolution in 1959, the number of white Cubans actually residing in Cuba diminished. Today various records claiming the percentage of Whites in Cuba are conflicting and uncertain; some reports (usually coming from Cuba) still give a lower but similar figure to the pre-1959 one, at 65%, while others (usually from outside observers) report 40–45%. Despite most White Cubans being of Spanish descent, many others are of French, Portuguese, German, Italian and Russian descent. During the eighteenth, nineteenth, and early part of the twentieth century, large waves of Canarians, Catalans, Andalusians, Castilians, and Galicians emigrated to Cuba. Many European Jews have also immigrated there, with some of them being Sephardic. Between 1901 and 1958, more than a million Spaniards arrived in Cuba from Spain; many of these and their descendants left after Castro's communist regime took power. Historically, Chinese descendants in Cuba were classified as White. In 1953, it was estimated that 72.8% of Cubans were of European ancestry, mainly of Spanish origin, 12.4% of African ancestry, 14.5% of both African and European ancestry (mulattos), and 0.3% of the population was of Chinese and/or East Asian descent (officially called "amarilla" or "yellow" in the census). However, after the Cuban revolution, due to a combination of factors, mainly mass exodus to Miami, United States, a drastic decrease in immigration, and interracial reproduction, Cuba's demography changed.
As a result, those of complete European ancestry and those of pure African ancestry have decreased, the mixed population has increased, and the Chinese (or East Asian) population has, for all intents and purposes, disappeared. The Institute for Cuban and Cuban American Studies at the University of Miami says the present Cuban population is 38% White and 62% Black/Mulatto. The Minority Rights Group International says that "An objective assessment of the situation of Afro-Cubans remains problematic due to scant records and a paucity of systematic studies both pre- and post-revolution. Estimates of the percentage of people of African descent in the Cuban population vary enormously, ranging from 33.9 per cent to 62 per cent". Dominican Republic White Dominicans are 18.7% of the Dominican Republic's population, according to a 2022 survey by the United Nations Population Fund. The majority of white Dominicans have ancestry from the first European settlers to arrive in Hispaniola in 1492 and are descendants of the Spanish and Portuguese who settled the island during colonial times, as well as the French who settled in the seventeenth and eighteenth centuries. About 9.2% of the Dominican population claims a European immigrant background, according to the 2021 Fondo de Población de las Naciones Unidas survey. El Salvador In 2013, White Salvadorans were a minority ethnic group in El Salvador, accounting for 12.7% of the country's population. An additional 86.3% of the population were mestizo, having mixed Amerindian and European ancestry. Guatemala In 2010, 18.5% of Guatemalans belonged to the White ethnic group, with 41.7% of the population being Mestizo, and 39.8% of the population belonging to the 23 Indigenous groups. It is difficult to make an accurate census of Whites in Guatemala, because the country categorizes all non-indigenous people as mestizo or ladino, and a large majority of White Guatemalans consider themselves mestizos or ladinos. By the nineteenth century the majority of immigrants were Germans, many of whom were granted fincas and coffee plantations in Cobán, while others went to Quetzaltenango and Guatemala City. Many young Germans married mestiza and indigenous Q'eqchi' women, which caused a gradual whitening. There was also immigration of Belgians to Santo Tomas, and this contributed to mixing with black and mestiza women in that region. Honduras As of 2013, Hondurans of solely White ancestry are a small minority in Honduras, accounting for 1% of the country's population. An additional 90% of the population is mestizo, having mixed indigenous and European ancestry. Mexico White Mexicans are individuals in Mexico who identify as white, often due to their physical appearance or their recognition of European or West Asian ancestry. The Mexican government conducts ethnic censuses that allow individuals to identify as "White", but the specific results of these censuses are not made public. Instead, the government releases data on the percentage of "light-skinned Mexicans" in the country, with nationwide surveys conducted by Mexico's National Institute of Statistics and the National Council to Prevent Discrimination reporting results of about one-third. The term "light-skinned Mexican" is preferred by both the government and media to describe individuals in Mexico who possess European physical traits when discussing ethno-racial dynamics. However, "White Mexican" is still used at times.
Europeans began arriving in Mexico during the Spanish conquest of the Aztec Empire, and while during the colonial period most European immigration was Spanish (mostly from northern provinces such as Cantabria, Navarra, Galicia and the Basque Country), in the nineteenth and twentieth centuries European and European-derived populations from North and South America did immigrate to the country. According to twentieth- and twenty-first-century academics, large-scale intermixing between the European immigrants and the native Indigenous peoples produced a Mestizo group which would become the overwhelming majority of Mexico's population by the time of the Mexican Revolution. However, according to church and census registers from colonial times, the majority of Spanish men married Spanish women. These registers also call into question other narratives held by contemporary academics, such as the claim that European immigrants who arrived in Mexico were almost exclusively men, or that "pure Spanish" people were all part of a small powerful elite; Spaniards were often the most numerous ethnic group in colonial cities, and there were menial workers and people in poverty who were of complete Spanish origin. Another ethnic group in Mexico, the Mestizos, is composed of people with varying degrees of European and indigenous ancestry, with some showing a European genetic ancestry higher than 90%. However, the criteria for defining what constitutes a Mestizo vary from study to study, as in Mexico a large number of White people have historically been classified as Mestizos, because after the Mexican Revolution the Mexican government began defining ethnicity on cultural standards (mainly the language spoken) rather than racial ones, in an effort to unite all Mexicans under the same racial identity. Estimates of Mexico's White population differ greatly in both methodology and percentages given; extra-official sources such as the World Factbook, which use the 1921 census results as the base of their estimations, calculate Mexico's White population as only 10% (the results of the 1921 census, however, have been contested by various historians and deemed inaccurate). Other sources suggest rather higher percentages: using the presence of blond hair as a reference to classify a Mexican as White, the Metropolitan Autonomous University of Mexico calculated the percentage of said ethnic group at 23% within said institution. With a similar methodology, the American Sociological Association obtained a percentage of 18.8%. Another study, made by the University College London in collaboration with Mexico's National Institute of Anthropology and History, found that the frequencies of blond hair and light eyes in Mexicans are 18% and 28% respectively. A study performed in hospitals of Mexico City suggests that socioeconomic factors influence the frequency of Mongolian spots among newborns, as evidenced by the higher prevalence of 85% in newborns from a public institution, typically associated with lower socioeconomic status, compared to a 33% prevalence in newborns from private hospitals, which generally cater to families with higher socioeconomic status. The Mongolian spot appears with a very high frequency (85–95%) in Native American and African children, but can be present in some individuals in Mediterranean populations. The skin lesion reportedly almost always appears on South American and Mexican children who are racially Mestizos, while having a very low frequency (5–10%) in European children.
According to the Mexican Social Security Institute (IMSS), nationwide around half of Mexican babies have the Mongolian spot. Mexico's northern and western regions have the highest percentages of white population, with the majority of the people either having no native admixture or being of predominantly European ancestry. In the north and west of Mexico, the indigenous tribes were substantially smaller, and unlike those found in central and southern Mexico they were mostly nomadic, therefore remaining isolated from colonial population centers, with hostilities between them and Mexican colonists often taking place. This eventually led the northeast region of the country to become the region with the highest proportion of whites during the Spanish colonial period, although recent migration waves have been changing its demographic trends. A number of settlements in which European immigrants have maintained their original culture and language survive to this day and are spread all over Mexican territory; among the most notable groups are the Mennonites, who have colonies in states as varied as Chihuahua and Campeche, and the town of Chipilo in the state of Puebla, inhabited nearly in its totality by descendants of Italian immigrants who still speak their Venetian-derived dialect. Nicaragua As of 2013, the White ethnic group in Nicaragua accounts for 17% of the country's population. An additional 69% of the population is Mestizo, having mixed indigenous and European ancestry. In the nineteenth century, Nicaragua received European and North American immigration, mostly from Germany, England and the United States, and these immigrants often married native Nicaraguan women. Some Germans were given land to grow coffee in Matagalpa, Jinotega and Esteli, although most Europeans settled in San Juan del Norte. In the late seventeenth century, pirates from England, France and Holland mixed with the indigenous population and started a settlement at Bluefields (Mosquito Coast). Puerto Rico (U.S.) Puerto Rico had a small stream of predominantly European immigration. Puerto Ricans of Spanish, Italian and French descent comprise the majority. According to the most recent 2020 census, the number of people who identified as "White alone" was 536,044, with an additional 24,548 non-Hispanic, for a total of 560,592, or 17.1% of the population. Previously, in 1899, one year after the United States acquired the island, 61.8% or 589,426 people self-identified as White. One hundred years later (2000), the total had increased to 80.5%, or 3,064,862, due to a change in race perceptions, mainly because of efforts by Puerto Rican elites to portray Puerto Rico as the "White island of the Antilles", partly as a response to scientific racism. Hundreds came from Corsica, France, Italy, Portugal, Ireland, Scotland, and Germany, along with large numbers of immigrants from Spain. This was the result of land granted by Spain under the Real Cedula de Gracias de 1815 (Royal Decree of Graces of 1815), which allowed European Catholics to settle on the island with a certain amount of free land. Between 1960 and 1990, the census questionnaire in Puerto Rico did not ask about race or color. Racial categories therefore disappeared from the dominant discourse on the Puerto Rican nation. However, the 2000 census included a racial self-identification question in Puerto Rico and, for the first time since 1950, allowed respondents to choose more than one racial category to indicate mixed ancestry. (Only 4.2% chose two or more races.)
With few variations, the census of Puerto Rico used the same questionnaire as on the U.S. mainland. According to census reports, most islanders responded to the new federally mandated categories on race and ethnicity by declaring themselves "White"; few declared themselves to be Black or some other race. However, it was estimated that 20% of White Puerto Ricans may have Black ancestry. Trinidad and Tobago United States The cultural boundaries separating White Americans from other racial or ethnic categories are contested and always changing. Professor David R. Roediger of the University of Illinois suggests that the construction of the White race in the United States was an effort to mentally distance slave owners from slaves. By the eighteenth century, White had become well established as a racial term. Author John Tehranian has noted the changing classifications of immigrant ethnic groups in American history. At various times each of the following groups has allegedly been excluded from being considered White, despite generally having been considered legally White under the US census and US naturalization law: Germans, Greeks, White Hispanics, Arabs, Iranians, Afghans, Irish, Italians, Jews of European and Mizrahi descent, Slavs, and Spaniards. On several occasions Finns were "racially" discriminated against in their early years of immigration and not considered European but "Asian". Some believed that they were of Mongolian ancestry rather than "native" European origin, due to the Finnish language belonging to the Uralic and not the Indo-European language family. During American history, the process of officially being defined as White by law often came about in court disputes over the pursuit of citizenship. The Immigration Act of 1790 offered naturalization only to "any alien, being a free white person". In at least 52 cases, people denied the status of White by immigration officials sued in court for status as White people. By 1923, courts had vindicated a "common-knowledge" standard, concluding that "scientific evidence" was incoherent. Legal scholar John Tehranian says that this was a "performance-based" standard, relating to religious practices, education, intermarriage, and a community's role in the United States. In 1923, the Supreme Court decided in United States v. Bhagat Singh Thind that people of Indian descent were not White men, and thus not eligible for citizenship. While Thind was a high caste Hindu born in the northern Punjab region and classified by certain scientific authorities as of the Aryan race, the court conceded that he was not White or Caucasian, since the word Aryan "has to do with linguistic and not at all with physical characteristics" and "the average man knows perfectly well that there are unmistakable and profound differences" between Indians and White people. In United States v. Cartozian (1925), an Armenian immigrant successfully argued (and the Supreme Court agreed) that his nationality was White, in contradistinction to other people of the Near East (Kurds, Turks, and Arabs in particular), on the basis of their Christian religious traditions. In the conflicting rulings In re Hassan (1942) and Ex parte Mohriez, United States District Courts found that Arabs did not, and did, qualify as White, respectively, under immigration law. In the early twenty-first century, the relationship between some ethnic groups and whiteness remains complex.
In particular, some Jewish and Arab individuals both self-identify and are considered part of the White American racial category, but others with the same ancestry feel they are not White and may not always be perceived as White by American society. The United States Census Bureau proposed, but withdrew, plans to add a new category for Middle Eastern and North African peoples in the U.S. Census 2020. Specialists disputed whether this classification should be considered a White ethnicity or a race. According to Frank Sweet, "various sources agree that, on average, people with 12 percent or less admixture appear White to the average American and those with up to 25 percent look ambiguous (with a Mediterranean skin tone)". The current U.S. Census definition includes as White "a person having origins in any of the original peoples of Europe, the Middle East or North Africa." The U.S. Department of Justice's Federal Bureau of Investigation describes White people as "having origins in any of the original peoples of Europe, the Middle East, or North Africa", through racial categories used in the Uniform Crime Reports Program, adopted from the Statistical Policy Handbook (1978) published by the Office of Federal Statistical Policy and Standards, U.S. Department of Commerce. The "White" category in the UCR includes non-black Hispanics. White Americans made up nearly 90% of the population in 1950. A report from the Pew Research Center in 2008 projects that by 2050, non-Hispanic White Americans will make up 47% of the population, down from 67% in 2005. According to a study on the genetic ancestry of Americans, White Americans (stated "European Americans") on average are 98.6% European, 0.2% African and 0.2% Native American. Whites born in Southern states with higher proportions of African-American populations tend to have higher percentages of African ancestry; for instance, according to the 23andMe database, up to 13% of self-identified White American Southerners have greater than 1% African ancestry, and those born in the states with the highest African-American populations tended to have the highest percentages of hidden African ancestry. Robert P. Stuckert, member of the Department of Sociology and Anthropology at Ohio State University, has said that today the majority of the descendants of African slaves are White. Black author Rich Benjamin, in his book Searching for Whitopia: An Improbable Journey to the Heart of White America, reveals how racial divides and White decline, both real and perceived, shape democratic and economic urgencies in America. The book examines how White flight, and the fear of White decline, affect the country's political debates and policy-making, including housing, lifestyle, social psychology, gun control, and community. Benjamin says that such issues as fiscal policy or immigration or "Best Place to Live" lists, which might be considered race-neutral, are also defined by racial anxiety over perceived White decline. One-drop rule The "one-drop rule" – that a person with any amount of known black African ancestry (however small or invisible) is considered black – is a classification that was used in parts of the United States. It is a colloquial term for a set of laws passed by 18 U.S. states between 1910 and 1931. Such laws were declared unconstitutional in 1967, when the Supreme Court ruled on anti-miscegenation laws while hearing Loving v.
Virginia; it also found that Virginia's Racial Integrity Act of 1924, based on enforcing the one-drop rule in classifying vital records, was unconstitutional. The one-drop rule attempted to create a binary system, classifying all persons as either Black or White regardless of a person's physical appearance. Previously, persons had sometimes been classified as mulatto or mixed-race, including on censuses up to 1930. Some were also recorded as Indian. Some people with a high proportion of European ancestry could pass as "White", as noted above. This binary approach contrasts with the more flexible social structures present in Latin America (derived from the Spanish colonial era system), where there were less clear-cut divisions between various ethnicities and people are often classified not only by their appearance but by their class. As a result of centuries of having children with White people, the majority of African Americans have some European admixture, and many people long accepted as White also have some African ancestry. Among the most notable examples of the latter is President Barack Obama, who is believed to have been descended from an early African enslaved in America, recorded as "John Punch", through his mother's apparently White line. In the twenty-first century, writer and editor Debra Dickerson renewed questions about the one-drop rule, saying that "easily one-third of black people have White DNA". She says that, in ignoring their European ancestry, African Americans are denying their full multi-racial identities. Singer Mariah Carey, who is multi-racial, was publicly described as "another White girl trying to sing black", but in an interview with Larry King, she said that, despite her physical appearance and having been raised primarily by her White mother, she did not "feel White". Since the late twentieth century, genetic testing has provided many Americans, both those who identify as White and those who identify as black, with more nuanced and complex information about their genetic backgrounds. Other Caribbean South America Argentina Argentina, along with other areas of new settlement like Canada, Australia, Brazil, New Zealand, the United States or Uruguay, is considered a country of immigrants where the vast majority originated from Europe. White people can be found in all areas of the country, but especially in the central-eastern region (Pampas), the central-western region (Cuyo), the southern region (Patagonia) and the north-eastern region (Litoral). White Argentines are mainly descendants of immigrants who came from Europe and the Middle East in the late nineteenth and early twentieth centuries. After the initial Spanish colonists, waves of European settlers came to Argentina from the late nineteenth to mid-twentieth centuries. Major contributors included Italy (initially from Piedmont, Veneto and Lombardy, later from Campania, Calabria, and Sicily) and Spain (most are Galicians and Basques, but there are Asturians, Cantabrians, Catalans, and Andalusians).
Smaller but significant numbers of immigrants include Germans, primarily Volga Germans from Russia but also Germans from Germany, Switzerland, and Austria; French, who mainly came from the Occitania region of France; Portuguese, who had already formed an important community since colonial times; Slavic groups, most of whom were Croats, Bosniaks and Poles, but also Ukrainians, Belarusians, Russians, Bulgarians, Serbs and Montenegrins; Britons, mainly from England and Wales; Irish, who migrated due to the Great Irish Famine or prior famines; and Scandinavians from Sweden, Denmark, Finland, and Norway. Smaller waves of settlers from Australia, South Africa, and the United States can be traced in Argentine immigration records. By the 1910s, after immigration rates peaked, over 30 percent of the country's population was from outside Argentina, and over half of Buenos Aires' population was foreign-born. However, the 1914 National Census revealed that around 80% of the national population were either European immigrants, their children or grandchildren. Among the remaining 20 percent (those descended from the population residing locally before this immigrant wave took shape in the 1870s), around a third were White. European immigration continued to account for over half the nation's population growth during the 1920s, and was again significant (albeit in a smaller wave) following World War II. It is estimated that Argentina received over 6 million European immigrants during the period 1857–1940. Since the 1960s, increasing immigration from bordering countries to the north (especially from Bolivia and Paraguay, which have Amerindian and Mestizo majorities) has lessened that majority somewhat. Criticism of the national census states that data has historically been collected using the category of national origin rather than race in Argentina, leading to undercounting of Afro-Argentines and Mestizos. África Vive (Living Africa), a black rights group in Buenos Aires, with the support of the Organization of American States, financial aid from the World Bank, and Argentina's census bureau, is working to add an "Afro-descendants" category to the 2010 census. The 1887 national census was the last in which blacks were included as a separate category before it was eliminated by the government. Bolivia There is no present-day data, as the Bolivian census does not record racial identity for white people. However, past census data showed that in 1900, people who self-identified as "Blanco" (white) composed 12.7%, or 231,088, of the total population. This was the last time data on race was collected. There were 529 Italians, 420 Spaniards, 295 Germans, 279 French, 177 Austrians, 141 English and 23 Belgians living in Bolivia. Brazil Recent censuses in Brazil are conducted on the basis of self-identification. According to the 2022 Census, White Brazilians totaled 88,252,121 people and made up 43.5% of the Brazilian population. As a term, "White" in Brazil is generally applied to people of European descent. The term may also encompass other people, such as Brazilians of West Asian descent and, in some contexts, East Asians, though Brazilians of East Asian descent are in other contexts classified as "Yellow" (amarela). The census shows a trend of fewer Brazilians of a different descent (most likely mixed) identifying as White people as their social status increases. Nevertheless, light-skinned Mulattoes and Mestizos with European features were also historically deemed more closely related to "whiteness" than unmixed Blacks.
Chile Scholarly estimates of the White population in Chile vary dramatically, ranging from 20% to 52%. According to a study by the University of Chile, about 30% of the Chilean population is Caucasian, while the 2011 Latinobarómetro survey shows that some 60% of Chileans consider themselves White. During colonial times in the eighteenth century, an important flow of emigrants from Spain populated Chile, mostly Basques, who vitalized the Chilean economy, rose rapidly in the social hierarchy and became the political elite that still dominates the country. An estimated 1.6 million (10%) to 3.2 million (20%) Chileans have a surname (one or both) of Basque origin. The Basques liked Chile because of its great similarity to their native land: similar geography, cool climate, and the presence of fruits, seafood, and wine. Chile was not an attractive place for European migrants in the nineteenth and twentieth centuries simply because it was far from Europe and difficult to reach. Chile experienced a tiny but steady arrival of Spaniards, Italians, Irish, French, Greeks, Germans, English, Scots, Croats and Ashkenazi Jews, in addition to immigration from other Latin American countries. The original arrival of Spaniards was the most radical demographic change brought about by Europeans in Chile, since there was never a period of massive immigration, in contrast to neighboring nations such as Argentina and Uruguay. The actual scale of immigration does not support certain national chauvinistic discourse, which claims that Chile, like Argentina or Uruguay, should be considered one of the "White" Latin American countries, in contrast to the racial mixture that prevails in the rest of the continent. However, it is undeniable that immigrants have played a major role in Chilean society. Between 1851 and 1924 Chile received only 0.5% of the European immigration flow to Latin America, compared to the 46% received by Argentina, 33% by Brazil, 14% by Cuba, and 4% by Uruguay. This was because most of the migration occurred across the Atlantic before the construction of the Panama Canal. Europeans preferred to stay in countries closer to their homelands instead of taking the long trip through the Straits of Magellan or across the Andes. In 1907, European-born immigrants composed 2.4% of the Chilean population, which fell to 1.8% in 1920, and 1.5% in 1930. After the failed liberal revolution of 1848 in the German states, a significant German immigration took place, laying the foundation for the German-Chilean community. Sponsored by the Chilean government to "civilize" and colonize the southern region, these Germans (including German-speaking Swiss, Silesians, Alsatians and Austrians) settled mainly in Valdivia, Llanquihue and Los Ángeles. The Chilean Embassy in Germany estimated that 150,000 to 200,000 Chileans are of German origin. Another historically significant immigrant group is the Croatians. The Croatian Chileans, their descendants today, number an estimated 380,000 persons, the equivalent of 2.4% of the population. Other authors claim, on the other hand, that close to 4.6% of the Chilean population has some Croatian ancestry. Over 700,000 Chileans may have British (English, Scottish or Welsh) origin, 4.5% of Chile's population. Chileans of Greek descent are estimated at 90,000 to 120,000. Most of them live either in the Santiago area or in the Antofagasta area, and Chile is one of the five countries with the most descendants of Greeks in the world. 
Descendants of Swiss immigrants number around 90,000, and it is estimated that about 5% of the Chilean population has some French ancestry. Estimates of the number of Chileans of Italian descent range from 184,000 to 800,000. Other groups of European descendants are found in smaller numbers. Colombia The Colombian government does not carry out official racial censuses, nor self-identification racial censuses as is done in Argentina, so the figures shown are usually based on data for the populations considered "non-ethnic", that is, Whites and Mestizos. According to the 2018 census, approximately 87.6% of the Colombian population are White or Mestizo. Many Spaniards began their explorations searching for gold, while other Spaniards established themselves as leaders of the native social organizations, teaching natives the Christian faith and the ways of their civilization. Catholic priests would provide education for Native Americans that otherwise was unavailable. One hundred years after the first Spanish settlement, 90 percent of all Native Americans in Colombia had died. The majority of the deaths of Native Americans were caused by diseases such as measles and smallpox, which were spread by European settlers. Many Native Americans were also killed by armed conflicts with European settlers. Between 1540 and 1559, 8.9 percent of the residents of Colombia were of Basque origin. It has been suggested that the present-day incidence of business entrepreneurship in the region of Antioquia is attributable to the Basque immigration and Basque character traits. Few Colombians of distant Basque descent are aware of their Basque ethnic heritage. In Bogotá, there is a small colony of thirty to forty families who emigrated as a consequence of the Spanish Civil War or because of different opportunities. Basque priests were the ones who introduced handball into Colombia. Basque immigrants in Colombia were devoted to teaching and public administration. In the first years of the Andean multinational company, Basque sailors navigated as captains and pilots on the majority of the ships until the country was able to train its own crews. In December 1941 the United States government estimated that there were 4,000 Germans living in Colombia. There were some Nazi agitators in Colombia, such as Barranquilla businessman Emil Prufurt. Colombia invited Germans who were on the U.S. blacklist to leave. SCADTA, a Colombian-German air transport corporation that was established by German expatriates in 1919, was the first commercial airline in the Western Hemisphere. The Italians arrived on the Colombian coast and quickly moved towards the expanding agricultural areas. There, some of them achieved success in the commercialization of livestock, agricultural products, and imported goods, which later led to the transfer of their lucrative activities to Barranquilla. Some important buildings were created by Italians in the nineteenth century, like the famous Colón Theater of the capital. It is one of the most representative theatres of Colombia, with neoclassical architecture: it was built by the Italian architect Pietro Cantini, opened in 1892, and has more than 2,400 square metres (26,000 sq ft) of space for 900 people. This famous Italian architect also contributed to the construction of the Capitolio Nacional of the capital. Oreste Sindici was an Italian-born Colombian musician and composer, who composed the music for the Colombian national anthem in 1887. Oreste Sindici died in Bogotá on 12 January 1904, due to severe arteriosclerosis. 
In 1937 the Colombian government honored his memory. After the Second World War, Italian emigration to Colombia was directed primarily toward Bogotá, Cali and Medellín. There are Italian schools in Bogotá (the "Leonardo da Vinci" and "Alessandro Volta" institutes), Medellín ("Leonardo da Vinci") and Barranquilla ("Galileo Galilei"). The Italian government estimates that there are at least 2 million Colombians of Italian descent, making them the second most numerous group of European origin in the country after the Spanish. The first and largest wave of immigration from the Middle East began around 1880 and lasted through the first two decades of the twentieth century. They were mainly Maronite Christians from Greater Syria (Syria and Lebanon) and Palestine, fleeing the then Ottoman-controlled territories. Syrians, Palestinians, and Lebanese have continued since then to settle in Colombia. Due to poor existing records it is impossible to know the exact number of Lebanese and Syrians that immigrated to Colombia. A figure of 5,000–10,000 from 1880 to 1930 may be reliable. Whatever the figure, Syrians and Lebanese are perhaps the biggest immigrant group next to the Spanish since independence. Those who left their homeland in the Middle East to settle in Colombia left for different reasons, such as religious, economic, and political ones. Some left to experience the adventure of migration. After Barranquilla and Cartagena, Bogotá, together with Cali, was among the cities with the largest number of Arabic-speaking residents in Colombia in 1945. The Arabs that went to Maicao were mostly Sunni Muslim with some Druze and Shiites, as well as Orthodox and Maronite Christians. The mosque of Maicao is the second largest mosque in Latin America. Middle Easterners are generally called turcos (Turkish). Ecuador According to the most recent national census, in 2022, 2.2% of Ecuadorians self-identified as European Ecuadorian, a decrease from 6.1% in 2010. Guyana In 2016, 0.3% of Guyana's population was of European descent, predominantly Portuguese Guyanese. Paraguay Peru According to the 2017 census, 5.9% or 1.3 million (1,336,931) people 12 years of age and above self-identified as White. There were 619,402 (5.5%) males and 747,528 (6.3%) females. This was the first time a question on ethnic origin had been asked. The regions with the highest proportion of self-identified Whites were La Libertad (10.5%), Tumbes and Lambayeque (9.0% each), Piura (8.1%), Callao (7.7%), Cajamarca (7.5%), Lima Province (7.2%) and Lima Region (6.0%). Suriname In 2012, 1,667 people, or 0.3% of the population, identified as white. Many Dutch settlers left Suriname after independence in 1975, which diminished Suriname's Dutch population. Currently there are around 1,000 boeroes left in Suriname, and 3,000 outside Suriname. Uruguay Different estimates state that Uruguay's population of 3.4 million is composed of 88% to 93% White Uruguayans. Though Uruguay has welcomed immigrants from around the world, its population largely consists of people of European origin, mainly Spaniards and Italians. Other European immigrants include Jews from Eastern and Central Europe. According to the 2006 National Survey of Homes by the Uruguayan National Institute of Statistics, 94.6% self-identified as having a White background, 9.1% chose black ancestry, and 4.5% chose an Amerindian ancestry (people surveyed were allowed to choose more than one option). 
Venezuela According to the official Venezuelan census, the term "White" involves external features such as light skin and the shape and color of hair and eyes, among other factors. The meaning and usage of the term "White" has nonetheless varied depending on the time period and area, leaving its precise definition somewhat confused. The 2011 Venezuelan Census states that "White" in Venezuela is used to describe Venezuelans of European origin. The 2011 National Population and Housing Census states that 43.6% of the Venezuelan population (approx. 13.1 million people) identify as White. Genetic research by the University of Brasília shows an average admixture of 60.6% European, 23.0% Amerindian and 16.3% African ancestry in Venezuelan populations. The majority of White Venezuelans are of Spanish, Italian, Portuguese and German descent. Nearly half a million European immigrants, mostly from Spain (as a consequence of the Spanish Civil War), Italy, and Portugal, entered the country during and after World War II, attracted by a prosperous, rapidly developing country where educated and skilled immigrants were welcomed. Spaniards arrived in Venezuela during the colonial period. Most of them were from Andalusia, Galicia, the Basque Country and the Canary Islands. Until the last years of World War II, a large part of the European immigrants to Venezuela came from the Canary Islands, and their cultural impact was significant, influencing the development of Castilian in the country, its gastronomy, and customs. With the beginning of oil operations during the first decades of the twentieth century, citizens and companies from the United States, United Kingdom, and Netherlands established themselves in Venezuela. Later, in the middle of the century, there was a new wave of immigrants from Spain (mainly from Galicia, Andalusia and the Basque Country), Italy (mainly from southern Italy and Venice) and Portugal (from Madeira), as well as new immigrants from Germany, France, England, Croatia, the Netherlands, and other European countries, encouraged by the program of immigration and colonization implemented by the government. See also Caucasoid Criollo people Demographics of Europe Ethnic groups in Europe Ethnic groups in West Asia European diaspora Westerners White demographic decline White flight White identity References Bibliography Allen, Theodore, The Invention of the White Race, 2 vols. Verso, London 1994. Baum, Bruce David, The Rise and Fall of the Caucasian Race: A Political History of Racial Identity. NYU Press, New York and London 2006. Brodkin, Karen, How Jews Became White Folks and What That Says About Race in America, Rutgers, 1999. Foley, Neil, The White Scourge: Mexicans, Blacks, and Poor Whites in Texas Cotton Culture (Berkeley: University of California Press, 1997). Gossett, Thomas F., Race: The History of an Idea in America, New ed. (New York: Oxford University, 1997). Guglielmo, Thomas A., White on Arrival: Italians, Race, Color, and Power in Chicago, 1890–1945, 2003. Hannaford, Ivan, Race: The History of an Idea in the West (Baltimore: Johns Hopkins University, 1996). Ignatiev, Noel, How the Irish Became White, Routledge, 1996. Jackson, F. L. C. (2004). Book chapter: British Medical Bulletin 2004; 69: 215–35. Retrieved 29 December 2006. Jacobson, Matthew Frye, Whiteness of a Different Color: European Immigrants and the Alchemy of Race, Harvard, 1999. Oppenheimer, Stephen (2006). The Origins of the British: A Genetic Detective Story. 
Constable and Robinson, London. Smedley, Audrey, Race in North America: Origin and Evolution of a Worldview, 2nd ed. (Boulder: Westview, 1999). Tang, Hua, Tom Quertermous, Beatriz Rodriguez, Sharon L. R. Kardia, Xiaofeng Zhu, Andrew Brown, James S. Pankow, Michael A. Province, Steven C. Hunt, Eric Boerwinkle, Nicholas J. Schork, and Neil J. Risch (2005). Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies. Am. J. Hum. Genet. 76: 268–275. Further reading External links Race (human categorization) White
White people
Biology
14,087
1,372,610
https://en.wikipedia.org/wiki/Coleman%E2%80%93Mandula%20theorem
In theoretical physics, the Coleman–Mandula theorem is a no-go theorem stating that spacetime and internal symmetries can only combine in a trivial way. This means that the charges associated with internal symmetries must always transform as Lorentz scalars. Some notable exceptions to the no-go theorem are conformal symmetry and supersymmetry. It is named after Sidney Coleman and Jeffrey Mandula who proved it in 1967 as the culmination of a series of increasingly generalized no-go theorems investigating how internal symmetries can be combined with spacetime symmetries. The supersymmetric generalization is known as the Haag–Łopuszański–Sohnius theorem. History In the early 1960s, the global SU(3) flavour symmetry associated with the eightfold way was shown to successfully describe the hadron spectrum for hadrons of the same spin. This led to efforts to expand the global symmetry to a larger SU(6) symmetry mixing both flavour and spin, an idea similar to that previously considered in nuclear physics by Eugene Wigner in 1937 for an SU(4) symmetry. This non-relativistic model united the vector and pseudoscalar mesons of different spin into a 35-dimensional multiplet and it also united the baryon octet and the baryon decuplet into a 56-dimensional multiplet. While this was reasonably successful in describing various aspects of the hadron spectrum, from the perspective of quantum chromodynamics this success is merely a consequence of the flavour and spin independence of the force between quarks. There were many attempts to generalize this non-relativistic model into a fully relativistic one, but these all failed. At the time it was also an open question whether there existed a symmetry for which particles of different masses could belong to the same multiplet. Such a symmetry could then account for the mass splitting found in mesons and baryons. It was only later understood that this is instead a consequence of the differing up-, down-, and strange-quark masses which leads to a breakdown of the internal flavour symmetry. These two motivations led to a series of no-go theorems to show that spacetime symmetries and internal symmetries could not be combined in any but a trivial way. The first notable theorem was proved by William McGlinn in 1964, with a subsequent generalization by Lochlainn O'Raifeartaigh in 1965. These efforts culminated with the most general theorem by Sidney Coleman and Jeffrey Mandula in 1967. Little notice was given to this theorem in subsequent years. As a result, the theorem played no role in the early development of supersymmetry, which instead emerged in the early 1970s from the study of dual resonance models, which are the precursor to string theory, rather than from any attempts to overcome the no-go theorem. Similarly, the Haag–Łopuszański–Sohnius theorem, a supersymmetric generalization of the Coleman–Mandula theorem, was proved in 1975 after the study of supersymmetry was already underway. Theorem Consider a theory that can be described by an S-matrix and that satisfies the following conditions: The symmetry group is a Lie group which includes the Poincaré group as a subgroup, Below any mass, there are only a finite number of particle types, Any two-particle state undergoes some reaction at almost all energies, The amplitudes for elastic two-body scattering are analytic functions of the scattering angle at almost all energies and angles, A technical assumption that the group generators are distributions in momentum space. 
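Before the full statement that follows, the shape of the conclusion can be sketched in Lie-algebraic terms. The notation here (P for translations, M for Lorentz transformations, B for the internal charges) is a conventional assumption of this sketch, not taken from the original paper:

```latex
% Internal charges transform as Lorentz scalars and commute with
% all spacetime generators:
[P_\mu, B_a] = 0, \qquad [M_{\mu\nu}, B_a] = 0,
% so the full symmetry algebra splits as a direct sum
\mathfrak{g} \cong \mathfrak{iso}(1,3) \oplus \mathfrak{g}_{\mathrm{internal}}.
```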
The Coleman–Mandula theorem states that the symmetry group of this theory is necessarily a direct product of the Poincaré group and an internal symmetry group. The last technical assumption is unnecessary if the theory is described by a quantum field theory and is only needed to apply the theorem in a wider context. A kinematic argument for why the theorem should hold was provided by Edward Witten. The argument is that Poincaré symmetry acts as a very strong constraint on elastic scattering, leaving only the scattering angle unknown. Any additional spacetime dependent symmetry would overdetermine the amplitudes, making them nonzero only at discrete scattering angles. Since this conflicts with the assumption of the analyticity of the scattering angles, such additional spacetime dependent symmetries are ruled out. Limitations Conformal symmetry The theorem does not apply to a theory of massless particles, with these allowing for conformal symmetry as an additional spacetime dependent symmetry. In particular, the algebra of this group is the conformal algebra, which consists of the Poincaré algebra together with the commutation relations for the dilatation generator and the special conformal transformations generator. Supersymmetry The Coleman–Mandula theorem assumes that the only symmetry algebras are Lie algebras, but the theorem can be generalized by instead considering Lie superalgebras. Doing this allows for additional anticommuting generators known as supercharges which transform as spinors under Lorentz transformations. This extension gives rise to the super-Poincaré algebra, with the associated symmetry known as supersymmetry. The Haag–Łopuszański–Sohnius theorem is the generalization of the Coleman–Mandula theorem to Lie superalgebras, stating that supersymmetry is the only new spacetime dependent symmetry that is allowed. For a theory with massless particles, the theorem is again evaded by conformal symmetry, which can be present in addition to supersymmetry, giving a superconformal algebra. Low dimensions In a one- or two-dimensional theory the only possible scattering is forwards and backwards scattering, so analyticity in the scattering angle is no longer possible and the theorem no longer holds. Spacetime dependent internal symmetries are then possible, such as in the massive Thirring model, which can admit an infinite tower of conserved charges of ever higher tensorial rank. Quantum groups Models with nonlocal symmetries whose charges do not act on multiparticle states as if they were a tensor product of one-particle states evade the theorem. Such an evasion is found more generally for quantum group symmetries, which avoid the theorem because the corresponding algebra is no longer a Lie algebra. Other limitations For other spacetime symmetries besides the Poincaré group, such as theories with a de Sitter background or non-relativistic field theories with Galilean invariance, the theorem no longer applies. It also does not hold for discrete symmetries, since these are not Lie groups, or for spontaneously broken symmetries, since these do not act on the S-matrix level and thus do not commute with the S-matrix. See also Extended supersymmetry Supergroup Supersymmetry algebra Notes Further reading Coleman–Mandula theorem on Scholarpedia Sascha Leonhardt on the Coleman–Mandula theorem Quantum field theory Supersymmetry Theorems in quantum mechanics No-go theorems
Coleman–Mandula theorem
Physics,Mathematics
1,429
37,279,342
https://en.wikipedia.org/wiki/Zariski%27s%20lemma
In algebra, Zariski's lemma, proved by Oscar Zariski in 1947, states that, if a field K is finitely generated as an associative algebra over another field k, then K is a finite field extension of k (that is, it is also finitely generated as a vector space). An important application of the lemma is a proof of the weak form of Hilbert's Nullstellensatz: if I is a proper ideal of the polynomial ring k[t1, ..., tn] (k an algebraically closed field), then I has a zero; i.e., there is a point x in k^n such that f(x) = 0 for all f in I. (Proof: replacing I by a maximal ideal 𝔪 containing it, we can assume 𝔪 is maximal. Let A = k[t1, ..., tn] and φ: A → A/𝔪 be the natural surjection. By the lemma A/𝔪 is a finite extension of k. Since k is algebraically closed that extension must be k itself. Then for any f in 𝔪, f(x) = φ(f) = 0 where x = (φ(t1), ..., φ(tn)); that is to say, x is a zero of 𝔪.) The lemma may also be understood from the following perspective. In general, a ring R is a Jacobson ring if and only if every finitely generated R-algebra that is a field is finite over R. Thus, the lemma follows from the fact that a field is a Jacobson ring. Proofs Two direct proofs are given in Atiyah–MacDonald; one is due to Zariski and the other uses the Artin–Tate lemma. For Zariski's original proof, see the original paper. Another direct proof in the language of Jacobson rings is given below. The lemma is also a consequence of the Noether normalization lemma. Indeed, by the normalization lemma, K is a finite module over the polynomial ring k[x1, ..., xd] where x1, ..., xd are elements of K that are algebraically independent over k. But since K has Krull dimension zero and since an integral ring extension (e.g., a finite ring extension) preserves Krull dimensions, the polynomial ring must have dimension zero; i.e., d = 0. The following characterization of a Jacobson ring contains Zariski's lemma as a special case. Recall that a ring is a Jacobson ring if every prime ideal is an intersection of maximal ideals. (When A is a field, A is a Jacobson ring and the theorem below is precisely Zariski's lemma.) Theorem: Let A be a ring. Then the following are equivalent: 1. A is a Jacobson ring; 2. every finitely generated A-algebra B that is a field is finite over A. Proof: 2. ⇒ 1.: Let 𝔭 be a prime ideal of A and set B = A/𝔭. We need to show the Jacobson radical of B is zero. To that end, let f be a nonzero element of B. Let 𝔪 be a maximal ideal of the localization B[f⁻¹]. Then B[f⁻¹]/𝔪 is a field that is a finitely generated A-algebra and so is finite over A by assumption; thus it is finite over B and so is finite over the subring B/(𝔪 ∩ B). By integrality, 𝔪 ∩ B is a maximal ideal of B not containing f. 1. ⇒ 2.: Since a factor ring of a Jacobson ring is Jacobson, we can assume B contains A as a subring. Then the assertion is a consequence of the next algebraic fact: (*) Let A ⊆ B be integral domains such that B is finitely generated as an A-algebra. Then there exists a nonzero a in A such that every ring homomorphism φ: A → K, K an algebraically closed field, with φ(a) ≠ 0 extends to a ring homomorphism φ′: B → K. Indeed, choose a maximal ideal 𝔪 of A not containing a. Writing K for some algebraic closure of A/𝔪, the canonical map φ: A → A/𝔪 ⊆ K extends to φ′: B → K. Since B is a field, φ′ is injective and so B is algebraic (thus finite algebraic) over A/𝔪. We now prove (*). If B contains an element that is transcendental over A, then it contains a polynomial ring over A to which φ extends (without a requirement on a) and so we can assume B is algebraic over A (by Zorn's lemma, say). Let x1, ..., xr be the generators of B as A-algebra. Then each xi satisfies a relation of the form ai0 xi^ni + ai1 xi^(ni−1) + ... + ai,ni = 0, where ni depends on i and aij ∈ A. Set a = a10 a20 ⋯ ar0. Then every xi is integral over A[a⁻¹], and so B[a⁻¹] is integral over A[a⁻¹]. Now given φ: A → K with φ(a) ≠ 0, we first extend it to φ′: A[a⁻¹] → K by setting φ′(a⁻¹) = φ(a)⁻¹. Next, since B[a⁻¹] is integral over A[a⁻¹] and K is algebraically closed, φ′ extends further to a ring homomorphism B[a⁻¹] → K (a homomorphism into an algebraically closed field always extends along an integral extension). Restrict the last map to B to finish the proof. Notes Sources Lemmas in algebra Theorems about algebras
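For quick reference, the two statements at the heart of the article can be written compactly as follows; this is a sketch in standard notation rather than a quotation from the cited sources:

```latex
\textbf{Zariski's lemma.}\quad K \text{ a field that is a finitely generated }
k\text{-algebra} \;\Longrightarrow\; [K : k] < \infty.

\textbf{Weak Nullstellensatz.}\quad k = \overline{k},\;
I \subsetneq k[t_1, \dots, t_n] \text{ a proper ideal}
\;\Longrightarrow\; \exists\, x \in k^n \text{ such that } f(x) = 0
\text{ for all } f \in I.
```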
Zariski's lemma
Mathematics
859
43,562,466
https://en.wikipedia.org/wiki/Mockery
Mockery or mocking is the act of insulting or making light of a person or other thing, sometimes merely by taunting, but often by making a caricature, purporting to engage in imitation in a way that highlights unflattering characteristics. Mockery can be done in a lighthearted and gentle way, but can also be cruel and hateful, such that it "conjures images of corrosion, deliberate degradation, even subversion; thus, 'to laugh at in contempt, to make sport of' (OED)". Mockery appears to be unique to humans, and serves a number of psychological functions, such as reducing the perceived imbalance of power between authority figures and common people. Examples of mockery can be found in literature and the arts. Etymology and function The root word mock traces to the Old French mocquer (later moquer), meaning to scoff at, laugh at, deride, or fool, although the origin of mocquer is itself unknown. Labeling a person or thing as a mockery may also be used to imply that it or they are a poor-quality or counterfeit version of some genuine other, as in the usages "mockery of man" or "the trial was a mockery of justice". Mockery in psychology Australian linguistics professor Michael Haugh differentiated between teasing and mockery by emphasizing that, while the two do have substantial overlap in meaning, mockery does not connote repeated provocation or the intentional withholding of desires, and instead implies a type of imitation or impersonation where a key element is that the nature of the act places a central importance on the expectation that it not be taken seriously. Specifically in examining non-serious forms of jocular mockery, Haugh summarized the literature on the features of mockery as consisting of the following: Laughter, especially on the part of the speaker, acting as a cue that others are invited to laugh also Phonetic practices, such as a "smile voice" and modulating "sing-song" pitch which mark actions "as laughable", denote an exaggerated level of animation, and indicate irony Facial cues, such as smiling, winking or other intentionally exaggerated expressions which mark actions as laughable, ironic, and non-serious Bodily cues, such as covering the face, or clapping Exaggeration, emphasizing extreme cases and making claims obviously above or below what is reasonable Incongruity through allusions and presuppositions to create implicit contrast Formulaicity and "topic shift markers" to indicate an end to non-seriousness and a return to serious interaction In turn, the audience of the mockery may reply with a number of additional cues to indicate that the actions are understood as non-serious, including laughter, explicit agreement, or a continuation or elaboration of the mockery. Jayne Raisborough and Matt Adams alternatively identified mockery as a type of disparagement humour mainly available as a tool of privileged groups, which ensures normative responses from non-privileged groups. They emphasize that mockery may be used ironically and comedically, to identify moral stigma and signal moral superiority, but also as a form of social encouragement, allowing those who provide social cues to do so in a way that maintains a level of social distance between criticism and critic through the use of parody and satire. In this way, mockery can function as a "more superficially 'respectable', morally sensitive way of doing class-based distinction than less civil disgust." 
Mockery in philosophy The philosopher Baruch Spinoza took a dim view of mockery, contending that it rests "upon a false opinion and proclaim[s] the imperfection of the mocker". He reasoned that either the object of the mockery is not ridiculous, in which case the mocker is wrong in treating it in such a way, or it is ridiculous, in which case mockery is not an effective tool for improvement. Though the mocker reveals that they recognize the imperfection, they do nothing to resolve it using good reason. Writing in his Tractatus Politicus, Spinoza declared that mockery was a form of hatred and sadness "which can never be converted into joy". Catholic Bishop Francis de Sales, in his 1877 Introduction to the Devout Life, decried mockery as a sin. Alternatively, while philosophers John Locke and Anthony Ashley-Cooper, 3rd Earl of Shaftesbury, agreed on the importance of critical inquiry regarding the views of authority figures, Shaftesbury saw an important role specifically for mockery in this process. Shaftesbury held that "a moderate use of mockery could correct vices," and that mockery was among the most important challenges for truth, because "if an opinion cannot stand mockery" then it similarly would be "revealed to be ridiculous". As such all serious claims of knowledge should be subjected to it. This was a view echoed by René Descartes, who saw mockery as a "trait of a good man" which "bears witness to the cheerfulness of his temper ... tranquility of his soul ... [and] the ingenuity of his mind." In philosophical argument, the appeal to ridicule (also called appeal to mockery, ab absurdo, or the horse laugh) is an informal fallacy which presents an opponent's argument as absurd, ridiculous, or humorous, and therefore not worthy of serious consideration. Appeal to ridicule is often found in the form of comparing a nuanced circumstance or argument to a laughably commonplace occurrence or to some other irrelevancy on the basis of comedic timing, wordplay, or making an opponent and their argument the object of a joke. This is a rhetorical tactic that mocks an opponent's argument or standpoint, attempting to inspire an emotional reaction (making it a type of appeal to emotion) in the audience and to highlight any counter-intuitive aspects of that argument, making it appear foolish and contrary to common sense. This is typically done by making a mockery of the argument's foundation, representing it in an uncharitable and oversimplified way. Mockery in the arts Mockery is one form of the literary genre of satire, and it has been noted that "[t]he mock genres and the practice of literary mockery goes back at least as far as the sixth century BCE". Mockery, as a genre, can also be directed towards other artistic genres. The English comedy troupe Monty Python was considered to be particularly adept at the mockery of both authority figures and people making a pretense to competence beyond their abilities. One such sketch, involving a nearly-deaf hearing aid salesman and a nearly-blind contact lens salesman, depicts them as "both desperately unsuccessful, and exceedingly hilarious. The comicality of such characters is largely due to the fact that the objects of mockery themselves create a specific context in which we find that they deserve being ridiculed". 
In the United States, the television show Saturday Night Live has been noted as having "a history of political mockery", and it has been proposed that "[h]istorical and rhetorical analyses argue that this mockery matters" with respect to political outcomes. Development in humans Mockery appears to be a uniquely human activity. Although several species of animal are observed to engage in laughter, humans are the only animal observed to use laughter to mock one another. An examination of the appearance of the capacity for mockery during childhood development indicates that mockery "does not appear as an expectable moment in early childhood, but becomes more prominent as the latency child enters the social world of sibling rivalry, competition, and social interaction". As it develops, it is "displayed in forms of schoolyard bullying and certainly in adolescence with the attempt to achieve independence while negotiating the conflicts arising out of encounters with authority." One common element of mockery is caricature, a wide-ranging practice of imitating and exaggerating aspects of the subject being mocked. It has been suggested that caricature produced "survival advantages of rapid decoding of facial information", and at the same time that it provides "some of our best humor and, when suffused with too much aggression, may reach the form of mockery". Mockery serves a number of social functions: Richard Borshay Lee reported mockery as a facet of Bushmen culture designed to keep individuals who are successful in certain regards from becoming arrogant. When weaker people are mocked by stronger people, this can constitute a form of bullying. See also Bullying Irony Roast (comedy) Sarcasm Taunting Tongue-in-cheek Ad hominem References External links Abuse Harassment and bullying Human behavior
Mockery
Biology
1,788
1,082,550
https://en.wikipedia.org/wiki/Kronecker%E2%80%93Weber%20theorem
In algebraic number theory, it can be shown that every cyclotomic field is an abelian extension of the rational number field Q, having Galois group of the form (Z/nZ)×. The Kronecker–Weber theorem provides a partial converse: every finite abelian extension of Q is contained within some cyclotomic field. In other words, every algebraic integer whose Galois group is abelian can be expressed as a sum of roots of unity with rational coefficients. For example, √5 = e^(2πi/5) − e^(4πi/5) − e^(6πi/5) + e^(8πi/5). The theorem is named after Leopold Kronecker and Heinrich Martin Weber. Field-theoretic formulation The Kronecker–Weber theorem can be stated in terms of fields and field extensions. Precisely, the Kronecker–Weber theorem states: every finite abelian extension of the rational numbers Q is a subfield of a cyclotomic field. That is, whenever an algebraic number field has a Galois group over Q that is an abelian group, the field is a subfield of a field obtained by adjoining a root of unity to the rational numbers. For a given abelian extension K of Q there is a minimal cyclotomic field that contains it. The theorem allows one to define the conductor of K as the smallest integer n such that K lies inside the field generated by the n-th roots of unity. For example the quadratic fields Q(√d) have as conductor the absolute value of their discriminant, a fact generalised in class field theory. History The theorem was first stated by Kronecker in 1853, though his argument was not complete for extensions of degree a power of 2. Weber published a proof in 1886, but this had some gaps and errors that were pointed out and corrected by Neumann in 1981. The first complete proof was given by Hilbert in 1896. Generalizations The local Kronecker–Weber theorem states that any abelian extension of a local field can be constructed using cyclotomic extensions and Lubin–Tate extensions; several different proofs of it have been given. Hilbert's twelfth problem asks for generalizations of the Kronecker–Weber theorem to base fields other than the rational numbers, and asks for the analogues of the roots of unity for those fields. A different approach to abelian extensions is given by class field theory. References External links Class field theory Cyclotomic fields Theorems in algebraic number theory
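As a numerical sanity check of the example above (the abelian quadratic extension Q(√5) sitting inside the 5th cyclotomic field), the following short Python snippet evaluates the quadratic Gauss sum; it is an illustration of the mathematics, not code from the article's sources:

```python
import cmath

# sqrt(5) as an integral combination of 5th roots of unity (a quadratic
# Gauss sum): signs follow the quadratic residues modulo 5 (1 and 4 are
# residues; 2 and 3 are not).
z = cmath.exp(2j * cmath.pi / 5)   # primitive 5th root of unity
s = z + z**4 - z**2 - z**3
print(s.real, 5 ** 0.5)            # both print ~2.23606797749979
print(abs(s.imag) < 1e-12)         # the imaginary part vanishes: True
```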
Kronecker–Weber theorem
Mathematics
461
68,689,367
https://en.wikipedia.org/wiki/Morowali%20Industrial%20Park
The Indonesia Morowali Industrial Park (IMIP) is an industrial park hosting primarily nickel-related industries in Morowali Regency, Central Sulawesi, Indonesia. It is the largest nickel processing site in Indonesia, which is the world's top nickel producer. In 2023, Wired magazine called it "the world's epicenter for nickel production." The park is a joint venture between Indonesian mining company Bintang Delapan Group and the Chinese firm Tsingshan Holding Group. IMIP's activities have seriously polluted the local environment and disrupted nearby communities. Workers and advocacy organizations report poor working conditions, with a number of deaths and injuries from industrial accidents. The park employs about 81,000 people. Background Indonesia has the world's largest nickel reserves. Nickel is used to manufacture lithium-ion batteries used in electric vehicles, and Indonesia is positioned to be a key supplier of the mineral for the booming electric vehicle battery market. The country banned export of unprocessed nickel ores around 2013, and has signed several deals with battery manufacturers. In 2022, Indonesia produced 1.6 million tons of nickel, nearly half of world production. Also in 2022, the administration of president Joko Widodo relaxed environmental and worker safety regulations to attract foreign investment. There is a private airport inside Morowali Industrial Park, called IMIP Airport, a 20-minute drive from the park's facilities; commercial flights connect via Manado. Description The industrial park is located in Bahodopi district of Morowali Regency, Central Sulawesi. It covers 3,000 hectares, and is served by a seaport, an airport, and a 2 GW coal power plant. It is operated by PT Indonesia Morowali Industrial Park. There were reportedly 18 companies operating at IMIP with a total investment of USD 15.3 billion in October 2022, with the park's management predicting 40 companies by 2025. By December 2024, the total investment was claimed to have reached USD 34.3 billion. The park's estimated total capacity for stainless steel production was 3 million metric tonnes per year in 2020. In 2019, IMIP had plans to develop electric vehicle battery plants. It generated USD 6.6 billion in exports for Indonesia in 2019. History Morowali had been the site of a nickel mining operation by Vale's subsidiary Inco since 1968. In August 2013, the Indonesian Ministry of Industry announced plans to develop a 1,500 hectare nickel-oriented industrial park in Morowali. An MoU for the USD 1.5 billion project was signed between Tsingshan Holding Group and Bintang Delapan Group in October 2013, and the first stone for the industrial park's construction was placed by Minister of Industry Saleh Husin on 5 December 2014. Companies began to operate starting in April 2015, with president Joko Widodo inaugurating a smelter on 28 May 2015. In 2018, it was estimated that IMIP produced 50% of Indonesia's nickel products. On 24 December 2023, an explosion occurred at a Tsingshan-owned nickel smelter inside the facility, killing at least 18 people and injuring 44 others. The dead were identified as ten Indonesians and eight Chinese workers. By 31 December, two more workers had died of their injuries. Impact Local Environmentalists report that pollution from IMIP has destroyed fish populations and local forests. Local residents also experience frequent disruptions in power, phone, and internet services due to oversaturation. 
The population has grown very quickly due to the influx of workers; sanitation services have not been provided, and there are open sewers. Large numbers of commuting workers have also overloaded local roads and caused daily traffic jams lasting hours at a time. The regency government of Morowali reported an increase of municipal government revenues from Rp 180 billion in 2018 to Rp 600 billion in 2022, of which 80 percent were attributed to IMIP. On paper, Morowali Regency's GDP per capita became the highest in Indonesia at Rp 927.2 million (~USD 60,000) in 2023; however, activists claimed that around 95 percent of this was remitted to other regions of Indonesia or abroad. Workforce There were around 28,500 employees working at IMIP in August 2018, of which around 3,100 were foreign workers, and an additional 50,000 indirect jobs were estimated to be related to the industrial park. In February 2023, approximately 81,000 people worked there, including around 10,700 foreign workers, mostly from China. Many workers report working 15-hour days for $25/day with no days off. Indonesian workers have also complained about how foreign (i.e. Chinese) workers receive higher pay, and communication problems have caused further frustrations. Workers and advocacy organisations report unsafe working conditions in some smelters at IMIP. Deaths and injuries are frequently reported, and pollution has caused respiratory illness and eye problems. One report found that ten people had died at the smelter between 2020 and 2023. Two workers were killed on the fourth day of a strike. A government official said they had little power to enforce safety regulations in the industry. In 2023, some workers filed a lawsuit against the company over poor working conditions. References Central Sulawesi Industrial parks in Indonesia Nickel mines in Indonesia
Morowali Industrial Park
Chemistry
1,085
63,589,365
https://en.wikipedia.org/wiki/Archaeal%20translation
Archaeal translation is the process by which messenger RNA is translated into proteins in archaea. Not much is known on this subject, but at the protein level it seems to resemble eukaryotic translation. Most of the initiation, elongation, and termination factors in archaea have homologs in eukaryotes. Shine-Dalgarno sequences are found in only a minority of genes in many phyla, with many leaderless mRNAs probably initiated by scanning. The process of ABCE1 ATPase-based recycling is also shared with eukaryotes. Being prokaryotes without a nucleus, archaea perform transcription and translation at the same time, as bacteria do. References Further reading Molecular biology Protein biosynthesis Gene expression
Archaeal translation
Chemistry,Biology
156
50,723,592
https://en.wikipedia.org/wiki/Bootstrap%20percolation
In statistical mechanics, bootstrap percolation is a percolation process in which a random initial configuration of active cells is selected from a lattice or other space, and then cells with few active neighbors are successively removed from the active set until the system stabilizes. The order in which this removal occurs makes no difference to the final stable state. When the threshold of active neighbors needed for an active cell to survive is high enough (depending on the lattice), the only stable states are states with no active cells, or states in which every cluster of active cells is infinitely large. For instance, on the square lattice with the von Neumann neighborhood, there are finite clusters with at least two active neighbors per cluster cell, but when three or four active neighbors are required, any stable cluster must be infinite. With three active neighbors needed to stay active, an infinite cluster must stretch infinitely in three or four of the possible cardinal directions, and any finite holes it contains will necessarily be rectangular. In this case, the critical probability is 1, meaning that when the probability of each cell being active in the initial state is anything less than 1, then almost surely there is no infinite cluster. If the initial state is active everywhere except for an n × n square, within which one cell in each row and column is inactive, then these single-cell voids will merge to form a void that covers the whole square if and only if the inactive cells have the pattern of a separable permutation. In any higher dimension, for any threshold, there is an analogous critical probability below which all cells almost surely become inactive and above which some clusters almost surely survive. Bootstrap percolation can be interpreted as a cellular automaton, resembling Conway's Game of Life, in which live cells die when they have too few live neighbors. However, unlike Conway's Life, cells that have become dead never become alive again. It can also be viewed as an epidemic model in which inactive cells are considered as infected and active cells with too many infected neighbors become infected themselves. The smallest threshold that allows some cells of an initial cluster to survive is called the degeneracy of its adjacency graph, and the remnant of a cluster that survives with threshold k is called the k-core of this graph. One application of bootstrap percolation arises in the study of fault tolerance for distributed computing. If some processors in a large grid of processors fail (become inactive), then it may also be necessary to inactivate other processors with too few active neighbors, in order to preserve the high connectivity of the remaining network. The analysis of bootstrap percolation can be used to determine the failure probability that can be tolerated by the system. References Percolation theory Cellular automata
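A direct simulation makes the culling process concrete. The sketch below assumes the square lattice and von Neumann neighborhood discussed above; the function name and parameters are illustrative choices rather than any standard library API:

```python
import random

def bootstrap_percolation(n, p, threshold=2, seed=0):
    """Bootstrap percolation on an n x n grid: each cell starts active
    with probability p, and every active cell with fewer than `threshold`
    active von Neumann neighbors is repeatedly removed until stable."""
    rng = random.Random(seed)
    active = {(i, j) for i in range(n) for j in range(n) if rng.random() < p}

    def active_neighbors(i, j):
        return sum((i + di, j + dj) in active
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    # The final stable state is independent of removal order, so a
    # simple simultaneous sweep suffices.
    while True:
        doomed = {c for c in active if active_neighbors(*c) < threshold}
        if not doomed:
            return active
        active -= doomed

print(len(bootstrap_percolation(50, 0.6)), "cells survive with threshold 2")
```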
Bootstrap percolation
Physics,Chemistry,Mathematics
561
41,827
https://en.wikipedia.org/wiki/Turnkey
A turnkey, a turnkey project, or a turnkey operation (also spelled turn-key) is a type of project that is constructed so that it can be sold to any buyer as a completed product. This is contrasted with build to order, where the constructor builds an item to the buyer's exact specifications, or when an incomplete product is sold with the assumption that the buyer would complete it. A turnkey contract, as described by Duncan Wallace (1984), is typically a construction contract under which a contractor is employed to plan, design and build a project or an infrastructure and do any other necessary development to make it functional or 'ready to use' at an agreed price and by a fixed date. In turnkey contracts, most of the time the employer provides the primary design. The contractor must follow the primary design provided by the employer. A turnkey computer system is a complete computer including hardware, operating system and application(s) designed and sold to satisfy specific business requirements. Common usage Turnkey refers to something that is ready for immediate use, generally used in the sale or supply of goods or services. The word is a reference to the fact that the customer, upon receiving the product, just needs to turn the ignition key to make it operational, or that the key just needs to be turned over to the customer. Turnkey is commonly used in the construction industry, for instance, in which it refers to the bundling of materials and labour by the home builder or general contractor to complete the home without owner involvement. The word is often used to describe a home built on the developer's land with the developer's financing, ready for the customer to move in. If a contractor builds a "turnkey home", it frames the structure and finishes the interior; everything is completed down to the cabinets and carpet. Turnkey is also commonly used in motorsports to describe a car being sold with a powertrain (engine, transmission, etc.) to contrast with a vehicle sold without one, so that other components may be re-used. Similarly, this term may be used to advertise the sale of an established business, including all the equipment necessary to run it, or by a business-to-business supplier providing complete packages for business start-up. An example would be the creation of a "turnkey hospital", a complete medical facility delivered ready to operate. In manufacturing, the turnkey manufacturing contractor (the business that takes on the turnkey project) normally provides support from the initial design process, through machining and tooling and quality assurance, to production, packaging and delivery. Turnkey manufacturing has advantages in saving production time, providing a single point of contact, cost savings and price certainty, and quality assurance. Specific usage The term turnkey is also often used in the technology industry, most commonly to describe pre-built computer "packages" in which everything needed to perform a certain type of task (e.g. audio editing) is put together by the supplier and sold as a bundle. This often includes a computer with pre-installed software, various types of hardware, and accessories. Such packages are commonly called appliances. A website with a ready-made solution and some configuration options is called a turnkey website. In real estate, turnkey is defined as a home or property that is ready for occupation for its intended purpose, i.e., a home that is fully functional, needs no upgrading or repairs (move-in ready). 
In commercial use, a building set up to do auto repairs would be defined as turnkey if it came fully stocked with all needed machinery and tools for that particular trade. The turnkey process includes all of the steps involved in opening a location, including site selection, negotiations, space planning, construction coordination and complete installation. "Turnkey real estate" also refers to a type of investment. This process includes the purchase, construction or rehab (of an existing site), the leasing out to tenants, and then the sale of the property to a buyer. The buyer is purchasing an investment property which is producing a stream of income. In drilling, the term indicates an arrangement where a contractor must fully complete a well up to some milestone to receive any payment (in exchange for greater compensation upon completion). See also Commercial off-the-shelf Engineering, procurement and construction Turnkey supplier Value-added reseller References Business law Facilities engineering Product management Software features Management cybernetics
Turnkey
Technology,Engineering
894
46,549,650
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20pharmaceutical%20exports
The following is a list of countries by pharmaceutical exports. Global sales from exported drugs and medicines by country totaled US$371.3 billion in 2018. Overall the value of drugs and medicine exports grew by an average 5.80% for all exporting countries since 2014 when drugs and medicines shipments were valued at $344.1 billion. Year over year, there was a 7.9% uptick from 2017 to 2018. Among continents, European countries sold the highest dollar value worth of exported drugs and medicines during 2018 with shipments from Europe totaling $295.8 billion or 79.70% of the global total. In second place were Asian pharmaceutical exporters at 10.70% while 8.10% of worldwide drugs and medicine shipments originated from North America. Smaller percentages came from drugs and medicines suppliers in Latin America (0.7%) excluding Mexico but including the Caribbean, Oceania (0.5%) led by Australia and New Zealand, then Africa (0.2%). The 4-digit Harmonized Tariff System code prefixes for drugs and medicines are: 3003 for medicaments consisting of two or more constituents mixed together (4.3% of global total) 3004 for medicaments consisting of mixed or unmixed products (95.7%) 2021 Below are the 15 countries that exported the highest dollar value worth of drugs and medicines during 2021. Germany: US$64.7 billion (15.2% of total drugs and medicines exports) Switzerland: $50.3 billion (11.8%) Belgium: $33.3 billion (7.8%) United States of America: $30.5 billion (7.1%) France: $27.4 billion (6.4%) Italy: $26.1 billion (6.1%) Ireland: $22.7 billion (5.3%) Netherlands: $19.7 billion (4.6%) United Kingdom: $19.2 billion (4.5%) India: $17.5 billion (4.1%) Denmark: $16 billion (3.8%) Spain: $11.4 billion (2.7%) Canada: $8.6 billion (2%) Slovenia: $8.5 billion (2%) Sweden: $8.1 billion (1.9%) By value, the listed 15 countries shipped 85.3% of all exported drugs and medicine for 2021. Among the above countries, the fastest-growing exporters of drugs and medicines from 2019 to 2020 were: Slovenia (up 42.2%), Ireland (up 28.8%), India (up 13.5%) and Italy (up 11.2%). Those countries that posted the slowest gains year over year were: Switzerland (up 0.7%), Canada (up 1.6%), United Kingdom (up 1.6%), United States (up 1.8%) and Germany (up 6.5%). 2014 Data is for 2014, in billions of United States dollars, as reported by The Observatory of Economic Complexity. Currently the top ten countries are listed, which account for more than 75% of the total market value, estimated to be US$354 billion. Note: The total was calculated excluding the figures for individual member states (for this purpose these include the UK) in order to avoid double-counting. References External links Observatory of Economic Complexity - Countries that export Packaged Medicaments (2014) Pharmaceutical
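The listed dollar values and shares above can be cross-checked with a few lines of Python; this is an illustration of the arithmetic, with the 2021 figures transcribed from the list:

```python
# (exports in US$ billions, stated share of world total in %)
top15 = {
    "Germany": (64.7, 15.2), "Switzerland": (50.3, 11.8), "Belgium": (33.3, 7.8),
    "United States": (30.5, 7.1), "France": (27.4, 6.4), "Italy": (26.1, 6.1),
    "Ireland": (22.7, 5.3), "Netherlands": (19.7, 4.6), "United Kingdom": (19.2, 4.5),
    "India": (17.5, 4.1), "Denmark": (16.0, 3.8), "Spain": (11.4, 2.7),
    "Canada": (8.6, 2.0), "Slovenia": (8.5, 2.0), "Sweden": (8.1, 1.9),
}

# Every value/share pair should imply roughly the same world total.
implied = [value / (share / 100) for value, share in top15.values()]
world_total = sum(implied) / len(implied)          # ~US$426 billion
listed_sum = sum(value for value, _ in top15.values())

print(f"implied world total: ~${world_total:.0f}B")
print(f"top-15 share: {100 * listed_sum / world_total:.1f}%")  # ~85.3%, as stated
```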
List of countries by pharmaceutical exports
Chemistry,Biology
722
1,220,855
https://en.wikipedia.org/wiki/Tank%20%28video%20games%29
A tank or meat shield is a character class commonly seen in co-op video games such as real-time strategy games, role-playing games, fighting games, multiplayer online battle arenas and MUDs. Tank characters deliberately attract enemy attention and attacks (potentially by using game mechanics that force them to be targeted) to act as a decoy for teammates. Since this requires them to endure concentrated enemy attacks, they typically rely on a high health pool or support by friendly healers to survive while sacrificing their own damage output. Since they keep other members of a team alive, tanks often take on an unofficial leadership role: The tank acts as the de facto leader of the group by pulling and holding monsters' attention. It's up to me to set the pace as we clear the dungeon. But more than knowing how much the party can handle at once, I need to know where those monsters need to be positioned, what direction they should face, and what abilities they can use that might threaten the group. I'm also expected to stay on top of all the current meta strategies for beating a dungeon. When shortcuts are found that let players skip monsters, I need to know them. The term was used as early as 1992 on Usenet to describe the warrior class on BatMUD. Overview In most games with tank classes, three factors contribute to a tank's survivability: a large amount of health for absorbing damage damage mitigation, often accomplished through an armor or defense mechanic the ability to avoid attacks altogether Depending on the game, a tank may employ any combination of these: In Final Fantasy XI, two commonly used tanking styles are nicknamed "Blood Tanks" and "Blink Tanks". A blood tank focuses purely on taking hits through higher than usual HP pools or heavy defense ratings. A blink tank focuses on evasiveness to prevent damage from landing in the first place. In Eve Online, a common form of tanking, called speed tanking, entails moving quickly enough to outpace the tracking of an enemy ship's turrets. In Sins of a Solar Empire, the Radiance Battleship has an ability called Animosity, which causes it to take all incoming damage. This can allow the Rapture Battlecruiser to deploy a reactive damage spell. In Ragnarok Online the signature tank belongs to the Swordsman Line in the Crusader, who can use a skill known as "Sacrifice" which places an aura around up to five party members. While this skill is active those who've had the skill cast on them will not take damage and instead the tank will take all of the inflicted damage from all "Sacrificed" party members. See also Healer (gaming), another common archetype focused on restoring the health of one's allies. Spell-caster (gaming), another common archetype focused on dealing damage but relatively weak in all other regards. DPS, another common archetype also focused on dealing damage. Bloodbath of B-R5RB, a famously large multiplayer battle which hinged on Sort Dragon's particularly successful tanking. References Character classes Video game terminology
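The attention-forcing mechanic described above is often implemented as a per-enemy threat (or "aggro") table. The following generic Python sketch, not the implementation of any particular game, shows how a taunt steers the enemy's targeting rule onto the tank:

```python
from dataclasses import dataclass

@dataclass
class Combatant:
    name: str
    threat: float = 0.0

def enemy_target(party):
    # A common aggro rule: attack whoever has generated the most threat.
    # Tanks inflate their threat with taunts and defensive abilities so
    # that damage lands on them instead of the healer or DPS.
    return max(party, key=lambda c: c.threat)

tank, healer, dps = Combatant("tank"), Combatant("healer"), Combatant("dps")
tank.threat += 300    # taunt: a large flat amount of threat
dps.threat += 120     # dealing damage also generates threat
healer.threat += 40   # healing generates a modest amount of threat

print(enemy_target([tank, healer, dps]).name)  # -> tank
```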
Tank (video games)
Technology
641
16,220,305
https://en.wikipedia.org/wiki/PSR%20B1828%E2%88%9211
PSR B1828-11 (also known as PSR B1828-10) is a pulsar approximately 10,000 light-years away in the constellation of Scutum. The star exhibits variations in the timing and shape of its pulses: this was at one stage interpreted as due to a possible planetary system in orbit around the pulsar, though the model required an anomalously large second period derivative of the pulse times. The planetary model was later discarded in favour of precession effects as the planets could not cause the observed shape variations of the pulses. While the generally accepted model is that the pulsar is a neutron star undergoing free precession, a model has been proposed that interprets the pulsar as a quark star undergoing forced precession due to an orbiting "quark planet". The entry for the pulsar on SIMBAD lists this hypothesis as being controversial. References Scutum (constellation)
PSR B1828−11
Astronomy
199
12,676,670
https://en.wikipedia.org/wiki/Etilevodopa
Etilevodopa (developmental code name TV-1203) is a dopaminergic agent which was developed as a treatment for Parkinson's disease. It is the ethyl ester of levodopa. It was never marketed. See also Melevodopa Foslevodopa XP-21279 References Abandoned drugs Antiparkinsonian agents Catecholamines Dopamine agonists Ethyl esters Monoamine precursors Prodrugs Propionate esters
Etilevodopa
Chemistry
106
4,472,066
https://en.wikipedia.org/wiki/Atomic%20formula
In mathematical logic, an atomic formula (also known as an atom or a prime formula) is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives. The precise form of atomic formulas depends on the logic under consideration; for propositional logic, for example, a propositional variable is often more briefly referred to as an "atomic formula", but, more precisely, a propositional variable is not an atomic formula but a formal expression that denotes an atomic formula. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term. In model theory, atomic formulas are merely strings of symbols with a given signature, which may or may not be satisfiable with respect to a given model. Atomic formula in first-order logic The well-formed terms and propositions of ordinary first-order logic have the following syntax: Terms: t ::= c | x | f(t1, …, tn), that is, a term is recursively defined to be a constant c (a named object from the domain of discourse), or a variable x (ranging over the objects in the domain of discourse), or an n-ary function f whose arguments are terms tk. Functions map tuples of objects to objects. Propositions: A, B, … ::= P(t1, …, tn) | A ∧ B | A ∨ B | ∀x. A | ∃x. A, that is, a proposition is recursively defined to be an n-ary predicate P whose arguments are terms tk, or an expression composed of logical connectives (and, or) and quantifiers (for-all, there-exists) used with other propositions. An atomic formula or atom is simply a predicate applied to a tuple of terms; that is, an atomic formula is a formula of the form P(t1, …, tn) for P a predicate, and the tk terms. All other well-formed formulae are obtained by composing atoms with logical connectives and quantifiers. For example, the formula ∀x. P(x) ∧ ∃y. Q(y, f(x)) ∨ ∃z. R(z) contains the atoms P(x), Q(y, f(x)), and R(z). As there are no quantifiers appearing in an atomic formula, all occurrences of variable symbols in an atomic formula are free. See also In model theory, structures assign an interpretation to the atomic formulas. In proof theory, polarity assignment for atomic formulas is an essential component of focusing. Atomic sentence References Further reading Predicate logic Logical expressions
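As an illustration of the grammar above, one plausible encoding of terms and atomic formulas in Python follows; the class and field names are this sketch's own choices, not standard notation.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Const:          # a constant c
    name: str

@dataclass(frozen=True)
class Var:            # a variable x
    name: str

@dataclass(frozen=True)
class Func:           # a function application f(t1, ..., tn)
    name: str
    args: Tuple["Term", ...]

Term = Union[Const, Var, Func]

@dataclass(frozen=True)
class Atom:           # an atomic formula P(t1, ..., tn)
    pred: str
    args: Tuple[Term, ...]

# The atom P(x) from the example formula in the text:
example = Atom("P", (Var("x"),))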
Atomic formula
Mathematics
559
57,257,634
https://en.wikipedia.org/wiki/WireGuard
WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks (VPNs). It aims to be lighter and better performing than IPsec and OpenVPN, two common tunneling protocols. The WireGuard protocol passes traffic over UDP. In March 2020, the Linux version of the software reached a stable production release and was incorporated into the Linux 5.6 kernel, and backported to earlier Linux kernels in some Linux distributions. The Linux kernel components are licensed under the GNU General Public License (GPL) version 2; other implementations are under GPLv2 or other free/open-source licenses. Protocol The WireGuard protocol is a variant of the Noise Protocol Framework IK handshake pattern, as illustrated by the choice of Noise_IKpsk2_25519_ChaChaPoly_BLAKE2s for the value of the Construction string listed on p10 of the Whitepaper. WireGuard uses the following: Curve25519 for key exchange ChaCha20 for symmetric encryption Poly1305 for message authentication codes SipHash24 for hashtable keys BLAKE2s for cryptographic hash function HKDF for key derivation function UDP-based only Base64-encoded private keys, public keys and preshared keys In May 2019, researchers from INRIA published a machine-checked proof of the WireGuard protocol, produced using the CryptoVerif proof assistant. Optional pre-shared symmetric key mode WireGuard supports pre-shared symmetric key mode, which provides an additional layer of symmetric encryption to mitigate future advances in quantum computing. This addresses the risk that traffic may be stored until quantum computers are capable of breaking Curve25519, at which point traffic could be decrypted. Pre-shared keys are "usually troublesome from a key management perspective and might be more likely stolen", but in the shorter term, if the symmetric key is compromised, the Curve25519 keys still provide more than sufficient protection. Networking WireGuard uses only UDP, due to the potential disadvantages of TCP-over-TCP. Tunneling TCP over a TCP-based connection is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance due to the TCP meltdown problem. Its default server port is UDP 51820. WireGuard fully supports IPv6, both inside and outside of the tunnel. It supports only layer 3 for both IPv4 and IPv6 and can encapsulate v4-in-v6 and vice versa. MTU overhead The overhead of WireGuard breaks down as follows: 20-byte IPv4 header or 40-byte IPv6 header 8-byte UDP header 4-byte type 4-byte key index 8-byte nonce N-byte encrypted data 16-byte authentication tag MTU operational considerations Assuming the underlay network transporting the WireGuard packets maintains a 1500-byte MTU, configuring the WireGuard interface to a 1420-byte MTU for all involved peers is ideal for transporting IPv6 + IPv4 traffic. However, when exclusively carrying legacy IPv4 traffic, a higher MTU of 1440 bytes for the WireGuard interface suffices. From an operational perspective and for network configuration uniformity, choosing to configure a 1420 MTU network-wide for the WireGuard interfaces would be advantageous. This approach ensures consistency and facilitates a smoother transition to enabling IPv6 for the WireGuard peers and interfaces in the future.
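The 1420- and 1440-byte figures follow directly from the overhead breakdown above. A minimal worked sketch in Python (the constant and function names are this sketch's own):

# Fixed per-packet overhead of a WireGuard data message (bytes).
UDP_HEADER = 8
WG_OVERHEAD = 4 + 4 + 8 + 16  # type + key index + nonce + auth tag = 32

def tunnel_mtu(underlay_mtu: int, ipv6_underlay: bool) -> int:
    ip_header = 40 if ipv6_underlay else 20
    return underlay_mtu - ip_header - UDP_HEADER - WG_OVERHEAD

print(tunnel_mtu(1500, ipv6_underlay=True))   # 1420, safe whichever IP version the underlay uses
print(tunnel_mtu(1500, ipv6_underlay=False))  # 1440, enough when only IPv4 carries the tunnel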
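Separately, as an illustration of the Base64 key handling described under Protocol above: WireGuard's Curve25519 keys are 32 raw bytes exchanged as Base64 strings. The following Python sketch, assuming the third-party cryptography package is available, produces a keypair in the same Base64 format that the wg genkey and wg pubkey utilities emit; it is an illustrative sketch, not WireGuard's own implementation.

import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Generate a Curve25519 private key and derive its public key.
private_key = X25519PrivateKey.generate()
private_raw = private_key.private_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PrivateFormat.Raw,
    encryption_algorithm=serialization.NoEncryption(),
)
public_raw = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# WireGuard configuration files carry both keys Base64-encoded.
print(base64.b64encode(private_raw).decode())  # analogous to `wg genkey`
print(base64.b64encode(public_raw).decode())   # analogous to `wg pubkey`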
Caveat There may be situations where, for instance, one peer is behind a network with a 1500-byte MTU while a second peer is behind a wireless network, such as an LTE network, where the carrier has opted to use an MTU far lower than 1420 bytes. In such cases, the underlying IP networking stack of the host will fragment the UDP-encapsulated packets and send them through; the packets inside the tunnel, however, will remain consistent and will not be required to fragment, as PMTUD will detect the MTU between the peers (in this example, 1420 bytes) and a fixed packet size will be used between them. Extensibility WireGuard is designed to be extended by third-party programs and scripts. This has been used to augment WireGuard with various features including more user-friendly management interfaces (including easier setting up of keys), logging, dynamic firewall updates, dynamic IP assignment, and LDAP integration. Excluding such complex features from the minimal core codebase improves its stability and security. For ensuring security, WireGuard restricts the options for implementing cryptographic controls, limits the choices for key exchange processes, and maps algorithms to a small subset of modern cryptographic primitives. If a flaw is found in any of the primitives, a new version can be released that resolves the issue. Reception A review by Ars Technica found that WireGuard was easy to set up and use, used strong ciphers, and had a minimal codebase that provided for a small attack surface. WireGuard has received funding from the Open Technology Fund and donations from Jump Trading, Mullvad, Tailscale, Fly.io, and the NLnet Foundation. Oregon senator Ron Wyden has recommended to the National Institute of Standards and Technology (NIST) that they evaluate WireGuard as a replacement for existing technologies. Availability Implementations Implementations of the WireGuard protocol include: Donenfeld's initial implementation, written in C and Go. Cloudflare's BoringTun, a user space implementation written in Rust. Matt Dunwoodie's implementation for OpenBSD, written in C. Ryota Ozaki's wg(4) implementation for NetBSD, written in C. The FreeBSD implementation is written in C and shares most of the data path with the OpenBSD implementation. Native Windows kernel implementation named "wireguard-nt", since August 2021. AVM Fritz!Box modem-routers that support Fritz!OS version 7.39 and later. Permits site-to-site WireGuard connections from version 7.50 onwards. Vector Packet Processing user space implementation written in C. History Early snapshots of the code base exist from 30 June 2016. The logo is inspired by a stone engraving of the mythological Python that Jason Donenfeld saw while visiting a museum in Delphi. On 9 December 2019, David Miller – primary maintainer of the Linux networking stack – accepted the WireGuard patches into the "net-next" maintainer tree, for inclusion in an upcoming kernel. On 28 January 2020, Linus Torvalds merged David Miller's net-next tree, and WireGuard entered the mainline Linux kernel tree. On 20 March 2020, Debian developers enabled the module build options for WireGuard in their kernel config for the Debian 11 version (testing). On 29 March 2020 WireGuard was incorporated into the Linux 5.6 release tree. The Windows version of the software remains in beta. On 30 March 2020, Android developers added native kernel support for WireGuard in their Generic Kernel Image.
On 22 April 2020, NetworkManager developer Beniamino Galvani merged GUI support for WireGuard in GNOME. On 12 May 2020, Matt Dunwoodie proposed patches for native kernel support of WireGuard in OpenBSD. On 22 June 2020, after the work of Matt Dunwoodie and Jason A. Donenfeld, WireGuard support was imported into OpenBSD. On 23 November 2020, Jason A. Donenfeld released an update of the Windows package improving installation, stability, ARM support, and enterprise features. On 29 November 2020, WireGuard support was imported into the FreeBSD 13 kernel. On 19 January 2021, WireGuard support was added for preview in pfSense Community Edition (CE) 2.5.0 development snapshots. In March 2021, kernel-mode WireGuard support was removed from FreeBSD 13.0, still in testing, after an urgent code cleanup in FreeBSD WireGuard could not be completed quickly. FreeBSD-based pfSense Community Edition (CE) 2.5.0 and pfSense Plus 21.02 removed kernel-based WireGuard as well. In May 2021, WireGuard support was re-introduced into pfSense CE and pfSense Plus development snapshots as an experimental package written by a member of the pfSense community, Christian McDonald. The WireGuard package for pfSense incorporates the ongoing kernel-mode WireGuard development work by Jason A. Donenfeld that was originally sponsored by Netgate. In June 2021, the official package repositories for both pfSense CE 2.5.2 and pfSense Plus 21.05 included the WireGuard package. In 2023, WireGuard received over €200,000 in support from Germany's Sovereign Tech Fund. See also Comparison of virtual private network services Secure Shell (SSH), a cryptographic network protocol used to secure services over an unsecured network. Notes References Free security software Linux network-related software Tunneling protocols Virtual private networks
WireGuard
Engineering
1,884
153,831
https://en.wikipedia.org/wiki/Bacteriostatic%20agent
A bacteriostatic agent or bacteriostat, abbreviated Bstatic, is a biological or chemical agent that stops bacteria from reproducing, while not necessarily killing them otherwise. Depending on their application, bacteriostatic antibiotics, disinfectants, antiseptics and preservatives can be distinguished. When bacteriostatic antimicrobials are used, the duration of therapy must be sufficient to allow host defense mechanisms to eradicate the bacteria. Upon removal of the bacteriostat, the bacteria usually start to grow rapidly. This is in contrast to bactericides, which kill bacteria. Bacteriostats are often used in plastics to prevent growth of bacteria on surfaces. Bacteriostats commonly used in laboratory work include sodium azide (which is acutely toxic) and thiomersal. Bacteriostatic antibiotics Bacteriostatic antibiotics limit the growth of bacteria by interfering with bacterial protein production, DNA replication, or other aspects of bacterial cellular metabolism. They must work together with the immune system to remove the microorganisms from the body. However, there is not always a precise distinction between them and bactericidal antibiotics; high concentrations of some bacteriostatic agents are also bactericidal, whereas low concentrations of some bactericidal agents are bacteriostatic. This group includes classes such as the tetracyclines, macrolides, sulfonamides, and lincosamides, as well as chloramphenicol. See also List of antibiotics Oligodynamic effect References Antibiotics
Bacteriostatic agent
Biology
302
1,935,217
https://en.wikipedia.org/wiki/Alef%20%28programming%20language%29
Alef is a discontinued concurrent programming language, designed as part of the Plan 9 operating system by Phil Winterbottom of Bell Labs. It implemented the channel-based concurrency model of Newsqueak in a compiled, C-like language. History Alef appeared in the first and second editions of Plan 9, but was abandoned during development of the third edition. Rob Pike later explained Alef's demise by pointing to its lack of automatic memory management, despite Pike's and other people's urging Winterbottom to add garbage collection to the language; also, in a February 2000 slideshow, Pike noted: "…although Alef was a fruitful language, it proved too difficult to maintain a variant language across multiple architectures, so we took what we learned from it and built the thread library for C." Alef was superseded by two programming environments. The Limbo programming language can be considered a direct successor of Alef and is the most commonly used language in the Inferno operating system. The Alef concurrency model was replicated in the third edition of Plan 9 in the form of the libthread library, which makes some of Alef's functionality available to C programs and allowed existing Alef programs (such as Acme) to be translated. Example This example was taken from the Alef reference manual. The piece illustrates the use of the tuple data type.

(int, byte*, byte)
func()
{
    return (10, "hello", 'c');
}

void
main()
{
    int a;
    byte* str;
    byte c;

    (a, str, c) = func();
}

See also Communicating sequential processes Plan 9 from Bell Labs Go (programming language) References C programming language family Concurrent programming languages Plan 9 from Bell Labs Programming languages created in 1992
Alef (programming language)
Technology
374
3,187,994
https://en.wikipedia.org/wiki/Hardaliye
Hardaliye is a lactic acid fermented beverage produced from grapes, crushed mustard seeds, sour cherry leaves, and benzoic acid. It is an indigenous drink of the Trakya region of Turkey in southeastern Europe. A 2013 study showed that the ingestion of hardaliye had an antioxidant effect in adults. Hardaliye's nutritional value comes from the grapes as well as the fermentation process. Health benefits of hardaliye can be attributed to etheric oils from the mustard seeds. See also Drakshasava Podpiwek Şıra References External links Hardaliye: fermented grape juice as a traditional Turkish beverage Fermented drinks Turkish words and phrases
Hardaliye
Biology
149
36,084,530
https://en.wikipedia.org/wiki/Depth-graded%20multilayer%20coating
A depth-graded multilayer coating is a multi-layer coating optimised for broadband response by varying the thickness of the layers used. A multi-layer coating consisting of alternating layers with different optical properties and the same thickness will tend to have a narrow frequency response, getting narrower as more layers are added; for some applications, such as precise focussing of a monochromatic laser light source, this is exactly what is desired, but it is useless for astronomical optics, where it is often required to detect a whole range of frequencies emitted by some source of interest. The design of such coatings generally starts with an approximate analytical solution and then uses the simplex method of multi-variable optimisation to solve for optimal thicknesses of the layers. Typically the thin layers (to reflect high-energy X-rays) are on the inside, since low-energy X-rays are absorbed more readily. One model used is a power-law distribution of thicknesses, with the thickness of the i-th bilayer given by d_i = a/(b + i)^c for some optimised a, b, c. An optimum multilayer design depends on the graze angle, so ideally a different prescription would be used on each shell of a multi-shell X-ray Wolter mirror; in practice the same prescription is used for about ten shells. Characterising such coatings requires a synchrotron as a variable-wavelength X-ray source. The Danish Space Research Institute in Copenhagen is (in 2012) the world centre of excellence for such coatings, though a good deal of the earlier research and development was done in Russia. References Christensen, F. E.; Craig, W. W.; Windt, D. L.; Jimenez-Garate, M. A.; Hailey, C. J.; Harrison, F. A.; Mao, P. H.; Chakan, J. M.; Ziegler, E.; Honkimaki, V. (2000). "Measured reflectance of graded multilayer mirrors designed for astronomical hard X-ray telescopes". Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 451 (3): 572–581. Thin-film optics
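As a worked illustration of the power-law grading formula, the following Python sketch computes bilayer thicknesses. The parameter values are placeholders chosen only for illustration; real designs optimise a, b and c (for example via the simplex method) for the target energy band and graze angle.

def bilayer_thicknesses(a, b, c, n):
    """Thickness of the i-th bilayer under the power-law model d_i = a/(b + i)**c."""
    return [a / (b + i) ** c for i in range(1, n + 1)]

# Illustrative (not physically optimised) parameters; thicknesses decrease
# with depth, putting the thin high-energy-reflecting layers on the inside.
print(bilayer_thicknesses(a=60.0, b=0.9, c=0.25, n=5))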
Depth-graded multilayer coating
Materials_science,Mathematics
443
18,348,983
https://en.wikipedia.org/wiki/World%20Mill
The World Mill (also "heavenly mill", "cosmic mill" and variants) is a mytheme suggested as recurring in Indo-European and other mythologies. It involves the analogy of the cosmos or firmament and a rotating millstone. The mytheme was extensively explored by Viktor Rydberg in his 1886 Investigations into Germanic Mythology, in which he provides both ancient Scandinavian and Indian examples. Donald Mackenzie described the World Mill's relationship to the sacred spiral and the revolution of the starry heavens, providing analogs in Chinese, Egyptian, Babylonian, and AmerInd folklore, before concluding "that the idea of the World Mill originated as a result of the observation of the seasonal revolutions of the constellation of the 'Great Bear'." Clive Tolley (1995) examined the significance of the mytheme in Indo-European and Finnish mythology. Tolley found that "the image of a cosmic mill, ambivalently churning out well-being or disaster, may be recognized in certain fragmentary myths", adding additional Indo-European and Finnish analogs of the mill to the material previously considered by Rydberg and others. Richard M. Dorson surveyed the views of 19th-century writers on the World Mill in his 1968 historical review, Peasant Customs and Savage Myths: Selections from the British Folklorists, and the mytheme is discussed in the Kommentar zu den Liedern der Edda, in regard to the Eddic poem, Grottasöngr. See also Hamlet's Mill Rota Fortunae Axis Mundi Wyrd Sampo Dark Satanic Mills Mills of God Grótti Notes References Dorson, Richard M., ed. (1968). Peasant Customs and Savage Myths: Selections from the British Folklorists, Vol. I. University of Chicago Press. Tolley, Clive (1995). The Mill in Norse and Finnish Mythology. Saga-Book 24: 63–82. Astronomical myths Comparative mythology Mythological objects
World Mill
Astronomy
401
3,148,619
https://en.wikipedia.org/wiki/Fort%20Vaux
Fort Vaux (), in Vaux-Devant-Damloup, Meuse, France, was a polygonal fort forming part of the ring of 19 large defensive works intended to protect the city of Verdun. Built from 1881 to 1884 for 1,500,000 francs, it housed a garrison of 150 men. Vaux was the second fort to fall in the Battle of Verdun after Fort Douaumont, which was captured by a small German raiding party in February 1916 in the confusion of the French retreat from the Woëvre plain. Vaux had been modernised before 1914 with reinforced concrete top protection like Fort Douaumont and was not destroyed by German heavy artillery fire, which had included shelling by howitzers. The superstructure of the fort was badly damaged but the garrison, the deep interior corridors and stations were intact when the fort was attacked on 2 June by German Stormtroops. The defence of Fort Vaux was marked by the heroism and endurance of the garrison, including Major Sylvain-Eugène Raynal. Under his command, the French garrison repulsed German assaults, including fighting underground from barricades inside the corridors, during the first big engagement inside a fort during the First World War. The last men of the French garrison gave up after running out of water (some of which was poisoned), ammunition, medical supplies and food. Raynal sent several messages by homing pigeon (including Le Vaillant), requesting relief for his soldiers. In his last message, Raynal wrote "This is my last pigeon". After the surrender of the garrison on 7 June, Crown Prince Wilhelm, the commander of the 5th Army, presented Major Raynal with a French officer's sword as a sign of respect. Raynal and his soldiers remained in captivity in Germany until the Armistice of 11 November 1918. The fort was recaptured by French infantry on 2 November 1916 after an artillery bombardment involving two long-range railway guns. After its recapture, Fort Vaux was repaired and garrisoned. Several underground galleries were dug to reach far outside the fort, one of them being long, the water reserve was quadrupled and light was provided by two electric generators. Some damage from the fighting on 2 June can still be seen. The underground installations of the fort are well preserved and are open to the public for guided visits. Battle of Verdun 11 September 1914, the 75 mm turret fires 22 rounds at a German detachment. 18 February 1915, the fort is bombarded for the first time by twelve 420 mm rounds, which cause little damage. End of 1915, disarmament of the fort is carried out to send the guns and ammunition to the front line. The four 75 mm guns are removed from the casemates, leaving only the two in the turret. In January 1916, enough gunpowder is stored for the possible destruction of the fort in case of an enemy approach. From 21 to 26 February 1916, the fort is bombarded with shells of all sizes, including 129 heavy shells. Pillboxes and armoured observatories are damaged and the gallery leading to the 75 mm turret is cut. Late February – early March 1916, the fort is frequently bombarded and the 75 mm turret is destroyed accidentally by heavy shells that cause the demolition explosives within to detonate. 14 May 1916, Commandant Raynal takes command of the fort, which has no artillery. 1 June 1916, the Germans begin preparations to enter the fort. They cannot be stopped due to the fort having no artillery. 2 and 3 June 1916, German troops led by Kurt Rackow attack the fort with flame throwers and force French troops outside to retreat into the fort.
The Germans penetrate the fort through the coffers of the counterscarp. 5 June 1916, Commandant Raynal requests the French army to bomb the fort, the top of which is occupied by the Germans, to allow part of the garrison to evacuate the fort. 7 June 1916, water supplies have been exhausted for three days and the fighting takes place inside the galleries with grenades, guns and bayonets. Commandant Raynal is captured by the Germans and accorded military honours for having fought bravely in extreme conditions with a thirsty garrison. From 8 June to 1 November 1916, the fort is used by the Germans as a shelter and command post for the area. The French attempt to retake the fort several times with enormous loss of life. They bombard the fort to destroy it with heavy shells, including super-heavy 400 mm rounds, but the concrete walls resist. Life inside the structure becomes impossible and the Germans eventually abandon the fort at the end of October. 2 November 1916, the fort is recaptured without resistance by a French patrol which finds it empty. By the end of the battle, in December 1916, the fort is almost in the same condition as it was in June, except for some damage caused by French artillery. 1916–1918, the fort is rehabilitated before being rearmed; an observatory and an armoured command bunker are equipped with machine-guns. Further defences including machine-guns are fitted in place of the 75 mm turret, to defend the area between the ravine and the village of Dieppe-sous-Douaumont. Exits and entrances of the fort are equipped with masonry baffles, machine-guns and grenade launcher chutes. A network of tunnels is dug beneath the fort and generators are used for lighting and ventilation. Footnotes References Further reading External links Les forts Séré de rivières Le fort de Vaux Memoirs & Diaries: Account of the assaults upon Fort Vaux, Verdun, June 1916 1884 establishments in France Military installations established in the 1880s 19th-century fortifications Battle of Verdun Buildings and structures in Meuse (department) Museums in Meuse (department) Séré de Rivières system World War I museums in France Wilhelm, German Crown Prince
Fort Vaux
Engineering
1,190
59,079,798
https://en.wikipedia.org/wiki/NGC%20668
NGC 668 is a spiral galaxy located 200 million light-years away in the constellation Andromeda. It was discovered by astronomer Édouard Stephan on December 4, 1880 and is a member of Abell 262. See also List of NGC objects (1–1000) References External links 0668 01238 006502 Andromeda (constellation) Astronomical objects discovered in 1880 Spiral galaxies Abell 262 Discoveries by Édouard Stephan
NGC 668
Astronomy
88
47,804,355
https://en.wikipedia.org/wiki/Metizolam
Metizolam (also known as desmethyletizolam) is a thienotriazolodiazepine that is the demethylated analogue of the closely related etizolam. Legal status Following its sale as a designer drug, metizolam was classified as a controlled substance in Sweden on 26 January 2016. See also List of benzodiazepine designer drugs References 2-Chlorophenyl compounds Designer drugs GABAA receptor positive allosteric modulators Hypnotics Thienotriazolodiazepines
Metizolam
Biology
117
62,879,352
https://en.wikipedia.org/wiki/IRAS%2013224-3809
IRAS 13224-3809 is a highly active and fluctuating Seyfert 1 galaxy in the constellation Centaurus about 1 billion light-years from Earth. The galaxy is notable due to its centrally-located supermassive black hole that is closely studied by astronomers using x-ray astronomy, particularly X-ray reverberation echo mapping techniques, in an effort to better understand the inner workings, including mass and spin, of black holes. References External links SIMBAD SIMBAD/ascii Centaurus Seyfert galaxies TIC objects 088835 2MASS objects
IRAS 13224-3809
Astronomy
124
45,089,531
https://en.wikipedia.org/wiki/Congressional%20App%20Challenge
The Congressional Science Technology, Engineering and Math (STEM) Academic Competition, also known as the House App Contest or Congressional App Challenge (CAC), allows middle and high school students in participating congressional districts to compete in an annual application software ("app") development contest. Students are encouraged to design an app using any programming language on any platform, with no limits on topic or function. Winners from congressional districts have their apps featured online and in the United States Capitol Building and are invited to attend the annual #HouseofCode event. History The challenge was established by the United States House of Representatives in 2013 under the "Academic Competition Resolution of 2013" as a bipartisan effort to engage student creativity and participation in STEM education fields in a similar fashion as the Congressional Art Competition. The resolution passed with 99% support – a vote of 411 to 3 – and outlined how and at what interval the competition would be hosted. The Congressional Internet Caucus Advisory Committee introduced the concept for the Congressional App Challenge in 2013 and the challenge was co-chaired by Congressional Internet Caucus co-chairs Rep. Bob Goodlatte and Rep. Anna Eshoo. Today, the Congressional App Challenge is managed by the non-profit organization, the Internet Education Foundation, in partnership with the House of Representatives. In its inaugural year, 84 congressional districts in 31 states and DC recognized 212 students for creating 109 apps. The 2023 Challenge had 11,334 students submit 3,645 apps in 374 congressional districts. Demographics With its focus on increasing diversity and inclusion within the computer science field, the Congressional App Challenge enrolls a large number of underrepresented minority, female, and rural students with varying experience in coding. In 2018, 8% of participants were Black, 15% Hispanic, and 3% American Indian, while 36% of participants were female. Almost half of students indicated that they had no experience or were beginners in coding before participating in the Congressional App Challenge. Geographically, participants come from all around the United States and its territories, including Alaska, California, New York, Missouri, Puerto Rico, and the Northern Mariana Islands. Co-Chairs The first official Congressional App Challenge launched under the leadership of the inaugural CAC Co-Chairs Reps. Mimi Walters (R-CA) and Hakeem Jeffries (D-NY). Reps. Ed Royce (R-CA) and Seth Moulton (D-MA) took over as co-chairs in 2016, and in the first two years of the Challenge, the CAC reached nearly 4,000 students across 33 states. Reps. Tim Ryan (D-OH) and Ileana Ros-Lehtinen (R-FL) took over as co-chairs in 2017. In 2018, Representatives Suzan DelBene (D-WA) and French Hill (R-AR) became the co-chairs for a two-year term. Reps. Jennifer Wexton (D-VA) and Young Kim (R-CA) chair the competition for the 117th Congress. Reps. Ted Lieu (D-CA) and Zach Nunn (R-IA) chair the competition for the 118th Congress. In the first eight years of the Challenge, the CAC has inspired over 50,000 students across all 50 states to code for Congress. #HouseofCode Every Spring, the Congressional App Challenge hosts #HouseofCode in Washington D.C. The event is considered the national computer science fair, and winners from all participating congressional districts are invited to attend.
During #HouseofCode, students demonstrate their apps to members from the House of Representatives, hear from speakers, and meet professionals from the Washington D.C. technology community. Attendees are encouraged to meet with their peers and connect with sponsors, partners, and other community organizations to discuss pathways in STEM. References External links Official congressional website Official competition website Science competitions Mathematics competitions
Congressional App Challenge
Technology
785
28,057,466
https://en.wikipedia.org/wiki/Mediterranean%20tropical-like%20cyclone
Mediterranean tropical-like cyclones, often referred to as Mediterranean cyclones or Mediterranean hurricanes, and shortened as medicanes, are meteorological phenomena occasionally observed over the Mediterranean Sea. On a few rare occasions, some storms have been observed reaching the strength of a Category 1 hurricane on the Saffir–Simpson scale, and Medicane Ianos in 2020 was recorded reaching Category 2 intensity. The main societal hazard posed by medicanes is not usually from destructive winds, but through life-threatening torrential rains and flash floods. The occurrence of medicanes has been described as not particularly rare. Tropical-like systems were first identified in the Mediterranean basin in the 1980s, when widespread satellite coverage made it possible to identify tropical-looking low-pressure systems that formed a cyclonic eye in the center. Due to the dry nature of the Mediterranean region, the formation of tropical, subtropical and tropical-like cyclones is infrequent, and such systems are also hard to detect, in particular in the reanalysis of past data. Depending on the search algorithms used, different long-term surveys of satellite-era and pre-satellite-era data came up with 67 tropical-like cyclones of tropical storm intensity or higher between 1947 and 2014, and around 100 recorded tropical-like storms between 1947 and 2011. More consensus exists about the long-term temporal and spatial distribution of tropical-like cyclones: they form predominantly over the western and central Mediterranean Sea while the area east of Crete is almost devoid of tropical-like cyclones. The development of tropical-like cyclones can occur year-round, with activity historically peaking between the months of September and January, while the counts for the summer months of June and July are the lowest, being within the peak dry season of the Mediterranean with its stable air. Meteorological classification and history Historically, the term tropical-like cyclone was coined in the 1980s to unofficially distinguish tropical cyclones developing outside the tropics (like in the Mediterranean Basin) from those developing inside the tropics. The term tropical-like was in no way meant to indicate a hybrid cyclone exhibiting characteristics not usually seen in "true" tropical cyclones. In their matured stages, Mediterranean tropical cyclones show no difference from other tropical storms. Mediterranean hurricanes or medicanes are therefore not different from hurricanes elsewhere. Mediterranean tropical-like cyclones are not considered to be formally classified tropical cyclones and their region of formation is not officially monitored by any agency with meteorological tasks. However, the NOAA subsidiary Satellite Analysis Branch released information related to a medicane in November 2011 while it was active, which they dubbed as "Tropical Storm 01M", though they ceased services in the Mediterranean on 16 December 2011 for undisclosed reasons. However, in 2015, the NOAA resumed services in the Mediterranean region; by 2016, the NOAA was issuing advisories on a new tropical system, Tropical Storm 90M. Since 2005, ESTOFEX has been issuing bulletins that can include tropical-like cyclones, among others. No agency with meteorological tasks, however, is officially responsible for monitoring the formation and development of medicanes, as well as for their naming.
Despite all this, the whole Mediterranean Sea lies within the Greek area of responsibility with the Hellenic National Meteorological Service (HNMS) as the governing agency, while France's Météo-France serves as a "preparation service" for the western part of the Mediterranean as well. As the only official agency covering the whole Mediterranean Sea, HNMS publications are of particular interest for the classification of medicanes. HNMS calls the meteorological phenomenon Mediterranean tropical-like Hurricane in its annual bulletin and – by also using the respective portmanteau word medicane – makes the term medicane quasi-official. In a joint article with the Laboratory of Climatology and Atmospheric Environment of the University of Athens, the Hellenic National Meteorological Service outlines conditions for considering a cyclone over the Mediterranean Sea a medicane. In the same article, a survey of 37 medicanes revealed that medicanes could have a well-defined cyclone eye at estimated maximum sustained winds between , with the lower end being exceptionally low for warm core cyclones. Medicanes can indeed develop well-defined eyes at such low maximum sustained winds of around as could be seen for a 22 October 2015 medicane near the Albanian coast. This is much lower than the lower threshold for eye development in tropical systems in the Atlantic Ocean, which seems to be close to , well below hurricane-force winds. Several notable and damaging medicanes are known to have occurred. In September 1969, a North African Mediterranean tropical cyclone produced flooding that killed nearly 600 individuals, left 250,000 homeless, and crippled local economies. A medicane in September 1996 that developed in the Balearic Islands region spawned six tornadoes, and inundated parts of the islands. Several medicanes have also been subject to extensive study, such as those of January 1982, January 1995, September 2006, November 2011, and November 2014. The January 1995 storm is one of the best-studied Mediterranean tropical cyclones, with its close resemblance to tropical cyclones elsewhere and availability of observations. The medicane of September 2006, meanwhile, is well-studied, due to the availability of existing observations and data. Given the low profile of HNMS in forecasting and classifying tropical-like systems in the Mediterranean, a proper classification system for Mediterranean tropical-like cyclones does not exist. The HNMS criterion of a cyclonic eye for considering a system a medicane is usually valid for a system at peak strength, often only hours before landfall, which is not suitable at least for forecasts and warnings. Unofficially, Deutscher Wetterdienst (DWD, the German meteorological service) proposed a system to forecast and classify tropical-like cyclones based on the NHC classification for the northern Atlantic Ocean. To account for the broader wind field and the larger radius of maximum winds of tropical-like systems in the Mediterranean (see the section Development and characteristics below), DWD is suggesting a lower threshold of for the use of the term medicane in the Mediterranean instead of as suggested by the Saffir–Simpson scale for Atlantic hurricanes. The DWD proposal and also US-based forecasts (NHC, NOAA, NRL etc.) use one-minute sustained winds while European-based forecasts use ten-minute sustained winds, which makes a difference of roughly 14% in measurements.
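As an illustration of the one-minute versus ten-minute averaging difference just mentioned, the following Python sketch applies the commonly used 0.88 conversion factor; the factor itself is an assumption of this sketch, since conversion practice varies by agency.

# Convert between one-minute and ten-minute sustained winds using the
# commonly assumed factor of 0.88 (ten-minute = 0.88 * one-minute).
def ten_min_from_one_min(v_one_min: float) -> float:
    return 0.88 * v_one_min

def one_min_from_ten_min(v_ten_min: float) -> float:
    return v_ten_min / 0.88

# One-minute readings come out about 14% higher than ten-minute ones:
print(round(one_min_from_ten_min(100.0), 1))  # about 113.6, i.e. roughly +14%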
The distinction is also of direct practical use (for example for a comparison of NOAA bulletins with EUMETSAT, ESTOFEX and HNMS bulletins). To account for the difference, the DWD proposal gives thresholds for both one-minute and deduced ten-minute sustained winds (see tropical cyclone scales for conversions). Another proposal uses roughly the same scale but suggests to use the term medicane for tropical storm force cyclones and major medicane for hurricane force cyclones. Both proposals would fit the observation that half of the 37 cyclones surveyed by HNMS with a clearly observable hurricane-like eye, as the major criterion for assigning the medicane status, showed maximum sustained winds between , while another quarter of the medicanes peaked at lower wind speeds. Climatology A majority of Mediterranean tropical cyclones (tropical cyclogenesis) form over two separate regions. The first, more conducive for development than the other, encompasses an area of the western Mediterranean bordered by the Balearic Islands, southern France, and the shorelines of the islands of Corsica and Sardinia. The second identified region of development, in the Ionian Sea between Sicily and Greece and stretching south to Libya, is less favorable for tropical cyclogenesis. An additional two regions, in the Aegean and Adriatic seas, produce fewer medicanes, while activity is minimal in the Levantine region. The geographical distribution of Mediterranean tropical-like cyclones is markedly different from that of other cyclones, with the formation of regular cyclones centering on the Pyrenees and Atlas mountain ranges, the Gulf of Genoa, and in the Ionian Sea. Although meteorological factors are most advantageous in the Adriatic and Aegean seas, the closed nature of the region's geography, bordered by land, allows little time for further evolution. The geography of mountain ranges bordering the Mediterranean is conducive for severe weather and thunderstorms, with the sloped nature of mountainous regions permitting the development of convective activity. Although the geography of the Mediterranean region, as well as its dry air, typically prevent the formation of tropical cyclones, when certain meteorological circumstances arise, difficulties influenced by the region's geography are overcome. The occurrence of tropical cyclones in the Mediterranean Sea is generally extremely rare, with an average of 1.57 forming annually and merely 99 recorded occurrences of tropical-like storms discovered between 1948 and 2011 in a modern study, with no definitive trend in activity in that period. Few medicanes form during the summer season, though activity typically rises in autumn, peaks in January, and gradually decreases from February to May. In the western Mediterranean region of development, approximately 0.75 such systems form each year, compared with 0.32 in the Ionian Sea region. However, on very rare occasions, similar tropical-like storms may also develop in the Black Sea. Studies have evaluated that global warming can result in higher observed intensities of tropical cyclones as a result of deviations in the surface energy flux and atmospheric composition, which both heavily influence the development of medicanes as well.
In tropical and subtropical areas, sea surface temperatures (SSTs) rose within a 50-year period, and in the North Atlantic and Northwestern Pacific tropical cyclone basins, the potential destructiveness and energy of storms nearly doubled within the same duration, evidencing a clear correlation between global warming and tropical cyclone intensities. Within a similarly recent 20-year period, SSTs in the Mediterranean Sea increased by , though no observable increase in medicane activity has been noted. In 2006, a computer-driven atmospheric model evaluated the future frequency of Mediterranean cyclones between 2071 and 2100, projecting a decrease in autumn, winter, and spring cyclonic activity coinciding with a dramatic increase in formation near Cyprus, with both scenarios attributed to elevated temperatures as a result of global warming. In another study, researchers found that more tropical-like storms in the Mediterranean could reach Category 1 strength by the end of the 21st century, with most of the stronger storms appearing in the autumn, though the models indicated that some storms could potentially reach Category 2 intensity. Other studies, however, have been inconclusive, forecasting both increases and decreases in duration, number, and intensity. Three independent studies, using different methodologies and data, evaluated that while medicane activity would likely decline with a rate depending on the climate scenario considered, a higher percentage of those that formed would be of greater strength. Development and characteristics The development of tropical or subtropical cyclones in the Mediterranean Sea can usually only occur under somewhat unusual circumstances. Low wind shear and atmospheric instability induced by incursions of cold air are often required. A majority of medicanes are also accompanied by upper-level troughs, providing energy required for intensifying atmospheric convection—thunderstorms—and heavy precipitation. The baroclinic properties of the Mediterranean region, with high temperature gradients, also provide necessary instability for the formation of tropical cyclones. Another factor, rising cool air, provides necessary moisture as well. Warm sea surface temperatures (SSTs) are mostly unnecessary, however, as most medicanes' energy is derived from warmer air temperatures. When these favorable circumstances coincide, the genesis of warm-core Mediterranean tropical cyclones, often from within existing cut-off cold-core lows, is possible in a conducive environment for formation. Factors required for the formation of medicanes are somewhat different from those normally expected of tropical cyclones; known to emerge over regions with sea surface temperatures (SSTs) below , Mediterranean tropical cyclones often require incursions of colder air to induce atmospheric instability. A majority of medicanes develop above regions of the Mediterranean with SSTs of , with the upper bound only found in the southernmost reaches of the sea. Despite the low sea surface temperatures, the instability incited by cold atmospheric air within a baroclinic zone—regions with high differences in temperature and pressure—permits the formation of medicanes, in contrast with tropical areas lacking high baroclinity, where raised SSTs are needed.
While significant deviations in air temperature have been noted around the time of Mediterranean tropical cyclones' formation, few anomalies in sea surface temperature coincide with their development, indicating that the formation of medicanes is primarily controlled by higher air temperatures, not by anomalous SSTs. Similar to tropical cyclones, minimal wind shear—difference in wind speed and direction over a region—as well as abundant moisture and vorticity encourages the genesis of tropical cyclone-like systems in the Mediterranean Sea. Due to the confined character of the Mediterranean and the limited capability of heat fluxes — in the case of medicanes, air-sea heat transfer — tropical cyclones with a diameter larger than cannot exist within the Mediterranean. Despite being a relatively baroclinic area with high temperature gradients, the primary energy source utilized by Mediterranean tropical cyclones is derived from underlying heat sources generated by the presence of convection—thunderstorm activity—in a humid environment, similar to tropical cyclones elsewhere outside the Mediterranean Sea. In comparison with other tropical cyclone basins, the Mediterranean Sea generally presents a difficult environment for development; although the potential energy necessary for development is not abnormally large, its atmosphere is characterized by its lack of moisture, impeding potential formation. The full development of a medicane often necessitates the formation of a large-scale baroclinic disturbance, transitioning late in its life cycle into a tropical cyclone-like system, nearly always under the influence of a deep, cut-off, cold-core low within the middle-to-upper troposphere, frequently resulting from abnormalities in a wide-spreading Rossby wave—massive meanders of upper-atmospheric winds. The development of medicanes often results from the vertical shift of air in the troposphere as well, resulting in a decrease in its temperature coinciding with an increase in relative humidity, creating an environment more conducive for tropical cyclone formation. This, in turn, leads to an increase in potential energy, producing heat-induced air-sea instability. Moist air prevents the occurrence of convective downdrafts—the vertically downward movement of air—which often hinder the inception of tropical cyclones, and in such a scenario, wind shear remains minimal; overall, cold-core cut-off lows serve well for the later formation of compact surface flux-influenced warm-core lows such as medicanes. The regular genesis of cold-core upper-level lows and the infrequency of Mediterranean tropical cyclones, however, indicate that additional unusual circumstances are involved in the emergence of the latter. Elevated sea surface temperatures, contrasting with cold atmospheric air, encourage atmospheric instability, especially within the troposphere. In general, most medicanes maintain a radius of , last between 12 hours and 5 days, travel between , develop an eye for less than 72 hours, and feature wind speeds of up to ; in addition, a majority are characterized on satellite imagery as asymmetric systems with a distinct round eye encircled by atmospheric convection. Weak rotation, similar to that in most tropical cyclones, is usually noted in a medicane's early stages, increasing with intensity; medicanes, however, often have less time to intensify, remaining weaker than most North Atlantic hurricanes and only persisting for the duration of a few days.
While the entire lifetime of a cyclone may encompass several days, most will only retain tropical characteristics for less than 24 hours. Circumstances sometimes permit the formation of smaller-scale medicanes, although the required conditions differ even from those needed by other medicanes. The development of abnormally small tropical cyclones in the Mediterranean usually requires upper-level atmospheric cyclones inducing cyclogenesis in the lower atmosphere, leading to the formation of warm-core lows, encouraged by favorable moisture, heat, and other environmental circumstances. Mediterranean cyclones have been compared with polar lows—cyclonic storms which typically develop in the far regions of the Northern and Southern Hemispheres—for their similarly small size and heat-related instability; however, while medicanes nearly always feature warm-core lows, polar lows are primarily cold-core. The prolonged life of medicanes and similarity to polar lows is caused primarily by origins as synoptic-scale surface lows and heat-related instability. Heavy precipitation and convection within a developing Mediterranean tropical cyclone are usually incited by the approach of an upper-level trough—an elongated area of low air pressures—bringing downstream cold air, encircling an existing low-pressure system. After this occurs, however, a considerable reduction in rainfall rates occurs despite further organization, coinciding with a decrease in previously high lightning activity as well. Although troughs will often accompany medicanes along their track, separation eventually occurs, usually in the later part of a Mediterranean tropical cyclone's life cycle. At the same time, moist air, saturated and cooled while rising into the atmosphere, then encounters the medicane, permitting further development and evolution into a tropical cyclone. Many of these characteristics are also evident in polar lows, except for the warm core characteristic. Notable medicanes and impacts 22–27 Sep 1969 An unusually severe Mediterranean tropical cyclone developed on 23 September 1969 southeast of Malta, producing severe flooding. Steep pressure and temperature gradients above the Atlas mountain range were evident on 19 September, a result of cool sea air attempting to penetrate inland; south of the mountains, a lee depression—a low-pressure area in a mountainous region—developed. Under the influence of mountainous terrain, the low initially meandered northeastward. Following the entry of cool sea air, however, it recurved to the southeast before transitioning into a Saharan depression associated with a distinct cold front by 22 September. Along the front's path, desert air moved northward while cold air drifted in the opposite direction, and in northern Libya, warm arid air clashed with the cooler levant of the Mediterranean. The organization of the disturbance improved slightly further before emerging into the Mediterranean Sea on 23 September, upon which the system experienced immediate cyclogenesis, rapidly intensifying while southeast of Malta as a cold-core cut-off low, and acquiring tropical characteristics. In western Africa, meanwhile, several disturbances converged toward Mauritania and Algeria, while the medicane recurved southwestward back toward the coast, losing its closed circulation and later dissipating. The cyclone produced severe flooding throughout regions of northern Africa. 
Heavy rainfall was recorded across the region: at Malta on 23 September, Sfax on 24 September, Tizi Ouzou on 25 September, Gafsa and Constantine on 26 September, Cap Bengut on 27 September, and Biskra on 28 September. In Malta, a 20,000-ton tanker struck a reef and split in two, while in Gafsa, Tunisia, the cyclone flooded phosphate mines, leaving over 25,000 miners unemployed and costing the government over £2 million per week. Thousands of camels and snakes, drowned by flood waters, were swept out to sea, and massive Roman bridges, which withstood all floods since the fall of the Roman Empire, collapsed. In all, the floods in Tunisia and Algeria killed almost 600 individuals, left 250,000 homeless, and severely damaged regional economies. Due to communication problems, however, flood relief funds and television appeals were not set up until nearly a month later. Leucosia (24–27 Jan 1982) The unusual Mediterranean tropical storm of January 1982, dubbed Leucosia, was first detected in waters north of Libya. The storm likely reached the Atlas mountain range as a low-pressure area by 23 January 1982, reinforced by an elongated, slowly-drifting trough above the Iberian Peninsula. Eventually, a closed circulation center developed by 1310 UTC, over parts of the Mediterranean with sea surface temperatures (SSTs) of approximately and air temperature of . A hook-shaped cloud developed within the system shortly thereafter, rotating as it elongated into a -long comma-shaped apparatus. After looping around Sicily, it drifted eastward between the island and Peloponnese, recurving on its track again, exhibiting clearly curved spiral banding before shrinking slightly. The cyclone reached its peak intensity at 1800 UTC on the following day, maintaining an atmospheric pressure of , and was succeeded by a period of gradual weakening, with the system's pressure eventually rising to . The system slightly reintensified, however, for a six-hour period on 26 January. Ship reports indicated winds of were present in the cyclone at the time, tropical storm-force winds on the Saffir–Simpson hurricane wind scale, likely near the eyewall of the cyclone, which features the highest winds in a tropical cyclone. The Global Weather Center's Cyclone Weather Center of the United States Air Force (USAF) initiated "Mediterranean Cyclone Advisories" on the cyclone at six-hour intervals starting at 1800 UTC on 27 January, until 0600 UTC on the following day. Convection was most intense in the eastern sector of the cyclone as it drifted east-northeastward. On infrared satellite imagery, the eye itself was in diameter, contracting to just one day prior to making landfall. The cyclone passed by Malta, Italy, and Greece before dissipating several days later, in the extreme eastern Mediterranean. Observations related to the cyclone, however, were inadequate, and although the system maintained numerous tropical characteristics, it is possible it was merely a compact but powerful extratropical cyclone exhibiting a clear eye, spiral banding, towering cumulonimbi, and high surface winds along the eyewall. 27 Sep – 2 Oct 1983 On 27 September 1983, a medicane was observed at sea between Tunisia and Sicily, looping around Sardinia and Corsica, coming ashore twice on the islands, before making landfall at Tunis early on 2 October and dissipating.
The development of the system was not encouraged by baroclinic instability; rather, convection was incited by abnormally high sea surface temperatures (SSTs) at the time of its formation. It also featured a definitive eye, tall cumulonimbus clouds, intense sustained winds, and a warm core. For most of its duration, it maintained a diameter of , though it shrank just before landfall on Ajaccio to a diameter of . Celeno (14–17 Jan 1995) Among numerous documented medicanes, the cyclone of January 1995, which was dubbed Celeno, is generally considered to be the best-documented instance in the 20th century. The storm emerged from the Libyan coast and moved toward the Ionian shoreline of Greece on 13 January as a compact low-pressure area. The medicane maintained winds reaching up to as it traversed the Ionian Sea, while the German research ship Meteor recorded winds of . Upon the low's approach near Greece, it began to envelop an area of atmospheric convection; meanwhile, in the middle troposphere, a trough extended from Russia to the Mediterranean, bringing with it extremely cold temperatures. Two low-pressure areas were present along the path of the trough, with one situated above Ukraine and the other above the central Mediterranean, likely associated with a low-level cyclone over western Greece. Upon weakening and dissipation on 14 January, a second low, the system which would evolve into the Mediterranean tropical cyclone, developed in its place on 15 January. At the time of formation, high clouds indicated the presence of intense convection, and the cyclone featured an axisymmetric cloud structure, with a distinct, cloud-free eye and rainbands spiraling around the disturbance as a whole. Soon thereafter, the parent low separated from the medicane entirely and continued eastward, meandering toward the Aegean Sea and Turkey. Initially remaining stationary between Greece and Sicily with a minimum atmospheric pressure of , the newly formed system began to drift southwest-to-south in the following days, influenced by northeasterly flow incited by the initial low, now far to the east, and a high-pressure area above central and eastern Europe. The system's atmospheric pressure increased throughout 15 January due to the fact it was embedded within a large-scale environment, with its rising pressure due to the general prevalence of higher air pressures throughout the region, and was not a sign of weakening. Initial wind speeds within the young medicane were generally low, with sustained winds of merely , with the highest recorded value associated with the disturbance being at 0000 UTC on 16 January, slightly below the threshold for tropical storm on the Saffir–Simpson hurricane wind scale. Its structure now consisted of a distinct eye encircled by counterclockwise-rotating cumulonimbi with cloud top temperatures colder than , evidencing deep convection and a regular feature observed in most tropical cyclones. At 1200 UTC on 16 January, a ship recorded winds blowing east-southeast of about south-southwest about north-northeast of the cyclone's center. Intense convection continued to follow the entire path of the system as it traversed the Mediterranean, and the cyclone made landfall in northern Libya at approximately 1800 UTC on 17 January, rapidly weakening after coming ashore. As it moved inland, a minimum atmospheric pressure of was recorded, accompanied by wind speeds of as it slowed down after passing through the Gulf of Sidra. 
Although the system retained its strong convection for several more hours, the cyclone's cloud tops began to warm, evidencing lower clouds, before losing tropical characteristics entirely on 17 January. Offshore ship reports indicated that the medicane produced intense winds, copious rainfall, and abnormally warm temperatures. 11–13 Sep 1996 Three notable medicanes developed in 1996. The first, in mid-September 1996, was a typical Mediterranean tropical cyclone that developed in the Balearic Islands region. At the time of the cyclone's formation, a powerful Atlantic cold front and a warm front associated with a large-scale low, producing northeasterly winds over the Iberian Peninsula, extended eastward into the Mediterranean, while abundant moisture gathered in the lower troposphere over the Balearic channel. On the morning of 12 September, a disturbance developed off Valencia, Spain, dropping heavy rainfall on the coast even without coming ashore. An eye developed shortly thereafter as the system rapidly traversed Majorca and Sardinia on its eastward trek. It made landfall on the coast of southern Italy on the evening of 13 September with a minimum atmospheric pressure of , dissipating shortly after coming ashore, with a diameter of about . At Valencia and other regions of eastern Spain, the storm generated heavy precipitation, while six tornadoes touched down over the Balearic Islands. While approaching the coast of the Balearic Islands, the warm-core low induced a pressure drop of at Palma, Majorca, in advance of the tropical cyclone's landfall. Medicanes as small as the one that formed in September 1996 are atypical, and often require circumstances different even from those required for regular Mediterranean tropical cyclone formation. Warm low-level advection (the transfer of heat through air or sea) caused by a large-scale low over the western Mediterranean was a primary factor in the rise of strong convection. The presence of a mid- to upper-level cut-off cold-core low, a method of formation typical of medicanes, was also key to the development of intense thunderstorms within the cyclone. In addition, interaction between a northeastward-drifting trough, the medicane, and the large-scale low also permitted the formation of tornadoes within thunderstorms generated by the cyclone after making landfall. 4–6 Oct 1996 The second of the three recorded Mediterranean tropical cyclones in 1996 formed between Sicily and Tunisia on 4 October, making landfall on both Sicily and southern Italy. The medicane generated major flooding in Sicily. In Calabria, wind gusts of up to were reported in addition to severe inundation. Cornelia (6–11 Oct 1996) The third major Mediterranean tropical cyclone of that year formed north of Algeria, and strengthened while sweeping between the Balearic Islands and Sardinia, with an eye-like feature prominent on satellite imagery. The storm was unofficially named Cornelia. The eye of the storm was distorted and disappeared after transiting over southern Sardinia throughout the evening of 8 October, with the system weakening as a whole. On the morning of 9 October, a smaller eye emerged as the system passed over the Tyrrhenian Sea, gradually strengthening, with reports from near the storm's center indicating winds of . Extreme damage was reported in the Aeolian Islands after the tropical cyclone passed north of Sicily, though the system dissipated while turning southward over Calabria. Overall, the lowest estimated atmospheric pressure in the third medicane was . 
Both October systems featured distinctive spiral bands, intense convection, high sustained winds, and abundant precipitation. Querida (25–27 Sep 2006) A short-lived medicane, named Querida by the Free University of Berlin, developed near the end of September 2006, along the coast of Italy. The origins of the medicane can be traced to the alpine Atlas mountain range on the evening of 25 September, likely forming as a normal lee cyclone. At 0600 UTC on 26 September, European Centre for Medium-Range Weather Forecasts (ECMWF) model analyses indicated the existence of two low-pressure areas along the shoreline of Italy: one on the west coast, sweeping eastward across the Tyrrhenian Sea, while the other, slightly more intense low was located over the Ionian Sea. As the latter low approached the Strait of Sicily, it met an eastward-moving convection-producing cold front, resulting in significant intensification, while the system simultaneously reduced in size. It then achieved a minimum atmospheric pressure of approximately after transiting north-northeastward across the -wide Salentine peninsula in the course of roughly 30 minutes at 0915 UTC the same day. Wind gusts surpassing were recorded as it passed over Salento due to a steep pressure gradient associated with it, confirmed by regional radar observations denoting the presence of a clear eye. The high winds inflicted moderate damage throughout the peninsula, though specific losses are unknown. Around 1000 UTC, both radar and satellite recorded the system's entry into the Adriatic Sea and its gradual northwestward curve back toward the Italian coast. By 1700 UTC, the cyclone made landfall in northern Apulia while maintaining its intensity, with a minimum atmospheric pressure of . The cyclone weakened while drifting further inland over the Italian mainland, eventually dissipating as it curved west-southwestward. A later study in 2008 evaluated that the cyclone possessed numerous characteristics seen in tropical cyclones elsewhere, with a spiral appearance, an eye-like feature, rapid atmospheric pressure decreases in advance of landfall, and intense sustained winds concentrated near the storm's eyewall; the apparent eye-like structure in the cyclone, however, was ill-defined. Since then, the medicane has been the subject of significant study as a result of the availability of scientific observations and reports related to the cyclone. In particular, the sensitivity of this cyclone to sea-surface temperatures, initial conditions, the model, and the parameterization schemes used in the simulations was analyzed. The relevance of different instability indices for the diagnosis and the prediction of these events was also studied. Rolf (6–9 Nov 2011) In November 2011, the first Mediterranean tropical cyclone officially designated by the National Oceanic and Atmospheric Administration (NOAA) formed; it was christened Tropical Storm 01M by the Satellite Analysis Branch and given the name Rolf by the Free University of Berlin (FU Berlin), despite the fact that no agency is officially responsible for monitoring tropical cyclone activity in the Mediterranean. On 4 November 2011, a frontal system associated with another low-pressure area monitored by FU Berlin, designated Quinn, spawned a second low-pressure system inland near Marseille, which was subsequently named Rolf by the university. An upper-level trough on the European mainland stalled as it approached the Pyrenees, before approaching and interacting with the low known as Rolf. 
Heavy rainfall consequently fell over regions of southern France and northwestern Italy, resulting in widespread landslides and flooding. On 5 November, Rolf slowed while stationed above the Massif Central, maintaining a pressure of . A stationary front between Madrid and Lisbon approached Rolf the same day; its cold front later became associated with Rolf and remained so for the next couple of days. On 6 November, the cyclone drifted toward the Mediterranean from the southern shoreline of France, with the storm's frontal structure shrinking to in length. Slightly weakening, Rolf neared the Balearic Islands on 7 November, associating with two fronts producing heavy rain throughout Europe, before separating entirely and transitioning into a cut-off low. On the same day, the NOAA began monitoring the system, designating it as 01M, marking the first time that the agency officially monitored a medicane. A distinct eye-like feature developed while spiral banding and intense convection became evident. At its peak, the Dvorak technique classified the system as T3.0. Convection then gradually decreased, and a misalignment of the mid- and upper-level centers was noted. The cyclone made landfall on 9 November near Hyères in France. The system continued to rapidly weaken on 9 November, before advisories on the system were discontinued later that day, and FU Berlin followed suit by 10 November, removing the name Rolf from its weather maps and declaring the storm's dissipation. The deep warm core of this cyclone persisted for a longer time compared to most of the other documented tropical-like cyclones in the Mediterranean. At peak intensity, the storm's maximum sustained wind speed reached , with a minimum pressure of . During a nine-day period from 1 to 9 November, Storm Quinn and Rolf dropped prolific amounts of rainfall across southwestern Europe, the vast majority of which came from Rolf, with a maximum total of of rain recorded in southern France. The storm caused at least $1.25 billion (2011 USD) in damages in Italy and France. Fatalities totaled 12 people in Italy and France. Qëndresa (7–9 Nov 2014) On 6 November 2014, the low-level circulation center of Qëndresa formed near the Kerkennah Islands. As the system moved north-northeastwards and combined with an upper-level low from Tunisia early on 7 November, it occluded quickly and intensified dramatically, developing an eye-like feature thanks to favourable conditions. Qëndresa hit Malta directly after losing its fronts and developing a better-defined eye, with ten-minute sustained winds of and gusts of . The central pressure was presumed to be . Interacting with Sicily, the cyclone turned northeastwards and started to make an anticlockwise loop. On 8 November, Qëndresa crossed Syracuse in the morning and then significantly weakened. Turning southeastwards and then moving eastwards, Qëndresa passed over Crete, dissipating over the island on 11 November. 90M/"Trixi" (28–31 Oct 2016) Early on 28 October 2016, an extratropical cyclone began to develop to the south of Calabria, in the Ionian Sea. The system quickly intensified, attaining wind speeds of as it slowly moved to the west, causing high waves and minor damage to cars near the Maltese city of Valletta, before weakening the following day and beginning to move eastwards. However, later that day, it began to re-intensify and underwent a tropical transition. At 12:00 UTC on 30 October, the system showed 10-minute sustained winds of . 
It became a tropical storm on 31 October. After passing over Crete, the storm began to weaken quickly, degenerating into an extratropical low on 1 November. Tropical Storm 90M was also nicknamed "Medicane Trixi" by some media outlets in Europe during its existence. No fatalities or rainfall statistics were reported for this system, which remained over open waters for most of its duration. Numa (16–19 Nov 2017) On 11 November 2017, the remnant of Tropical Storm Rina from the Atlantic contributed to the formation of a new extratropical cyclone west of the British Isles, which absorbed Rina on the next day. On 12 November, the new storm was named Numa by the Free University of Berlin. On 14 November 2017, Extratropical Cyclone Numa emerged into the Adriatic Sea. On the following day, while crossing Italy, Numa began to undergo a subtropical transition, though the system was still extratropical by 16 November. The storm began to impact Greece as a strong storm on 16 November. Some computer models forecast that Numa could transition into a warm-core subtropical or tropical cyclone within the next few days. On 17 November, Numa completely lost its frontal system. On the afternoon of the same day, Météo France tweeted that Numa had attained the status of a subtropical Mediterranean depression. According to ESTOFEX, Numa showed numerous flags of 10-minute sustained winds in satellite data. Between 18:00 UTC on 17 November and 5:00 UTC on 18 November, Numa acquired evident tropical characteristics, and began to display a hurricane-like structure. ESTOFEX again reported . Later on the same day, Numa made landfall in Greece, with a station at Kefalonia reporting peak winds of at . The cyclone rapidly weakened into a low-pressure area, before emerging into the Aegean Sea on 19 November. On 20 November, Numa was absorbed into another extratropical storm approaching from the north. Numa hit Greece at a time when the soil was already heavily soaked from other storm systems that had arrived before it. The area was forecast to receive more than of additional rain in a 48-hour period starting on 16 November. No rainfall forecasts or measurements are known for the following days while Numa was still battering Greece. Numa resulted in 21 reported deaths. At least 1,500 homes were flooded, and residents had to evacuate. The storm caused an estimated US$100 million in damages in Europe and was the deadliest weather event Greece had experienced since 1977. Zorbas (27 Sep – 1 Oct 2018) A first outlook about the possible development of a shallow warm-core cyclone in the Mediterranean was issued by ESTOFEX on 25 September 2018, and a second extended outlook was issued on 26 September 2018. On 27 September 2018, an extratropical storm developed in the eastern Mediterranean Sea. Water temperatures of around supported the storm's transition into a hybrid cyclone, with a warm thermal core in the center. The storm moved northeastward toward Greece, gradually intensifying and developing characteristics of a tropical cyclone. On 29 September, the storm made landfall at peak intensity in the Peloponnese, west of Kalamata, where a minimum central pressure of was reported. 
ESTOFEX reported on Zorbas as "Mediterranean Cyclone 2018M02", with the same pressure of at Kalamata, further estimating the minimum central pressure of the cyclone to be , with one-minute maximum sustained winds of and a Dvorak number of T4.0, which all translate into marginal Category 1 hurricane characteristics for the cyclone. It is unknown who named the system Zorbas, but the name is officially recognized for a medicane by the Deutscher Wetterdienst. Early on 1 October, Zorbas emerged into the Aegean Sea, while accelerating northeastward. On 2 October, Zorbas moved over northwestern Turkey and dissipated. A cold wake was observed in the Mediterranean Sea, with sea surface temperatures dropping along the track of Zorbas due to strong upwelling. During its formative stages, the storm caused flash flooding in Tunisia and Libya, with around of rainfall observed. The floods killed five people in Tunisia, while also damaging homes, roads, and fields. The Tunisian government pledged financial assistance to residents whose homes were damaged. In advance of the storm's landfall in Greece, the Hellenic National Meteorological Service issued a severe weather warning. Several flights were canceled, and schools were closed. The offshore islands of Strofades and Rhodes reported gale-force winds during the storm's passage. A private weather station in Voutsaras measured wind gusts of . The storm spawned a waterspout that moved onshore. Gale-force winds in Athens knocked down trees and power lines. A fallen tree destroyed the roof of a school in western Athens. Dozens of roads were closed due to flooding. In Ioannina, the storm damaged the minaret on the top of the Aslan Pasha Mosque, which dates to 1614. From 29 to 30 September, Zorbas produced flash flooding in Greece and parts of western Turkey, with the storm dropping as much as in Greece and spawning multiple waterspouts. Three people were reported missing in Greece after the flash floods; one was found dead, but the other two remained missing as of 3 October. Zorbas was estimated to have caused millions of dollars (2018 USD) in damages. Ianos (14–20 Sep 2020) On 14 September 2020, a low-pressure area began to develop over the Gulf of Sidra, quickly developing in the coming hours while slowly moving northwest with a wind speed of around . By 15 September, it had intensified to with a minimum pressure of 1010 hPa, with further development predicted over the coming days. The cyclone had strong potential to become tropical over the next several days due to warm sea temperatures of in the region. Weather models predicted that it would likely hit the west coast of Greece on 17 or 18 September. Ianos gradually intensified over the Mediterranean Sea, acquiring an eye-like feature. Ianos made landfall in Greece at peak intensity at 03:00 UTC on 18 September, with winds peaking near and a minimum central pressure estimated at , equivalent to a minimal Category 2 hurricane. Greece assigned the system the name "Ianos", sometimes anglicized to "Janus", while the German weather service used the name "Udine", the Turkish "Tulpar", and the Italian "Cassilda". As Ianos passed to the south of Italy on 16 September, it produced heavy rain across the southern part of the country and in Sicily. As much as of rain was reported in Reggio Calabria, more than the city's normal monthly rainfall. 
Ianos left four people dead and one missing, and brought strong tides to Ionian islands such as Kefalonia, Zakynthos, Ithaca, and Lefkada, as well as winds at Karditsa that brought down trees and power lines and caused landslides. Apollo (22 Oct – 2 Nov 2021) Around 22 October 2021, an area of organized thunderstorms formed near the Balearic Islands, with the disturbance becoming more organized and developing an area of low pressure around 24 October. The low started to form a low-level center the next day and moved around the Tyrrhenian Sea, and around 28 October, the low became better organized, prompting forecast offices in Europe to name it. The most commonly used name for the cyclone is Apollo, which was used by the Free University of Berlin. On the same day, the agency Meteo of the National Observatory of Athens in Greece named it Nearchus, after the voyager of the same name. The cyclone and its precursor caused heavy rainfall and flooding in Tunisia, Algeria, southern Italy, and Malta, killing seven people in total. The storm caused over US$245 million (€219 million) in damages. Blas (5–18 Nov 2021) On 5 November, the Spanish Meteorological Agency (AEMET) started tracking a low near the Balearic Islands and named it Blas. An orange alert was issued for these islands for coastal impacts and rain. The north of Catalonia was declared an Orange Zone, as strong winds blew inland from Navarre and Aragon. Météo-France also issued a yellow alert for Aude and Pyrénées-Orientales for wind, as well as for Corsica for rain. As the system stalled between Sardinia and the Balearic Islands on 8 November, AEMET predicted a strengthening trend for the next two days and maintained its alerts. At 00:00 UTC on 11 November, the system came very close to the Balearic Islands again. On 13 November, the storm developed a spiral structure similar to those of tropical cyclones, while shedding its frontal structure. After striking the islands again, the storm slowly weakened while drifting back southeastward. On 14 November, the cyclone turned northward, moving over Sardinia and Corsica, before curving back southwestward on 15 November and moving over Sardinia again, while restrengthening in the process. On 16 November, Blas turned eastward once again, passing just south of Sardinia and moving towards Italy, before dissipating over the Tyrrhenian Sea on 18 November. On 6 November, gusts of were recorded at Es Mercadal and at the lighthouse of Capdepera in the Balearic Islands, where waves of hit the coast. Menorca was cut off after the closure of the ports of Mahón and Ciutadella. On 9 and 10 November, Blas brought high winds and heavy rain again to the Balearic Islands, causing at least 36 incidents, mostly flooding, landslides and blackouts. A crew member had to be rescued after his sailboat's mast broke, leaving the boat adrift west of Sóller. On 6 November, a waterspout was reported in Melilla, a Spanish enclave on the coast of Morocco. In France, gusts of were recorded on 7 November at Cap Béar, as well as in Leucate and in Lézignan-Corbières. The storm caused severe weather on the Algerian coast, with exceptional rainfall. On 9 November, following torrential rains on the city, a building collapsed in Algiers, killing three people. On 11 November, the heavy rain falling on Algiers caused another landslide to strike houses in the Raïs Hamidou neighborhood, killing three more people. 
From 8 to 11 November, convective bands associated with the storm caused three deaths in Sicily, bringing the total death toll to nine people. Damage from the storm has not yet been assessed. Daniel (4–12 Sep 2023) Storm Daniel was named by the Hellenic National Meteorological Service on 4 September and was expected to bring heavy rainfall and strong winds to Greece, especially the Thessaly region. On 5 September, the city of Volos was flooded extensively. The village of Zagora recorded 754 mm of rain in 24 hours, a record for Greece. The total rainfall reached 1,096 mm. As of 10 September, sixteen people were confirmed dead in Greece, seven in Turkey, and four in Bulgaria. Extensive flooding occurred in the plain of Thessaly, in Palamas, Karditsa, and the city of Larisa, and hundreds of civilians were rescued. The flood water covered a region of about 720 square kilometers. In the Halkidiki region, several seaside villages such as Ierissos experienced damage due to the strong winds. In the seaside village of Toroni in Halkidiki, a woman canoeing was swept away by the wind but was later found. The torrential rainfall was a result of a cut-off low. Early on 9 September, the system showed signs of subtropical transition. Later on that same day, it developed a warm core while an ASCAT pass recorded sustained winds of 45 knots before making landfall near Benghazi, Libya. In Libya, the storm caused flooding in Marj, the Jabal al Akhdar district, Benghazi, Susa, and Misrata, as well as the failure of two dams in Derna. The resultant flooding and heavy rain caused the deaths of at least 5,900 people in the country, making it, by a very wide margin, the deadliest Mediterranean tropical-like cyclone on record and prompting Libyan authorities to declare a state of emergency. Other tropical-like cyclones Numerous other Mediterranean tropical-like cyclones have occurred, but few have been as well-documented as the cyclones in 1969, 1982, 1983, 1995, 1996, 2006, 2011, 2014, 2017, 2018, 2020, 2021, and 2023. These lesser-known systems and their dates are given below. A study in 2000 revealed five notable and well-developed medicanes. A follow-up study in 2013 revealed several additional storms with their formation days and also additional information on medicanes. A third study, conducted in 2007, revealed additional storms with their formation days. A fourth study from 2013 presented several other cyclones and their days of development. A survey made by EUMETSAT identified many more cyclones. September 1947 September 1973 18–20 August 1976 26 March 1983 7 April 1984, 29–30 December 1984 14–18 December 1985 January 1991, 5 December 1991 21–25 October 1994 10–13 December 1996 22–27 September 1997, 30–31 October 1997, 5–8 December 1997 25–27 January 1998 19–21 March 1999, 13 September 1999 10 September 2000, 9 October 2000 27–28 May 2003, 16–19 September 2003, 27–28 September 2003, 8 October 2003 19–21 September 2004, 3–5 November 2004 August 2005, 15–16 September 2005, 22–23 October 2005, 26–28 October 2005, 14–16 December 2005 9 August 2006 19–23 March 2007 16–18 October 2007, 26 October 2007 June 2008, August 2008, September 2008, 4 December 2008 January 2009, May 2009, twice in September 2009, October 2009 12–14 October 2010, 2–4 November 2010 Twice in February 2012, 13–15 April 2012. 
"Scott", October 2019 "Trudy" ("Detlef"), November 2019 "Masinissa", November 2020 03M/"Elaina" ("Andira"), December 2020 "Hannelore", January 2023 Climatological statistics According to the databases of the Laboratory of Climatology and Atmospheric Environment, University of Athens, and METEOSAT, there were 100 recognized tropical-like cyclones in the Mediterranean Sea between 1947 and 2021. Through the steady accrual of reported and recognized occurrences of tropical-like cyclones (medicanes), the count had reached at least 89 by 15 November 2021. Unlike most northern hemisphere cyclone seasons, Mediterranean tropical-like cyclone activity peaks between the months of September and January. The numbers do not necessarily mean that all occurrences of medicanes have been captured, particularly before the end of the 1980s. With the development (and constant improvement) of satellite-based observations, the number of clearly identified medicanes increased from the 1980s onward. There might be an additional impact from climate change on the frequency of the observed medicanes, but this is not deducible from the data. Deadly storms The following is a list of all medicanes that caused deaths. Tropical-like cyclones in the Black Sea On a number of occasions, tropical-like storms similar to those observed in the Mediterranean have formed in the Black Sea, including storms on 21 March 2002, 7–11 August 2002, and 25–29 September 2005. The 25–29 September 2005 cyclone is particularly well-documented and investigated. See also 1996 Lake Huron cyclone 2006 Central Pacific cyclone European windstorm (fully extratropical) South Atlantic tropical cyclone Subtropical Cyclone Katie Subtropical Cyclone Lexi Subtropical Storm 96C Tropical cyclogenesis Tropical cyclone basins Tropical cyclone effects in Europe Unusual areas of tropical cyclone formation References Citations Sources External links Mediterranean Tropical Products Page – Satellite Services Division – Office of Satellite Data Processing and Distribution Northeast Atlantic and Mediterranean Imagery – NOAA Climate change and hurricanes Natural disasters Tropical cyclone meteorology Types of cyclone
Mediterranean tropical-like cyclone
Physics
10,654
2,313,845
https://en.wikipedia.org/wiki/Airport%20lounge
An airport lounge is a facility operated at many airports. Airport lounges offer, for selected passengers, comforts beyond those afforded in the airport terminal, such as more comfortable seating, quieter environments, and better access to customer service representatives. Other accommodations may include private meeting rooms, telephones, wireless internet access and other business services, along with provisions to enhance passenger comfort, such as free drinks, snacks, magazines, and showers. The American Airlines Admirals Club was the first airport lounge when it opened at New York City's La Guardia Airport in 1939. Then-AA president C. R. Smith conceived it as a promotional tool. Types Airline lounges Airlines operate airline lounges as a service to premium passengers, usually passengers flying first class and business class, with high-level frequent flyer status, and premium travel credit card memberships. Most major carriers have one or more lounges in their hubs and focus cities as well as in the major airports they serve. The major US airlines—American (Admirals Club), Delta (Delta Sky Club), and United (United Club)—operate dozens of lounges, while smaller airlines like Alaska Airlines (Alaska Lounge) tend to only operate a handful of lounges in their hub and focus cities. Airlines outside of Australia and North America generally do not sell lounge memberships, and instead reserve lounge access exclusively for very frequent flyers and passengers in premium cabins. However, a passenger who has a lounge membership in an airline in one of the three major airline alliances (Oneworld, SkyTeam, or Star Alliance) may have access to the lounges of the other members of that alliance. For example, Qantas Club membership provides access to the Admirals Club lounges due to a reciprocal arrangement with American Airlines; similarly, a member of the United Club or other Star Alliance members can access lounges of Air Canada and Air New Zealand. It is, however, not uncommon for non-alliance members to agree individually to allow usage of each other's lounges. For example, although Alaska Airlines operates just nine Alaska Lounges, its members have access to American Airlines Admirals Club (and vice versa). While Alaska Airlines is now part of the Oneworld alliance, this arrangement predated their membership. Several credit card companies offer their own branded lounges accessible to certain cardholders. American Express operates Centurion Lounges in the United States as well as in Hong Kong. JPMorgan Chase and Capital One have announced plans to open their own lounges for cardholders. Pay-per-use lounges Private companies, such as Airport Dimensions by Collinson Group, Aspire Lounges by Swissport, Plaza Premium Lounge, and Global Lounge Network, also operate generic pay-per-use lounges. In contrast to airline lounges, these facilities are open to any traveller traversing the airport, regardless of class of ticket or airline, subject to payment of a fee. Most only offer day passes, but some also offer yearly and lifetime memberships. Access to the lounges can be booked via online platforms such as LoungeBuddy or, in limited cases, one-day passes can be purchased directly at the lounge entrance. First class airline lounges For many airlines, a first class lounge will also be offered to international first class and top-tier passengers. 
First class lounges are usually more exclusive and will feature extra amenities over business class that are more in line with the European/Asian concept of an airport lounge. In the few cases where an amenity is offered only in the business class lounge, first class passengers are permitted to use the business lounge if they wish. In any case, anyone with first class lounge access almost automatically has access to the business class lounge, for example when a traveling companion is not in first class and cannot be brought into the first class lounge as a guest. In most cases, airlines will offer first class passengers a free pass to their standard airport club. Some airlines offer "arrival lounges" for passengers to shower, rest, and eat after a long-haul international flight. Access to lounges Access to airport lounges may be obtained in several ways. In Australia, Canada, and the United States, a common method to gain access is by purchasing an annual or a lifetime membership, while in Asia and Europe this is usually not possible. Membership fees are sometimes discounted for elite members of an airline's frequent flyer program and may often be paid using miles. Certain high-end credit cards associated with an airline or lounge network, such as the Chase Sapphire Reserve, Delta Reserve, and United MileagePlus Club credit cards, include membership in Priority Pass and associated lounge access for as long as one owns the card. Lounge access can also be attained with an airline status card, which is common in Europe. The top frequent-flyer levels often offer access to any of an airline's lounges or partner airlines' lounges, when traveling in any class of travel on any of the partner airlines (usually the cardholder must be booked on one of the carrier's flights within the next 24 hours). Most airlines also offer free lounge access to anyone in their premium cabins (first class or business class) on their days of travel; in North America this is usually only available to passengers on intercontinental or transcontinental flights. Pay-per-use lounges can be accessed by anyone, irrespective of airline or flight class. Some offer further benefits when booking directly with them rather than through a reseller. Independent programs, such as Collinson's Priority Pass, offer access to selected airline lounges for an annual fee, while Go Simply, Holiday Extras, LoungePass, and some offerings by independent and airline lounge programs offer pay-per-use and/or prebookable access without the need for membership. Premium credit and charge cards may also offer lounge programs for members. Some banks, like ABN Amro and HSBC, offer lounge access for premium clients. American Express also offers access to lounges belonging to Priority Pass and is expanding its own line of lounges. Amenities Besides offering more comfortable seating, lounges usually provide complimentary alcoholic and non-alcoholic beverages, and light snacks such as fruit, cheese, soup, pastries and breakfast items. In the United States and Canada, nearly all domestic lounges offer an open bar for domestic beer, house wine and well liquor. In the United States, premium beverages such as imported beer, top-shelf liquor, high-end wines and champagne are often available for purchase. In U.S. states where open bars are prohibited by law, non-premium beverages may be sold at a token rate (e.g. $1 per drink). 
Other amenities typically include flight information monitors, televisions, newspapers, and magazines, plus business centers with desks, internet workstations, telephones, photocopiers and fax services. Complimentary wireless Internet access for patrons is also common. In Asia, Europe and the Middle East, lounges (especially those for first class passengers) can be quite luxurious, offering an extensive premium open bar, full hot and cold buffet meals, cigar rooms, spa and massage services, fitness centers, private cabanas, nap suites and showers. Some lounges have pool tables as amenities. Additionally, Nokia has installed wireless charging stations in lounges at some London airports. Lounges in other modes of transport Facilities similar to airport lounges can be found in large train stations (such as Amtrak's ClubAcela lounges or the DB Lounge offered by Deutsche Bahn), mainly for first-class inter-city rail, high-speed rail or night train passengers. In the case of Frankfurt Airport and Frankfurt Airport long-distance station, both the airport and the train station serving it have lounges for their respective premium passengers. Given that DB and Lufthansa offer combined air-rail alliance tickets, it is possible for the same ticket to qualify for lounge access in both. References Lounge Rooms
Airport lounge
Engineering
1,627
39,050,345
https://en.wikipedia.org/wiki/Electrospark%20deposition
Electrospark deposition is a micro-welding manufacturing process typically used to repair damage to precision or valuable mechanical components such as injection moulding tools. This process may also be referred to as "spark hardening", "electrospark toughening" or "electrospark alloying". References Welding Coatings
Electrospark deposition
Chemistry,Engineering
66
24,315,140
https://en.wikipedia.org/wiki/Microsporum%20canis
Microsporum canis is a pathogenic, asexual fungus in the phylum Ascomycota that infects the upper, dead layers of skin on domesticated cats, and occasionally dogs and humans. The species has a worldwide distribution. Taxonomy and evolution Microsporum canis reproduces by means of two conidial forms: large, spindle-shaped, multicelled macroconidia and small, single-celled microconidia. First records of M. canis date to 1902. Evolutionary studies have established that M. canis, like the very closely related sibling species M. distortum and M. equinum, is a genetic clone derived from the sexually reproducing species, Arthroderma otae. Members of Ascomycota often possess conspicuous asexual and sexual forms that can coexist in time and space. Microsporum canis exemplifies a common situation in ascomycetous fungi in which, over time, one mating type strain has undergone habitat divergence from the other and established a self-sustaining reproductive population that consists only of the asexual form. It is hypothesized that the asexual lineage of Microsporum canis evolved as a result of host-specific interactions, changes in ecological niche, as well as geographic isolation of the + and – mating types of Arthroderma otae, making it difficult to sustain sexual reproduction. Early domestication of animals, such as cats and dogs, in Africa led to the later evolution of the host-specific fungus Microsporum canis, which is commonly associated with loose-fur animals. Nearly all reported isolates of Microsporum canis represent the "–" mating strain of A. otae. Together with two closely related taxa, M. ferrugineum and M. audouinii, the clade is thought to have an African center of origin. Morphology Colony morphology Microsporum canis forms a white, coarsely fluffy spreading colony with a distinctive "hairy" or "feathery" texture. On the underside of the growth medium, a characteristic deep yellow pigment develops due to the metabolites secreted by the fungus. The intensity of this yellow pigmentation peaks on the sixth day of colony growth and fades gradually, making the identification of older colonies difficult. Some strains of M. canis fail to produce yellow pigment altogether, exhibit abnormally slow colony growth and form undeveloped macroconidia. Cultivation on polished rice tends to reestablish the typical growth morphology and is helpful for identification. Microscopic morphology Microsporum canis reproduces asexually by forming macroconidia that are asymmetrical, spindle-shaped and have cell walls that are thick and coarsely roughened. The interior portion of each macroconidium is typically divided into six or more compartments separated by broad cross-walls. Microsporum canis also produces microconidia that resemble those of many other dermatophytes and thus are not a useful diagnostic feature. Identification Microsporum canis produces infections of scalp and body sites, creating highly inflammatory lesions associated with hair loss. Infection by this species can often be detected clinically using Wood's lamp, which causes infected tissues to fluoresce bright green. Fluorescence is attributed to the metabolite pteridine, which is produced by the fungus in actively growing hairs. Infected hairs remain fluorescent for prolonged periods of time (years), even after the death of the fungus. Despite the frequent use of Wood's lamp in the clinical evaluation of ringworm infections, diagnosis of M. canis requires the performance of additional tests given the potential for false positives. 
Culture of the fungus is most commonly used to evaluate morphological and physiological parameters of growth, and to confirm the identity of the agent. Growth of the fungus on Sabouraud's agar (4% glucose), Mycosel or rice medium characteristically yields the bright yellow pigment. Microscopic examination of the growth can show the presence of the typical, warted and spindle-shaped macroconidia, confirming the identity of the isolate as M. canis. The in vitro hair perforation test, commonly used to differentiate many dermatophytes, is not particularly useful for this species, as it reveals the formation of "pegs" that penetrate into hair shafts, a characteristic shared widely among many zoophilic species. Genetic analyses can be useful to establish the identity of atypical strains of M. canis; however, the highly characteristic appearance of this species generally obviates the need for this more sophisticated method. Most M. canis infections are caused by the "-" mating strain of its sexual progenitor, Arthroderma otae. Microsporum canis has no specific growth factor or nutrition requirements, and hence grows well on most commercially available media. In addition, M. canis exhibits rapid colony growth at 25 °C. Two growth media help distinguish M. canis from other Microsporum spp. (notably the morphologically similar species M. audouinii): polished rice and potato dextrose agar. On potato dextrose agar, M. canis produces a lemon-yellow pigment that is easily visualized due to the presence of aerial hyphae, while on polished rice, most isolates (even atypical strains) produce yellow pigment. Pathophysiology It is considered a zoophilic dermatophyte, given that it typically colonizes the outer surface of an animal's body. Hence animals, cats and dogs in particular, are believed to be the population hosts of this fungus, while humans are occasional hosts in which the fungus can induce secondary infections. Microsporum canis has been identified as a causal agent of ringworm infection in pets, and of tinea capitis and tinea corporis in humans, children in particular. Microsporum canis is among the most common dermatophytes associated with tinea capitis and tinea corporis. Unlike some dermatophyte species, M. canis typically does not cause large epidemics. Humans become infected as a result of direct or indirect contact with infected pets. Microsporum canis generally invades hair and skin; however, some nail infections have been reported. When hair shafts are infected, M. canis causes an ectothrix-type infection where the fungus envelops the exterior of the hair shaft without the formation of internal spores. This colonization of the hair shaft causes it to become unsheathed, resulting in characteristic round or oval non-inflammatory lesions that develop on the scalp. Infection triggers an acute leukocytic reaction in subcutaneous tissues, which gradually becomes highly inflammatory and, in the case of tinea capitis, leads to hair loss. Diagnosis Typically, infections caused by M. canis are associated with alopecia in the case of tinea capitis, while ringworm infections in pets produce characteristic inflammatory lesions, which may or may not result in hair loss. This species has a propensity to cause subclinical infections in some animal species; long-haired cats in particular are frequent reservoirs of infection. Isolation of the fungus from brushed pet hair can aid in detection of either an actively growing fungus or a passive carriage of fungal hyphae or arthroconidia. 
In asymptomatic cases, it is highly recommended to perform both a Wood's lamp examination and microscopic analyses of suspected areas. In the case of transient carriers, the lack of clinical manifestations is accompanied by a low number of M. canis colonies, which declines upon re-testing. Treatment Microsporum canis infections can be easily managed by topical antifungal agents; however, severe cases may necessitate systemic therapy with griseofulvin, itraconazole or terbinafine. Treatment of human cases also requires the identification and elimination of the infectious reservoir, which typically involves the investigation and treatment of colonized animals and the elimination of infected bedding and other environmental reservoirs. Habitat Despite its species name ("canis" implies dogs), the natural host of M. canis is the domestic cat. However, this species can colonize dogs and horses as well. In all cases, it resides on the skin and fur. Microsporum canis may also persist as dormant spores in the environment for prolonged periods. Geographic distribution Microsporum canis has a worldwide distribution. Extremely high occurrence has been reported in Iran, while lower incidence is associated with England and Scandinavian countries, as well as South American countries. Microsporum canis is uncommon in some parts of the US and Europe, and is completely absent from equatorial Africa. References Arthrodermataceae Fungi described in 1900 Parasitic fungi Fungus species
Microsporum canis
Biology
1,796
192,628
https://en.wikipedia.org/wiki/Financial%20Crimes%20Enforcement%20Network
The Financial Crimes Enforcement Network (FinCEN) is a bureau within the United States Department of the Treasury that collects and analyzes information about financial transactions to combat domestic and international money laundering, terrorist financing, and other financial crimes. Mission FinCEN's stated mission is to "safeguard the financial system from illicit activity, counter money laundering and the financing of terrorism, and promote national security through strategic use of financial authorities and the collection, analysis, and dissemination of financial intelligence." FinCEN serves as the U.S. Financial Intelligence Unit (FIU) and is one of 147 FIUs making up the Egmont Group of Financial Intelligence Units. FinCEN's self-described motto is "follow the money." It is a network bringing people and information together by coordinating information sharing with law enforcement agencies, regulators and other partners in the financial industry. History FinCEN was established by Treasury Order 105-08 on April 25, 1990. In May 1994, its mission expanded to include regulatory responsibilities. In October 1994, Treasury's Office of Financial Enforcement merged with FinCEN. On September 26, 2002, after passage of Title III of the PATRIOT Act, Treasury Order 180-01 designated FinCEN as an official bureau within the Department of the Treasury. Since 1995, FinCEN has employed the FinCEN Artificial Intelligence System (FAIS). In September 2012, FinCEN's information technology system, the FinCEN Portal and Query System, migrated with 11 years of data into FinCEN Query, a search engine similar to Google. It is a "one stop shop" accessible via the FinCEN Portal, allowing broad searches across more fields than before and returning more results. Since September 2012, FinCEN has generated four new reports: the Suspicious Activity Report (SAR), the Currency Transaction Report (CTR), the Designation of Exempt Person (DOEP), and the Registered Money Service Business (RMSB). Organization As of November 2013, FinCEN employed approximately 340 people, mostly intelligence professionals with expertise in the financial industry, illicit finance, financial intelligence, the AML/CFT (anti-money laundering / combating the financing of terrorism) regulatory regime, computer technology, and enforcement. The majority of the staff are permanent FinCEN personnel, with about 20 long-term detailees assigned from 13 different regulatory and law enforcement agencies. FinCEN shares information with dozens of intelligence agencies, including the Bureau of Alcohol, Tobacco, and Firearms; the Drug Enforcement Administration; the Federal Bureau of Investigation; the U.S. Secret Service; the Internal Revenue Service; the Customs Service; and the U.S. Postal Inspection Service. FinCEN directors Brian M. Bruh (1990–1993) Stanley E. Morris (1994–1998) James F. Sloan (April 1999 – October 2003) William J. Fox (December 2003 – February 2006) Robert W. Werner (March 2006 – December 2006) James H. Freis, Jr. (March 2007 – August 2012) Jennifer Shasky Calvery (September 2012 – May 2016) Jamal El-Hindi (Acting, June 2016 – November 2017) Kenneth Blanco (November 2017 – April 2021) Michael Mosier (Acting, April 2021 – August 2021) Himamauli Das (Acting, August 2021 – September 2023) Andrea Gacki (July 2023 – Present) 314 program The 2001 USA PATRIOT Act required the Secretary of the Treasury to create a secure network for the transmission of information to enforce the relevant regulations. 
FinCEN's regulations under Section 314(a) enable federal law enforcement agencies, through FinCEN, to reach out to more than 45,000 points of contact at more than 27,000 financial institutions to locate accounts and transactions of persons who may be involved in terrorist financing and/or money laundering. A web interface allows the person(s) designated in §314(a)(3)(A) to register and transmit information to FinCEN. The partnership between the financial community and law enforcement allows disparate bits of information to be identified, centralized, and rapidly evaluated. Hawala In 2003, FinCEN disseminated information on "informal value transfer systems" (IVTS), including hawala, a network of people receiving money for the purpose of making the funds payable to a third party in another geographic location, generally taking place outside of the conventional banking system through non-bank financial institutions or other business entities whose primary business activity may not be the transmission of money. On September 1, 2010, FinCEN issued guidance on IVTS referencing United States v. Banki and hawala. Virtual currencies In July 2011, FinCEN added "other value that substitutes for currency" to its definition of money services businesses in preparation for adapting the respective rule to virtual currencies. On March 18, 2013, FinCEN issued guidance regarding virtual currencies, according to which exchangers and administrators, but not users, of convertible virtual currency are considered money transmitters, and must comply with rules to prevent money laundering/terrorist financing ("AML/CFT") and other forms of financial crime, by record-keeping, reporting and registering with FinCEN. Jennifer Shasky Calvery, director of FinCEN, said, "Virtual currencies are subject to the same rules as other currencies. … Basic money services business rules apply here." At a November 2013 Senate hearing, Calvery stated, "It is in the best interest of virtual currency providers to comply with these regulations for a number of reasons. First is the idea of corporate responsibility," contrasting Bitcoin's understanding of a peer-to-peer system bypassing corporate financial institutions. She stated that FinCEN collaborates with the Federal Financial Institutions Examination Council, a congressionally chartered forum called the "Bank Secrecy Act (BSA) Advisory Group", and the BSA Working Group to review and discuss new regulations and guidance; with the FBI-led "Virtual Currency Emerging Threats Working Group" (VCET), formed in early 2012; the FDIC-led "Cyber Fraud Working Group"; the Terrorist Financing & Financial Crimes-led "Treasury Cyber Working Group"; and with a community of other financial intelligence units. According to the Department of Justice, VCET members represent the FBI, the Drug Enforcement Administration, multiple U.S. Attorney's Offices, and the Criminal Division's Asset Forfeiture and Money Laundering Section and Computer Crime and Intellectual Property Section. In 2021, amendments to the Bank Secrecy Act and the federal AML/CFT framework officially incorporated existing FinCEN guidelines on digital assets. The legislation was updated to encompass "value that substitutes for currency," reinforcing FinCEN's authority over digital assets. As a result, exchanges dealing in these assets were required to register with FinCEN and adhere to specific reporting and recordkeeping obligations for transactions involving certain types of digital assets. 
In 2021, FinCEN received 1,137,451 Suspicious Activity Reports (SARs) from both traditional financial institutions and cryptocurrency trading entities. Within this total, there were reports of 7,914 suspicious cyber events and 284,989 potential money laundering activities. Beneficial Ownership Information Reports FinCEN is the regulatory agency tasked with overseeing the Beneficial Ownership Information Reporting (BOIR) system in the U.S. This responsibility was established under the Corporate Transparency Act (CTA), which mandates that certain business entities disclose information about their beneficial owners to FinCEN. The CTA aims to enhance transparency and combat financial crimes by preventing the use of anonymous shell companies for illicit purposes. On December 3, 2024, the U.S. District Court for the Eastern District of Texas issued a preliminary injunction against nationwide implementation of the CTA, citing concerns about its constitutionality and impact on small businesses. Treasury filed a notice of appeal on December 5, 2024. FinCEN administers the BOIR system to collect and maintain accurate records of beneficial ownership information. This information includes details such as the names, addresses, dates of birth, and identification numbers of individuals who ultimately own or control companies. By centralizing this data, FinCEN supports law enforcement efforts to investigate and prosecute financial crimes, ensuring greater accountability and integrity within the corporate sector. Controversies In 2009, the GAO found "opportunities" to improve "interagency and state examination coordination", noting that the federal banking regulators issued an interagency examination manual, that the SEC, the CFTC, and their respective self-regulatory organizations developed Bank Secrecy Act (BSA) examination modules, and that FinCEN and the IRS, which examine nonbank financial institutions, issued an examination manual for money services businesses. Therefore, multiple regulators examine BSA compliance across industries and, for some larger holding companies, even within the same institution. Regulators need to promote greater consistency, coordination and information-sharing, reduce unnecessary regulatory burden, and find concerns across industries. FinCEN estimated that it would have data access agreements with 80 percent of state agencies that conduct BSA examinations after 2012. Since FinCEN's inception in 1990, the Electronic Frontier Foundation in San Francisco has debated its benefits compared to its threat to privacy. FinCEN does not disclose how many Suspicious Activity Reports result in investigations, indictments or convictions, and no studies exist to tally how many reports are filed on innocent people. FinCEN and money laundering laws have been criticized for being expensive and relatively ineffective while violating Fourth Amendment rights, as an investigator may use FinCEN's database to investigate people instead of crimes. It has also been alleged that FinCEN's regulations against structuring are enforced unfairly and arbitrarily; for example, it was reported in 2012 that small businesses selling at farmers' markets have been targeted, while politically connected people like Eliot Spitzer were not prosecuted. Spitzer's reasons for structuring were described as "innocent". 
In February 2019, it was reported that Mary Daly, the oldest daughter of United States Attorney General William Barr, was to leave her position at the United States Deputy Attorney General's office for a FinCEN position. In September 2020, findings based on a set of 2,657 documents including 2,121 suspicious activity reports (SARs) leaked from FinCEN were published as the FinCEN Files. The leaked documents showed that although both FinCEN and the banks that filed SARs knew about billions of dollars in dirty money being moved through the banks, both did very little to prevent the transactions. In popular culture The 2016 film The Accountant features a FinCEN investigation into the title character. In the first episode of the 2017 Netflix show Ozark, FinCEN is mentioned as one of the agencies (along with the DEA, ATF, and FBI) active in monitoring cartel activity in Chicago. See also Casino regulations under the Bank Secrecy Act Currency transaction report FINTRAC – Canada's equivalent to FinCEN Timeline of post-election transition following Russian interference in the 2016 United States elections Timeline of investigations into Trump and Russia (January–June 2017) Timeline of investigations into Trump and Russia (July–December 2018) Title 31 of the Code of Federal Regulations List of financial regulatory authorities by jurisdiction References External links FinCEN in the Federal Register 1990 establishments in Washington, D.C. Federal law enforcement agencies of the United States Anti-money laundering organizations Financial regulatory authorities of the United States Tax evasion in the United States Counterterrorism in the United States United States Department of the Treasury Vienna, Virginia Banking crimes Government by algorithm United States intelligence agencies
Financial Crimes Enforcement Network
Engineering
2,308
64,039,314
https://en.wikipedia.org/wiki/Lactic%20acid/citric%20acid/potassium%20bitartrate
Lactic acid/citric acid/potassium bitartrate, sold under the brand name Phexxi, is a non-hormonal combination medication used as a method of birth control. It contains lactic acid, citric acid, and potassium bitartrate. It is a gel inserted into the vagina. The most common adverse reactions include vulvovaginal burning sensation, vulvovaginal pruritus, vulvovaginal mycotic infection, urinary tract infection, vulvovaginal discomfort, bacterial vaginosis, vaginal discharge, genital discomfort, dysuria, and vulvovaginal pain. Medical uses The combination is indicated for the prevention of pregnancy in females of reproductive potential for use as an on-demand method of contraception. History The combination was approved for medical use in the United States in May 2020. References Barrier contraception Combination drugs Methods of birth control Spermicide Vagina
Lactic acid/citric acid/potassium bitartrate
Biology
196
10,646,392
https://en.wikipedia.org/wiki/Mortality%20%28computability%20theory%29
In computability theory, the mortality problem is a decision problem related to the halting problem. For Turing machines, the halting problem can be stated as follows: given a Turing machine and a word, decide whether the machine halts when run on the given word. In contrast, the mortality problem for Turing machines asks whether all executions of the machine, starting from any configuration, halt. In the statement above, a configuration specifies the machine's state (not necessarily its initial state), the position of the head, and the contents of the tape. While we usually assume that in the starting configuration all but finitely many cells on the tape are blank, in the mortality problem the tape can have arbitrary content, including infinitely many non-blank symbols written on it. Philip K. Hooper proved in 1966 that the mortality problem is undecidable. This is true both for a machine with a tape infinite in both directions and for a machine with a semi-infinite tape. Note that this result does not follow directly from the well-known total function problem (does a given machine halt for every input?), since the latter problem concerns only valid computations, i.e. those starting from an initial configuration. The variant in which only finite configurations are considered is also undecidable, as proved by Herman, who calls it "the uniform halting problem". He shows that the problem is not just undecidable, but $\Pi^0_2$-complete. Additional Models The problem can naturally be rephrased for any computational model in which there are notions of "configuration" and "transition". A member of the model is mortal if no configuration leads to an infinite chain of transitions. The mortality problem has been proved undecidable for: Semi-Thue systems and Markov algorithms. Counter machines. Dynamical systems over $\mathbb{Z}^n$, $\mathbb{Q}^n$, or $\mathbb{R}^n$, for $n \ge 2$, where the transition function is piecewise linear (here, an arbitrary point, e.g., the origin, is selected as a halting state). References Theory of computation Undecidable problems
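While the problems above are undecidable, mortality becomes decidable for a system whose configuration space is explicitly finite: by the pigeonhole principle, any run longer than the number of configurations must revisit one and therefore never halts. A minimal C sketch under that finiteness assumption (the transition table is invented for illustration):

```c
#include <stdio.h>

#define N 5
/* next_cfg[i]: successor of configuration i, or -1 if the system halts there.
   The table is a made-up example; 3 -> 4 -> 3 forms a cycle. */
static const int next_cfg[N] = {1, 2, -1, 4, 3};

/* The system is mortal iff every configuration leads to a halt.
   A deterministic run that revisits a configuration repeats forever,
   so any halting run takes at most N steps; running N+1 steps is enough
   to distinguish halting from cycling. */
static int is_mortal(void) {
    for (int start = 0; start < N; start++) {
        int c = start;
        for (int steps = 0; steps <= N && c != -1; steps++)
            c = next_cfg[c];
        if (c != -1)
            return 0;  /* still running after N+1 steps: a cycle exists */
    }
    return 1;
}

int main(void) {
    printf("mortal: %s\n", is_mortal() ? "yes" : "no");  /* prints "no" */
    return 0;
}
```

For Turing machines no such bound exists, because the configuration space (all possible tape contents) is infinite; that is exactly what the undecidability results above exploit.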
Mortality (computability theory)
Mathematics,Technology
420
33,062,044
https://en.wikipedia.org/wiki/Stormwater%20fee
A stormwater fee is a charge imposed on real estate owners for pollution in stormwater drainage from impervious surface runoff. The charge is proportional to the total impervious area on a particular property, including concrete or asphalt driveways and roofs, which do not allow rain to infiltrate. In other words, the more area covered by impervious surfaces, the more stormwater is generated and conveyed to the sewer, and the higher the stormwater fee. Germany Equivalence issues were raised concerning the imposition of sewage fees based on water supply usage. In 1985, in order to ensure the legal equity of charging based on the polluter pays principle, the German Federal Administrative Court and the local high court ruled that sewage system charges should be collected separately as usage fees for rainwater exclusion and as usage fees for sewage exclusion. This ruling became a decisive motive for the change in how the German sewage system handles rainwater. In the 1990s, the German states took legal action to impose a stormwater tax on developers, such as builders who created artificial (impervious) surfaces. There was considerable resistance, but Berlin, which had been the strongest opponent, accepted the plan in 2000. About 73 percent of cities with a population of 100,000 or more apply a separate calculation method that divides the cost of sewage into a rainwater component. Germany typically counts concrete, asphalt, and building roofs as impervious areas and charges an annual fee of about $2.60 per m². Builders install rainwater storage tanks and infiltration facilities to receive reductions in fees. In addition, outdoor plant cultivation facilities and green business are also proposed as alternatives for reducing the stormwater fee. Germany has seen two effects in this regard: increased rainwater recycling rates and reduced sewage fees and tap water usage. Rainwater recycling plays a significant role in preventing cities from flooding in the event of heavy rain and in saving energy. Italy Such taxes have been collected in regions of Italy, attracting international attention and challenging Italian tax morale. Authorities in Ravenna, Italy, have imposed a 3% increase on local water bills to maintain and improve drainage systems. Officials cite the severe damage inflicted by heavy rain on infrastructure, buildings and agriculture in the Po valley, insisting that this money urgently needs to be recouped. The local water board, which wants to backdate the new tax three years, claims that the payments will save it €1 million a year. Gianluca Dradi, head of environmental policy for the Ravenna city council, likened the levy to a street cleaning tax and clarified that those paying more for their water use, such as factories, will pay proportionately more than individual households. "Including the cost in water bills is more equitable," he told the Repubblica newspaper. However, consumer organisations are opposing the move, and residents have been urged to resist the authorities and refuse to pay the tax. United States A 2023 study estimated that at least 2,100 stormwater utilities have been established in 42 states and the District of Columbia. Most of the utilities charge fees that apply to property owners. Maryland The Maryland General Assembly enacted a stormwater management fee program in House Bill 987 (April 2012), which was signed into law by then-governor Martin O'Malley.
The law applies to the largest urban jurisdictions in Maryland (nine counties and the City of Baltimore) in order to meet the requirements of the federal Clean Water Act as it concerns the Chesapeake Bay watershed. The Tax Foundation stated that House Bill 987 "was passed in response to a decree by the Environmental Protection Agency (EPA) formally known as the Chesapeake Bay Total Maximum Daily Load, which identified mandatory reductions in nitrogen, phosphorus, and sediment that damage the Chesapeake Bay." The EPA TMDL requirements apply to Maryland, Virginia, New York, Pennsylvania, West Virginia, and the District of Columbia. As of 2018, polluted runoff was the only pollution source in the Chesapeake Bay watershed that was still increasing. The fee does not tax rain itself; it has been implemented in Maryland in varying ways at the county level, such as a flat fee per property owner or a fee based on impervious surface square footage. The law specifies that accrued funds must be used for specified stormwater pollution-related purposes. The law was modified in 2015 to make the county-assessed fees optional rather than mandatory, while still holding the counties responsible for making progress on managing polluted runoff. Illinois The Illinois General Assembly passed Public Act 98-0335 in August 2013. The law provides DuPage and Peoria counties with the option of charging fees to residents whose property benefits from county stormwater management. HB1522 allows the counties to assess the fee in a nonuniform manner, based on their own rules, exemptions and special considerations. Home rule municipalities in Illinois have always had the ability to establish special fees under their own ordinances. The city of Elgin, Illinois was planning to assess its Stormwater Utility Tax in 2014, but public opinion and election results led the Elgin City Council to unanimously reject the tax. Pennsylvania The city of Pittsburgh issued a stormwater fee requirement, applicable to residential and non-residential property owners, effective in 2022. Virginia To help comply with the Chesapeake Bay TMDL requirements, most large jurisdictions in Virginia have also enacted stormwater fees, including Alexandria, Arlington County, Richmond, Roanoke and Virginia Beach. References Environmental tax Stormwater management
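As a worked illustration of the proportional-fee arithmetic described in the Germany section above (the parcel sizes here are invented for the example): at the quoted rate of about $2.60 per m² per year, a property with a 150 m² roof and a 50 m² paved driveway would owe

$$ \$2.60/\mathrm{m}^2 \times (150 + 50)\,\mathrm{m}^2 = \$520 \text{ per year}, $$

and converting the driveway to a permeable surface would reduce the billed impervious area, and hence the fee, by one quarter.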
Stormwater fee
Chemistry,Environmental_science
1,108
535,617
https://en.wikipedia.org/wiki/Category%20of%20metric%20spaces
In category theory, Met is a category that has metric spaces as its objects and metric maps (continuous functions between metric spaces that do not increase any pairwise distance) as its morphisms. This is a category because the composition of two metric maps is again a metric map. It was first considered by Isbell (1964). Arrows The monomorphisms in Met are the injective metric maps. The epimorphisms are the metric maps whose image is dense in the codomain. The isomorphisms are the isometries, i.e. metric maps that are injective, surjective, and distance-preserving. As an example, the inclusion of the rational numbers into the real numbers is both a monomorphism and an epimorphism, but it is clearly not an isomorphism; this example shows that Met is not a balanced category. Objects The empty metric space is the initial object of Met; any singleton metric space is a terminal object. Because the initial and terminal objects differ, there are no zero objects in Met. The injective objects in Met are called injective metric spaces. Injective metric spaces were introduced and studied first by Aronszajn and Panitchpakdi, prior to the study of Met as a category; they may also be defined intrinsically in terms of a Helly property of their metric balls, and because of this alternative definition Aronszajn and Panitchpakdi named these spaces hyperconvex spaces. Any metric space has a smallest injective metric space into which it can be isometrically embedded, called its metric envelope or tight span. Products and functors The product of a finite set of metric spaces in Met is a metric space that has the cartesian product of the spaces as its points; the distance in the product space is given by the supremum of the distances in the base spaces. That is, it is the product metric with the sup norm. However, the product of an infinite set of metric spaces may not exist, because the distances in the base spaces may not have a finite supremum. That is, Met is not a complete category, but it is finitely complete. There is no coproduct in Met. The forgetful functor Met → Set assigns to each metric space the underlying set of its points, and assigns to each metric map the underlying set-theoretic function. This functor is faithful, and therefore Met is a concrete category. Related categories Met is not the only category whose objects are metric spaces; others include the category of uniformly continuous functions, the category of Lipschitz functions and the category of quasi-Lipschitz mappings. The metric maps are both uniformly continuous and Lipschitz, with Lipschitz constant at most one. See also References Metric spaces Metric geometry
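Concretely, the finite product described above carries the sup metric: for metric spaces $(X_1,d_1),\ldots,(X_n,d_n)$,

$$ d\big((x_1,\ldots,x_n),(y_1,\ldots,y_n)\big) \;=\; \max_{1\le i\le n} d_i(x_i,y_i). $$

With this metric the projections do not increase distances, so they are morphisms in Met; for infinitely many factors the corresponding supremum $\sup_i d_i(x_i,y_i)$ may be infinite, which is the obstruction to arbitrary products noted above.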
Category of metric spaces
Mathematics
575
35,046,060
https://en.wikipedia.org/wiki/Abell%201991
Abell 1991 is a galaxy cluster in the Abell catalogue. See also Abell catalogue List of Abell clusters References 1991 Galaxy clusters Abell richness class 1
Abell 1991
Astronomy
35
5,459,797
https://en.wikipedia.org/wiki/Thai%20Institute%20of%20Chemical%20Engineering%20and%20Applied%20Chemistry
The Thai Institute of Chemical Engineering and Applied Chemistry (TIChE) is a professional organization for chemical engineers. TIChE was established in 1996 to distinguish chemical engineers as a profession independent of chemists and mechanical engineers. History TIChE was established to secure a chemical engineering professional certificate separate from industrial engineering. A conference in 1990 was the first effort to establish the organization, through the cooperation of the Department of Chemical Engineering and the Department of Chemical Technology, Chulalongkorn University, and the Department of Chemical Engineering, King Mongkut's University of Technology Thonburi. At the 4th conference, held at Khon Kaen University in 1994, TIChE was formally established, and it gained legal recognition on November 15, 1996. TIChE now comprises 18 university members. The Objectives of TIChE To promote and support the chemical engineering and chemical technology profession. To promote and support the educational standard of chemical engineering and chemical technology. To encourage cooperation and industrial development, including research and knowledge. To disseminate knowledge and provide consulting in chemical engineering and chemical technology. To act as an agent of the chemical engineering and chemical technology profession in cooperating with other organizations. University Members (sorted alphabetically) Burapha University Department of Chemical Engineering Chiang Mai University Department of Industrial Chemistry Chulalongkorn University Department of Chemical Engineering Department of Chemical Technology The Petroleum and Petrochemical College Kasetsart University Department of Chemical Engineering Khon Kaen University Department of Chemical Engineering King Mongkut's Institute of Technology Ladkrabang Department of Chemical Engineering King Mongkut's University of Technology North Bangkok Department of Chemical Engineering Department of Industrial Chemistry King Mongkut's University of Technology Thonburi Department of Chemical Engineering Mahanakorn University of Technology Department of Chemical Engineering Mahidol University Department of Chemical Engineering Prince of Songkla University Department of Chemical Engineering Rajamangala University of Technology Thanyaburi (Klong 6) Department of Chemical Engineering Rangsit University Department of Chemical and Material Engineering Silpakorn University Department of Chemical Engineering Srinakharinwirot University Department of Chemical Engineering Suranaree University of Technology School of Chemical Engineering Thammasat University Department of Chemical Engineering Ubon Ratchathani University Department of Chemical Engineering List of Conference Meetings 7th International Thai Institute of Chemical Engineering and Applied Chemistry Conference (ITIChE 2017) and The 27th National Thai Institute of Chemical Engineering and Applied Chemistry Conference (TIChE 2017) 18–20 October 2017 Shangri-La Hotel, Bangkok The 18th Thailand Chemical Engineering and Applied Chemistry Conference. 20–21 October 2008 at Jomtien Palm Beach Resort, Cholburi. Host: Department of Chemical Engineering, Mahidol University, Nakhon Pathom. The 17th Thailand Chemical Engineering and Applied Chemistry Conference. 29–30 October 2007 at The Empress Hotel, Chiang Mai. Host: Department of Industrial Chemistry, Chiang Mai University, Chiang Mai. The 16th Thailand Chemical Engineering and Applied Chemistry Conference. 26–27 October 2006 at Rama Garden Hotel, Bangkok.
Host: Department of Chemical Engineering, Kasetsart University, Bangkok. The 15th Thailand Chemical Engineering and Applied Chemistry Conference. 27–28 October 2005 at Jomtien Palm Beach Resort, Cholburi. Host: Department of Chemical Engineering, Burapha University, Cholburi. The 14th Thailand Chemical Engineering and Applied Chemistry Conference. Host: Department of Chemical Engineering, King Mongkut's Institute of Technology North Bangkok, Bangkok. The 13th Thailand Chemical Engineering and Applied Chemistry Conference. 30–31 October 2003 at Royal Hill Resort and Golf Course, Nakhon Nayok. Host: Department of Chemical Engineering, Srinakharinwirot University, Nakhon Nayok. Website References Chemical engineering organizations Professional associations based in Thailand Thai Institute of Chemical Engineering Research institutes established in 1994 Scientific organizations established in 1994 1994 establishments in Thailand
Thai Institute of Chemical Engineering and Applied Chemistry
Chemistry,Engineering
769
75,383,520
https://en.wikipedia.org/wiki/Basima%20Abdulrahman
Basima Abdulrahman (born 1986/1987) is a Kurdish Iraqi structural engineer and the founder of KESK (meaning Green in Kurdish), an Iraqi company specialized in eco-friendly architecture. Early life and education Abdulrahman's parents moved to Baghdad, Iraq from southern Turkey; she was born in Iraq, and has both Turkish and Kurdish heritage. In 2006, the Iraqi conflict drove her family to relocate to the Kurdistan region of northern Iraq. As a result, Abdulrahman learned more about and became closer to her Kurdish heritage. As a child, Abdulrahman's family encouraged her to become a doctor, but she disliked biology, instead preferring math and physics. In 2011, Abdulrahman applied for a Fulbright Scholarship to study in the United States. Abdulrahman attended Auburn University in the United States, where she earned a master's degree in structural and civil engineering, graduating in 2014. She returned to the United States in 2016, where she completed a program by the US Green Building Council to become an accredited professional. Career When she returned to Iraq in 2015, Abdulrahman initially worked as a structural engineer for the United Nations. In 2017, Abdulrahman founded KESK Green Building Consulting, the first Iraqi company to focus on "green" architecture. It took Abdulrahman nine months before she was able to find her first client. KESK combines modern environmentally-friendly building techniques with ancient techniques, such as building dome-shaped homes from clay bricks. The company also seeks to provide alternative energy sources to communities, particularly solar energy, in response to Iraq's unstable power grid. The company was also founded in part to assist with reconstruction following the war against the Islamic State, which began in 2014. Abdulrahman also works for the UN's Food and Agriculture Organization as a national consultant and project manager, and as vice curator for the Global Shapers Erbil Hub, an initiative of the World Economic Forum. Recognition In 2021, Abdulrahman was one of eight entrepreneurs who won the Cartier Women's Initiative Award, with Abdulrahman representing the "Middle East & North Africa" category. She received $100,000 in prize money. In November 2023, Abdulrahman was named to the BBC's 100 Women list. Personal life As of 2019, Abdulrahman is based in Erbil. References 1980s births Living people 21st-century businesswomen 21st-century engineers 21st-century Iraqi people 21st-century Iraqi women 21st-century women engineers Auburn University alumni Iraqi businesspeople Iraqi civil engineers Iraqi environmentalists Iraqi Kurdish people Iraqi Kurdish women Iraqi people of Turkish descent People from Baghdad People from Erbil Structural engineers Women environmentalists
Basima Abdulrahman
Engineering
543
28,867,648
https://en.wikipedia.org/wiki/DO-178C
DO-178C, Software Considerations in Airborne Systems and Equipment Certification is the primary document by which the certification authorities such as FAA, EASA and Transport Canada approve all commercial software-based aerospace systems. The document is published by RTCA, Incorporated, in a joint effort with EUROCAE, and replaces DO-178B. The new document is called DO-178C/ED-12C; it was completed in November 2011 and approved by the RTCA in December 2011. It became available for sale and use in January 2012. Except for FAR 33/JAR E, the Federal Aviation Regulations do not directly reference software airworthiness. On 19 July 2013, the FAA approved AC 20-115C, designating DO-178C a recognized "acceptable means, but not the only means, for showing compliance with the applicable FAR airworthiness regulations for the software aspects of airborne systems and equipment certification." Background Since the release of DO-178B, there had been strong calls by FAA Designated Engineering Representatives (DERs) for clarification and refinement of the definitions and boundaries between the key DO-178B concepts of high-level requirements, low-level requirements, and derived requirements, and for a better definition of the exit/entry criteria between systems requirements and system design (see ARP4754) and between software requirements and software design (which is the domain of DO-178B). Other concerns included the meaning of verification in a model-based development paradigm and considerations for replacing some or all software testing activities with model simulation or formal methods. DO-178C and the companion documents DO-278A (Ground Systems), DO-248C (additional information with rationale for each DO-178C objective), DO-330 (Tool Qualification), DO-331 (Modeling), DO-332 (Object Oriented), and DO-333 (Formal Methods) were created to address these issues. The SC-205 members worked with the SAE S-18 committee to ensure that ARP4754A and the above-noted DO-xxx documents provide a unified and linked process with complementary criteria. Overall, DO-178C keeps most of the DO-178B text, which has raised concerns that issues with DO-178B, such as the ambiguity about the concept of low-level requirements, may not be fully resolved. Committee organization The RTCA/EUROCAE joint committee work was divided into seven subgroups: SG1: SCWG Document Integration SG2: Issues and Rationale SG3: Tool Qualification SG4: Model Based Development and Verification SG5: Object-Oriented Technology SG6: Formal Methods SG7: Safety Related Considerations The Model Based Development and Verification subgroup (SG4) was the largest of the working groups. All work was collected and coordinated via a website that served as a collaborative work management mechanism. Working artifacts and draft documents were held in a restricted area available to group members only. The work focused on bringing DO-178B/ED-12B up to date with respect to current software development practices, tools, and technologies. Software level The Software Level, also known as the Development Assurance Level (DAL) or Item Development Assurance Level (IDAL) as defined in ARP4754 (DO-178C only mentions IDAL as synonymous with Software Level), is determined from the safety assessment process and hazard analysis by examining the effects of a failure condition in the system. The failure conditions are categorized by their effects on the aircraft, crew, and passengers. Catastrophic - Failure may cause deaths, usually with loss of the aircraft.
Hazardous - Failure has a large negative impact on safety or performance, or reduces the ability of the crew to operate the aircraft due to physical distress or a higher workload, or causes serious or fatal injuries among the passengers. Major - Failure significantly reduces the safety margin or significantly increases crew workload. May result in passenger discomfort (or even minor injuries). Minor - Failure slightly reduces the safety margin or slightly increases crew workload. Examples might include causing passenger inconvenience or a routine flight plan change. No Effect - Failure has no impact on safety, aircraft operation, or crew workload. DO-178C alone is not intended to guarantee software safety aspects. Safety attributes in the design, and as implemented in functionality, must receive additional mandatory system safety tasks to drive and show objective evidence that explicit safety requirements are met. Certification authorities require, and DO-178C specifies, that the correct DAL be established using these comprehensive analysis methods, yielding the software levels A through E. "The software level establishes the rigor necessary to demonstrate compliance" with DO-178C. Any software that commands, controls, and monitors safety-critical functions should receive the highest DAL - Level A. The number of objectives to be satisfied (some with independence) is determined by the software level A-E. The phrase "with independence" refers to a separation of responsibilities where the objectivity of the verification and validation processes is ensured by virtue of their "independence" from the software development team. For objectives that must be satisfied with independence, the person verifying the item (such as a requirement or source code) may not be the person who authored the item, and this separation must be clearly documented. Processes and documents Processes are intended to support the objectives, according to the software level (A through D; Level E was outside the purview of DO-178C). Processes are described as abstract areas of work in DO-178C, and it is up to the planners of a real project to define and document the specifics of how a process will be carried out. On a real project, the actual activities that will be done in the context of a process must be shown to support the objectives. These activities are defined by the project planners as part of the Planning process. This objective-based nature of DO-178C allows a great deal of flexibility in regard to following different styles of software life cycle. Once an activity within a process has been defined, it is generally expected that the project respect that documented activity within its process. Furthermore, processes (and their concrete activities) must have well-defined entry and exit criteria, according to DO-178C, and a project must show that it is respecting those criteria as it performs the activities in the process. The flexible nature of DO-178C's processes and entry/exit criteria makes it difficult to implement the first time, because these aspects are abstract and there is no "base set" of activities from which to work. The intention of DO-178C was not to be prescriptive. There are many possible and acceptable ways for a real project to define these aspects. This can be difficult the first time a company attempts to develop a civil avionics system under this standard, and it has created a niche market for DO-178C training and consulting.
For a generic DO-178C based process, Stages of Involvement (SOI) are the minimum gates at which a Certification Authority gets involved in reviewing a system or sub-system, as defined by EASA in the Certification Memorandum SWCEH – 002: SW Approval Guidelines and by the FAA in Order 8110.49: SW Approval Guidelines. Traceability DO-178 requires documented bidirectional connections (called traces) between the certification artifacts. For example, a Low Level Requirement (LLR) is traced up to the High Level Requirement (HLR) it is meant to satisfy, while it is also traced to the lines of source code meant to implement it, the test cases meant to verify the correctness of the source code with respect to the requirement, the results of those tests, etc. A traceability analysis is then used to ensure that each requirement is fulfilled by the source code, that each functional requirement is verified by test, that each line of source code has a purpose (is connected to a requirement), and so forth. Traceability analysis assesses the system's completeness. The rigor and detail of the certification artifacts is related to the software level.
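To make the idea of bidirectional traces concrete, here is a hedged C sketch of one possible trace record; DO-178C prescribes that the traces exist but not any particular encoding, and every identifier below is invented for illustration:

```c
#include <stdio.h>

/* Hypothetical trace record linking one low-level requirement upward to
   its high-level requirement and downward to code and test artifacts.
   Field names and IDs are illustrative, not taken from the standard. */
typedef struct {
    const char *llr;        /* low-level requirement, e.g. "LLR-42" */
    const char *hlr;        /* upward trace: the HLR it satisfies */
    const char *source_ref; /* downward trace: implementing source lines */
    const char *test_case;  /* verification trace: covering test case */
} TraceRecord;

int main(void) {
    TraceRecord t = {"LLR-42", "HLR-7", "sensor.c:120-145", "TC-301"};
    /* A traceability analysis walks such records in both directions:
       every requirement must reach code and a passing test, and every
       line of code must trace back to some requirement. */
    printf("%s <-> %s <-> %s <-> %s\n",
           t.hlr, t.llr, t.source_ref, t.test_case);
    return 0;
}
```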
All guidance in these supplements is written in the context of the affected guidance elements in DO-178C and so should be considered to be at the same level of authority as that core document. DO-331 "Model-Based Development and Verification Supplement to DO-178C and DO-278A" - addressing Model-Based Development (MBD) and verification and the ability to use modeling techniques to improve development and verification while avoiding pitfalls inherent in some modeling methods DO-332 "Object-Oriented Technology and Related Techniques Supplement to DO-178C and DO-278A" - addressing object-oriented software and the conditions under which it may be used DO-333 "Formal Methods Supplement to DO-178C and DO-278A" - addressing formal methods to complement (but not replace) testing Guidelines vs. guidance DO-178B was not completely consistent in the use of the terms guidelines and guidance within the text. "Guidance" conveys a slightly stronger sense of obligation than "guidelines". As such, with DO-178C, the SCWG settled on the use of "guidance" for all the statements that are considered "recommendations", replacing the remaining instances of "guidelines" with "supporting information" and using that phrase wherever the text is more "information" oriented than "recommendation" oriented. The entire DO-248C/ED-94C document, Supporting Information for DO-178C and DO-278A, falls into the "supporting information" category, not guidance. Sample text difference between DO-178B and DO-178C Chapter 6.1 defines the purpose of the software verification process. DO-178C adds the following statements about the Executable Object Code: "The Executable Object Code satisfies the software requirements (that is, intended function), and provides confidence in the absence of unintended functionality." "The Executable Object Code is robust with respect to the software requirements such that it can respond correctly to abnormal inputs and conditions." As a comparison, DO-178B states the following with regard to the Executable Object Code: "The Executable Object Code satisfies the software requirements." The additional Revision C clarification filled a gap that a software developer could have encountered when interpreting the Revision B document. See also DO-178B DO-248C, Supporting Information for DO-178C and DO-278A Modified condition/decision coverage References External links DO-178C Glossary, 2023 2011 introductions RTCA standards Computer standards Avionics Safety engineering Software requirements
DO-178C
Technology,Engineering
2,651
46,231,793
https://en.wikipedia.org/wiki/Penicillium%20indonesiae
Penicillium indonesiae is a species of the genus of Penicillium. References indonesiae Fungi described in 1980 Fungus species
Penicillium indonesiae
Biology
29
3,321,017
https://en.wikipedia.org/wiki/Virtual%20POS
Virtual Point of Sale (vPOS) systems represent a significant leap in retail technology, allowing businesses to process transactions via cloud-based platforms. By integrating software with internet-enabled devices, vPOS systems enhance flexibility, operational efficiency, and customer experiences. This article explores their functionality, advantages, challenges, and the evolving landscape of virtual payment solutions. Overview of Virtual POS Systems Unlike traditional point-of-sale setups, virtual POS systems eliminate the need for dedicated hardware, relying instead on software and internet connectivity. These systems are widely used across industries, from small businesses to global retailers, to streamline transactions and integrate with broader business operations. Types of Virtual POS Systems Virtual POS systems cater to diverse business needs with several configurations: Mobile POS: Smartphone-based solutions, often paired with card readers, are ideal for small-scale and mobile businesses. Tablet POS: These systems, featuring larger screens, are suitable for cafes and boutiques requiring advanced features like inventory tracking. Self-Service Kiosks: Used in environments like fast food or retail, these systems allow customers to complete transactions without staff assistance. Cashierless Systems: Examples such as Tesco's GetGo stores employ apps and sensors to enable fully automated shopping experiences. Multichannel POS: Integrates online and offline sales, ensuring inventory synchronization and consistent customer experiences. Benefits of Virtual POS Systems Cost Efficiency: Reduces reliance on expensive hardware, making it accessible for small businesses. Enhanced Customer Experience: Offers diverse payment options, including digital wallets and cryptocurrencies, catering to consumer preferences. Operational Integration: Combines inventory management, sales analytics, and customer engagement tools, providing a centralized system. Data-Driven Insights: Real-time analytics support strategic decisions, such as demand forecasting and inventory control. Challenges and Considerations Cybersecurity Risks: Virtual transactions are vulnerable to hacking and require robust data protection measures. Adoption Costs: Initial setup and employee training may be barriers for smaller businesses. System Integration: Compatibility with legacy systems can complicate adoption for established businesses. Future Trends Emerging technologies like Augmented Reality (AR) and Virtual Reality (VR) are poised to redefine vPOS functionality. For instance, Zara's AR apps demonstrate how retail can integrate immersive experiences with seamless payment options. Additionally, the increasing popularity of cryptocurrency further diversifies payment mechanisms. Conclusion Virtual POS systems are reshaping the retail landscape by offering adaptable, data-driven, and customer-centric transaction solutions. As businesses continue to adopt cashless and digital-first approaches, vPOS systems stand as a cornerstone of modern commerce. References Retail point of sale systems
Virtual POS
Technology
551
32,959,753
https://en.wikipedia.org/wiki/HVTN%20505
HVTN 505 is a clinical trial testing an HIV vaccine regimen on research participants. The trial is conducted by the HIV Vaccine Trials Network and sponsored by the National Institute of Allergy and Infectious Diseases. Vaccinations were stopped in April 2013 due to initial results showing that the vaccine was ineffective both in preventing HIV infection and in lowering viral load among those participants who had become infected with HIV. All study participants will continue to be monitored for safety and any long-term effects. Organizers The study is sponsored by the National Institute of Allergy and Infectious Diseases (NIAID), and the HIV Vaccine Trials Network (HVTN) is conducting the trial. The Vaccine Research Center (VRC) developed the vaccines being researched in the trial. The research sites were in the following places: Annandale, Virginia Atlanta Aurora, Colorado Bethesda, Maryland Birmingham, Alabama Boston Chicago Cleveland Dallas Decatur, Georgia Houston Los Angeles Orlando Nashville New York City Philadelphia Rochester, New York San Francisco Seattle Purpose HVTN 505 is being conducted to determine the safety and efficacy of a Vaccine Research Center DNA/rAd5 vaccine regimen in healthy males and male-to-female transgender persons who have sex with men. All participants must be fully circumcised, and must have no evidence of previous infection with Adenovirus 5, which is a common virus that causes colds and respiratory infections. Potential participants were tested for antibodies to Adenovirus 5 as part of the screening process to determine their eligibility. When the study began, the primary outcome being measured was whether the vaccine decreased the viral load (the amount of HIV in the blood) of study participants who received the vaccine and later became infected with HIV. At that time, researchers stated that the vaccine was very unlikely to provide any protection from HIV infection. In August 2011, because of new data from other clinical trials, NIAID shifted the focus of the study to determine whether vaccination was also able to prevent HIV infection. As a result of this change to the research questions, NIAID also announced an expansion in the desired enrollment to a total of 2,200 participants. The study was further expanded to 2,500 participants in 2012 to ensure that there would be enough data to meaningfully answer the research questions. Study design HVTN 505 is a phase IIB, randomized, placebo-controlled, double-blind clinical trial. The original 2009 design was for 1,350 volunteers to participate and for half to get the experimental vaccine and half to get placebo. The study's enrollment target was expanded to 2,200 in 2011 to gather additional data which would allow researchers to determine the extent to which the vaccine regimen also protected against infection. When the vaccinations were stopped on April 23, 2013, the study had enrolled 2,504 volunteers at 21 sites in 19 cities in the United States. Volunteers wanting to join the trial had to meet the following criteria: must be a man who has sex with men or a trans woman (with or without sex reassignment surgery) who has sex with men, between 18 and 50 years old, HIV negative, fully circumcised, and without detectable antibodies to adenovirus type 5 (which would mean that the person had no evidence of prior adenovirus type 5 infection). The criteria about circumcision and adenovirus antibodies were added as a precaution in light of the results of the prior STEP study.
In STEP, uncircumcised men with Ad 5 antibodies contracted HIV more often than the control group, and HVTN 505 researchers responded by only recruiting circumcised men with no Ad 5 antibodies. The study regimen started with a set of three immunizations over eight weeks. These three injections were with a DNA vaccine intended to prime the immune system. This vaccine contained genetic material artificially modeled after - but not containing or derived from - surface and internal structures of HIV. Twenty-four weeks (six months) after a volunteer began the study regimen, that person would get a single injection of the study vaccine. This vaccine was a recombinant vaccine using adenovirus 5 as a live vector, carrying artificial genetic material matching HIV antigens of the three major HIV subtypes. Vaccinations stopped On 22 April 2013, the independent data safety monitoring board (DSMB) conducted a scheduled interim review of unblinded data from the study. They concluded that the vaccine regimen had met the definitions for futility that were stated in the study protocol. As a result, they recommended that researchers no longer administer any study injections, and the HVTN and NIAID agreed. Vaccinations were halted the following day, April 23, 2013. In addition, HVTN and NIAID felt that the participants should be told whether they had received the experimental vaccine regimen (unblinded), and the study sites began contacting participants on April 26 to provide this information. The DSMB review also noted that more persons who received the vaccine became infected than those in the control group - 41 persons among the vaccinated and 30 among the placebo recipients. Further consideration only of participants who were diagnosed after having been in the study 28 weeks - which is the time required for the vaccine regimen to reach its potential - found that the vaccine group had 27 HIV infections compared to 21 infections in the placebo group. These differences are not statistically significant, but all participants were asked to remain in the study for the full time planned so that researchers could monitor their safety and continue to learn as much as possible.
Scientific results Even though the vaccines did not work the way the study designers hoped, valuable scientific information continues to be gleaned from the information and specimens gathered as part of this study. For example, people with one particularly strong type of immune response to the vaccines seemed to be less likely to become infected with HIV than those who did not have this kind of immune response. This suggests that the vaccines did make a difference, even though they did not prevent HIV infection overall. The raw datasets used in published analyses from HVTN 505 are now publicly available. These include data used in the original efficacy analysis, as well as recent studies describing the effects of the vaccines used on the immune responses of participants. References External links official page at ClinicalTrials.gov HVTN's 505 flyer a page of Frequently Asked Questions about the study HIV vaccine research Clinical trials related to HIV Clinical trials sponsored by NIAID 2010s in science
HVTN 505
Chemistry
1,576
30,871,833
https://en.wikipedia.org/wiki/AN/FLR-9
The AN/FLR-9 is a type of very large circularly disposed antenna array, built at eight locations during the Cold War for HF/DF direction finding of high-priority targets. The worldwide network, known collectively as "Iron Horse", could locate HF communications almost anywhere on Earth. Because of the exceptionally large size of its outer reflecting screen (1,056 vertical steel wires supported by 96 towers), the FLR-9 was commonly referred to by the nickname "Elephant Cage". The arrays were constructed in the early to mid 1960s; in May 2016 the last operational FLR-9, at Joint Base Elmendorf-Richardson in Alaska, was decommissioned. It can be confused with the US Navy's AN/FRD-10, which also used a circularly disposed antenna array. Description The AN/FLR-9 Operation and Service Manual describes the array as follows: The antenna array is composed of three concentric rings of antenna elements. Each ring of elements receives RF signals for an assigned portion of the 1.5 to 30-MHz radio spectrum. The outer ring normally covers the 2 to 6-MHz range (band A), but also provides reduced coverage down to 1.5 MHz. The center ring covers the 6 to 18-MHz range (band B) and the inner ring covers the 18 to 30-MHz range (band C). Band A contains 48 sleeve monopole elements spaced 7.5 degrees apart. Band B contains 96 sleeve monopole elements spaced 37.5 feet (11.43 m) apart (3.75 degrees). Band C contains 48 antenna elements mounted on wooden structures placed in a circle around the central building. Band A and B elements are vertically polarized. Band C elements consist of two horizontally polarized dipole antenna subelements electrically tied together, and positioned one above the other. The array is centered on a ground screen 1,443 feet (439.8 m) in diameter. The arrangement permits accurate direction finding of signals from up to 4,000 nautical miles (7,408 km) away. FLR-9s were constructed at the following places: USASA Field Station Augsburg (Gablingen Kaserne), Germany Chicksands, England Clark AB, Philippines Joint Base Elmendorf-Richardson, Alaska, USA (formerly designated as Elmendorf AFB) Karamursel, Turkey 7th Radio Research Field Station/Ramasun Station, Udon Thani Province, Thailand Misawa AB, Japan, built 1963 to 1965, demolished beginning in 2014. San Vito dei Normanni Air Station, Italy (near Brindisi, Italy) Advances in technology have made the FLR-9 obsolete. In 1997, the FLR-9 at the former Clark AB in the Philippines was converted into a 35,000-seat fabric-covered amphitheatre. In early May 2002, systematic dismantling of the FLR-9 at San Vito began, and it was totally deconstructed by the end of that month. Although the markings of where the array stood remain in the ground, the structure is completely gone. Demolition of the FLR-9 at Misawa began in October 2014. A decommissioning ceremony for the last active FLR-9, at Joint Base Elmendorf-Richardson, was held on May 25, 2016.
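As a back-of-the-envelope check on the band B figures quoted above (a hedged derivation: the resulting diameter is computed here, not taken from the source), 96 elements at 37.5-foot spacing imply a ring circumference of 3,600 feet:

```c
#include <stdio.h>

int main(void) {
    const double pi = 3.14159265358979;
    const int elements = 96;            /* band B element count */
    const double spacing_ft = 37.5;     /* quoted element spacing */

    double circumference = elements * spacing_ft;   /* 3600 ft */
    double diameter = circumference / pi;           /* roughly 1146 ft */

    printf("circumference: %.0f ft\n", circumference);
    printf("implied ring diameter: %.0f ft\n", diameter);
    printf("angular spacing: %.2f degrees\n", 360.0 / elements);  /* 3.75 */
    return 0;
}
```

The implied ring diameter of roughly 1,150 feet is consistent with the band sitting inside the 1,443-foot ground screen.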
See also Signals intelligence High-frequency direction finding List of military electronics of the United States References External links AN/FLR-9 Operation and Service manual USASA Field Station Augsburg – history of the Augsburg site Extreme close-up hi-res aerial photo of FLR-9 at Chicksands just before being dismantled in 1996 FLR-9 at Ramasun Station, Udon Thani, Thailand FLR-9 at Elmendorf AFB, Alaska FLR-9 at Gablingen, Augsburg, Germany FLR-9 at Misawa, Honshū Island, Japan FLR-9 site at Chicksands, Bedfordshire, UK FLR-9 site at Clark AFB, Philippines FLR-9 site at Karamursel AS, Turkey FLR-9 site at San Vito dei Normanni AS, Italy AN/FLR-9 information on FAS.org FLR-9 photo collection on FTVA.org Automatic identification and data capture Military radio systems of the United States Radio frequency antenna types Radio-frequency identification Surveillance Military electronics of the United States Antennas (radio) Telecommunications equipment of the Cold War
AN/FLR-9
Technology,Engineering
901
47,229,205
https://en.wikipedia.org/wiki/Penicillium%20quebecense
Penicillium quebecense is a species of fungus in the genus Penicillium. References quebecense Fungi described in 2011 Fungus species
Penicillium quebecense
Biology
30
9,731,945
https://en.wikipedia.org/wiki/Probability%20matching
Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time and negative examples are observed 40% of the time, then an observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances and a class label of "negative" on 40% of instances. The optimal Bayesian decision strategy (to maximize the number of correct predictions) in such a case is to always predict "positive", i.e., to predict the majority category in the absence of other information, which has a 60% chance of being correct, whereas matching has only a 52% chance (where $p$ is the probability of a positive realization, the expected accuracy of matching is $p^2 + (1-p)^2$; here $0.6^2 + 0.4^2 = 0.52$). The probability-matching strategy is of psychological interest because it is frequently employed by human subjects in decision and classification studies (where it may be related to Thompson sampling). The only case in which probability matching yields the same expected accuracy as the Bayesian decision strategy mentioned above is when all class base rates are the same. So, if in the training set positive examples are observed 50% of the time, then the Bayesian strategy yields 50% accuracy (1 × 0.5), just as probability matching does (0.5 × 0.5 + 0.5 × 0.5). References Shanks, D. R., Tunney, R. J., & McCarthy, J. D. (2002). A re-examination of probability matching and rational choice. Journal of Behavioral Decision Making, 15(3), 233-250. Statistical classification Machine learning Decision-making Cognitive science Cognitive biases
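The accuracy gap above is easy to reproduce by simulation. A short, hedged C sketch (the base rate p = 0.6 comes from the running example; the trial count and random-number handling are simplified choices for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const double p = 0.6;          /* base rate of the positive class */
    const int trials = 1000000;
    int match_ok = 0, max_ok = 0;
    srand(42);

    for (int t = 0; t < trials; t++) {
        int outcome = ((double)rand() / RAND_MAX) < p;  /* positive w.p. p */
        int guess   = ((double)rand() / RAND_MAX) < p;  /* matching strategy */
        match_ok += (guess == outcome);
        max_ok   += (outcome == 1);  /* maximizing: always guess positive */
    }
    printf("matching:   %.3f (theory p^2+(1-p)^2 = %.3f)\n",
           (double)match_ok / trials, p * p + (1 - p) * (1 - p));
    printf("maximizing: %.3f (theory p = %.3f)\n",
           (double)max_ok / trials, p);
    return 0;
}
```

Running it prints accuracies near 0.520 for matching and 0.600 for always predicting the majority class, in line with the formulas above.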
Probability matching
Engineering
356
5,228,690
https://en.wikipedia.org/wiki/Environmental%20impact%20design
Environmental impact design (EID) is the design of development projects so as to achieve positive environmental objectives that benefit the environment and raise the stock of public goods. Examples Examples of EID include: Habitat creation as a result of afforestation projects that can "expand forest resources and reduce the gap between timber production and consumption." An example is the China Afforestation Project. Coastal management projects that strengthen biodiversity and promote sustainable use of biological resources. Flood defense projects that improve livability in flood-prone areas by reducing future losses. Flood preparedness and mitigation systems can aid in handling periodic flooding. Bridge designs, such as concrete bridges, that are sustainable, recyclable, durable and can be built quickly, reducing greenhouse gas emissions caused by traffic delays and construction equipment. Types Environmental impact design impacts can be broken down into three types: Direct impacts: caused by the project and building process, such as land consumption, erosion and loss of vegetation. Indirect impacts: side effects of a project, such as degradation of surface water quality from erosion of land cleared as a result of the project. Over time, indirect impacts can affect larger geographical areas. Cumulative impacts: synergistic effects such as the impairment of the water regulation and filtering capabilities of wetland systems due to construction. Environmental impacts of design must consider the site of the project. Environmental impact design should address issues revealed by environmental impact assessments (EIA). EID looks for ways to minimize costs to the developer while maximizing the benefit to the environment. Construction Historically in construction, the needs of the owner were paramount, as constrained by local laws and policies, such as building safety and zoning. EID broadens those concerns to encompass environmental impacts. Low-impact development and ecologically focused building practices originated in Germany following World War II. The widespread destruction and a large homeless population gave Germans the chance to refocus building practices. Prefabrication was adopted in both East and West Germany, where, in the 1950s and 60s, modular construction systems were developed for residential buildings. International programs In 1992, at the Earth Summit, policy makers adopted Agenda 21, which focused on sustainable development. In 1996, the UN Conference on Human Settlements Habitat II discussed transferring sustainable building practices to an urban scale. From 1999 to 2003, the U.S. Green Building Council kick-started Leadership in Energy and Environmental Design (LEED), which is now the most well-known standard for green building. Building life cycle The "building life cycle" is an approach to design that considers environmental impacts such as pollution and energy consumption over the life of the building. This theory evolved into the idea of cradle-to-cradle design, which adds the notion that at the end of a building's life, it should be disposed of without environmental impact. The Triple Zero standard requires lowering energy, emissions and waste to zero. A successful life-cycle building adopts approaches such as the use of recycled materials in the construction process as well as green energy.
See also Environmental impact assessment Hydropower Sustainability Assessment Protocol Landscape planning Phytoremediation References Environmental design Environmental terminology Design
Environmental impact design
Engineering
628
11,100,985
https://en.wikipedia.org/wiki/Equipossibility
Equipossibility is a philosophical concept in possibility theory that is a precursor to the notion of equiprobability in probability theory. It is used to distinguish what can occur in a probability experiment. For example, it is the difference between viewing the possible results of rolling a six-sided die as {1,2,3,4,5,6} rather than {6, not 6}. The former (equipossible) set contains equally possible alternatives, while the latter does not, because there are five times as many alternatives inherent in 'not 6' as in 6. This is true even if the die is biased so that 6 and 'not 6' are equally likely to occur (equiprobability). The Principle of Indifference of Laplace states that equipossible alternatives may be accorded equal probabilities if nothing more is known about the underlying probability distribution. However, it is a matter of contention whether the concept of equipossibility, also called equispecificity (from equispecific), can truly be distinguished from the concept of equiprobability. In Bayesian inference, one definition of equipossibility is "a transformation group which leaves invariant one's state of knowledge". Equiprobability is then defined by normalizing the Haar measure of this symmetry group. This is known as the principle of transformation groups. References External links Book Chapter by Henry E. Kyburg Jr. on equipossibility, with the 6/not-6 example above Quotes on equipossibility in classical probability Probability interpretations Possibility
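In the die example above, the Principle of Indifference is applied to the equipossible set {1,2,3,4,5,6}, not to {6, not 6}; a short worked calculation shows why the two choices disagree:

$$ P(6) = \tfrac{1}{6}, \qquad P(\text{not } 6) = P(1) + P(2) + P(3) + P(4) + P(5) = \tfrac{5}{6}, $$

whereas treating {6, not 6} itself as equipossible would assign each alternative probability 1/2, conflating one alternative with five.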
Equipossibility
Mathematics
337
216,884
https://en.wikipedia.org/wiki/Inline%20expansion
In computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function. Inline expansion is similar to macro expansion, but occurs during compilation, without changing the source code (the text), while macro expansion occurs prior to compilation and results in different text that is then processed by the compiler. Inlining is an important optimization, but has complicated effects on performance. As a rule of thumb, some inlining will improve speed at a very minor cost in space, but excess inlining will hurt speed, due to inlined code consuming too much of the instruction cache, and will also cost significant space. A survey of the modest academic literature on inlining from the 1980s and 1990s is given in Peyton Jones & Marlow 1999. Overview Inline expansion is similar to macro expansion in that the compiler places a new copy of the function in each place it is called. Inlined functions run a little faster than normal functions, as function-calling overhead is saved; however, there is a memory penalty. If a function is inlined 10 times, there will be 10 copies of the function inserted into the code. Hence inlining is best for small functions that are called often. In C++, the member functions of a class, if defined within the class definition, are implicitly declared inline (there is no need to use the inline keyword); otherwise, the keyword is needed. The compiler may ignore the programmer's attempt to inline a function, mainly if it is particularly large. Inline expansion is used to eliminate the time overhead (excess time) of a function call. It is typically used for functions that execute frequently. It also has a space benefit for very small functions, and it is an enabling transformation for other optimizations. Without inline functions, the compiler decides which functions to inline. The programmer has little or no control over which functions are inlined and which are not. Giving this degree of control to the programmer allows for the use of application-specific knowledge in choosing which functions to inline. Ordinarily, when a function is invoked, control is transferred to its definition by a branch or call instruction. With inlining, control drops through directly to the code for the function, without a branch or call instruction. Compilers usually implement statements with inlining. Loop conditions and loop bodies need lazy evaluation; this property is fulfilled when the code to compute loop conditions and loop bodies is inlined. Performance considerations are another reason to inline statements. In the context of functional programming languages, inline expansion is usually followed by the beta-reduction transformation. A programmer might inline a function manually through copy and paste programming, as a one-time operation on the source code. However, other methods of controlling inlining (see below) are preferable, because they avoid the bugs that arise when a programmer, while fixing a bug in the inlined function, overlooks a duplicated (and possibly modified) copy of the original function body. Effect on performance The direct effect of this optimization is to improve time performance (by eliminating call overhead), at the cost of worsening space usage (due to duplicating the function body). The code expansion due to duplicating the function body dominates, except for simple cases, and thus the direct effect of inline expansion is to improve time at the cost of space.
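To illustrate the transformation itself, a hedged C sketch (the function names are invented; the "after" version shows by hand what the compiler effectively generates):

```c
#include <stdio.h>

/* Before: a small function and its call site. */
static inline int square(int x) {
    return x * x;
}

int sum_of_squares(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += square(a[i]);       /* call site to be inlined */
    return total;
}

/* After inline expansion, the compiler effectively emits: */
int sum_of_squares_inlined(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i] * a[i];        /* body substituted; no call overhead */
    return total;
}

int main(void) {
    int a[] = {1, 2, 3};
    printf("%d %d\n", sum_of_squares(a, 3), sum_of_squares_inlined(a, 3));  /* 14 14 */
    return 0;
}
```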
However, the primary benefit of inline expansion is to allow further optimizations and improved scheduling, due to increasing the size of the function body, as better optimization is possible on larger functions. The ultimate impact of inline expansion on speed is complicated, due to multiple effects on the performance of the memory system (primarily the instruction cache), which dominates performance on modern processors: depending on the specific program and cache, inlining particular functions can increase or decrease performance. The impact of inlining varies by programming language and program, due to different degrees of abstraction. In lower-level imperative languages such as C and Fortran it is typically a 10–20% speed boost, with minor impact on code size, while in more abstract languages it can be significantly more important, due to the number of layers inlining removes, an extreme example being Self, where one compiler saw improvement factors of 4 to 55 by inlining. The direct benefits of eliminating a function call are: It eliminates instructions required for a function call, both in the calling function and in the callee: placing arguments on the stack or in registers, the function call itself, the function prologue, then at return the function epilogue, the return statement, and then getting the return value back, removing arguments from the stack and restoring registers (if necessary). Due to not needing registers to pass arguments, it reduces register spilling. It eliminates having to pass references and then dereference them, when using call by reference (or call by address, or call by sharing). The primary benefit of inlining, however, is the further optimizations it allows. Optimizations that cross function boundaries can be done without requiring interprocedural optimization (IPO): once inlining has been performed, additional intraprocedural optimizations ("global optimizations") become possible on the enlarged function body. For example: A constant passed as an argument can often be propagated to all instances of the matching parameter, or part of the function may be "hoisted out" of a loop (via loop-invariant code motion). Register allocation can be done across the larger function body. High-level optimizations, such as escape analysis and tail duplication, can be performed on a larger scope and be more effective, particularly if the compiler implementing those optimizations relies primarily on intra-procedural analysis. These can be done without inlining, but require a significantly more complicated compiler and linker (in case caller and callee are in separate compilation units). Conversely, in some cases a language specification may allow a program to make additional assumptions about arguments to procedures that it can no longer make after the procedure is inlined, preventing some optimizations. Smarter compilers (such as the Glasgow Haskell Compiler) will track this, but naive inlining loses this information. A further benefit of inlining for the memory system is: Eliminating branches and keeping code that is executed close together in memory improves instruction cache performance by improving locality of reference (spatial locality and sequentiality of instructions). This is a smaller effect than optimizations that specifically target sequentiality, but it is significant. The direct cost of inlining is increased code size, due to duplicating the function body at each call site.
However, it does not always do so, namely in the case of very short functions, where the function body is smaller than the size of a function call (at the caller, including argument and return value handling), such as trivial accessor methods or mutator methods (getters and setters); or for a function that is only used in one place, in which case it is not duplicated. Thus inlining may be minimized or eliminated if optimizing for code size, as is often the case in embedded systems. Inlining also imposes a cost on performance, due to the code expansion (due to duplication) hurting instruction cache performance. This is most significant if, prior to expansion, the working set of the program (or a hot section of code) fit in one level of the memory hierarchy (e.g., L1 cache), but after expansion it no longer fits, resulting in frequent cache misses at that level. Due to the significant difference in performance at different levels of the hierarchy, this hurts performance considerably. At the highest level this can result in increased page faults, catastrophic performance degradation due to thrashing, or the program failing to run at all. This last is rare in common desktop and server applications, where code size is small relative to available memory, but can be an issue for resource-constrained environments such as embedded systems. One way to mitigate this problem is to split functions into a smaller hot inline path (fast path), and a larger cold non-inline path (slow path). Inlining hurting performance is primarily a problem for large functions that are used in many places, but the break-even point beyond which inlining reduces performance is difficult to determine and depends in general on the precise load, so it can be subject to manual optimization or profile-guided optimization. This is a similar issue to other code-expanding optimizations such as loop unrolling, which also reduces the number of instructions processed, but can decrease performance due to poorer cache performance. The precise effect of inlining on cache performance is complicated. For small cache sizes (much smaller than the working set prior to expansion), the increased sequentiality dominates, and inlining improves cache performance. For cache sizes close to the working set, where inlining expands the working set so it no longer fits in cache, this dominates and cache performance decreases. For cache sizes larger than the working set, inlining has negligible impact on cache performance. Further, changes in cache design, such as load forwarding, can offset the increase in cache misses. Compiler support Compilers use a variety of mechanisms to decide which function calls should be inlined; these can include manual hints from programmers for specific functions, together with overall control via command-line options. Inlining is done automatically by many compilers in many languages, based on a judgment of whether inlining is beneficial, while in other cases it can be manually specified via compiler directives, typically using a keyword or compiler directive called inline. Typically this only hints that inlining is desired, rather than requiring it, with the force of the hint varying by language and compiler. Typically, compiler developers keep the above performance issues in mind, and incorporate heuristics into their compilers that choose which functions to inline so as to improve performance, rather than worsen it, in most cases.
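To make the hint mechanisms concrete, here is a minimal C sketch showing the portable inline keyword next to two compiler-specific attributes. The attribute spellings are GCC/Clang extensions (other compilers use different syntax, e.g. MSVC's __forceinline); this illustrates the hint/force/forbid spectrum rather than any particular compiler's policy.

#include <stdio.h>

/* Standard C99 hint: the compiler may, but need not, inline this. */
static inline int square(int x) { return x * x; }

/* GCC/Clang extension: inline even when optimizations are off. */
static inline int cube(int x) __attribute__((always_inline));
static inline int cube(int x) { return x * x * x; }

/* GCC/Clang extension: forbid inlining, e.g. to keep a cold slow path out of line. */
static int slow_path(int x) __attribute__((noinline));
static int slow_path(int x) { return x - 1; }

int main(void) {
    printf("%d %d %d\n", square(3), cube(3), slow_path(3));
    return 0;
}

Even the "forcing" attribute is not unconditional: if GCC cannot inline an always_inline function (for example, an unbounded recursive call), it typically reports an error rather than silently ignoring the request.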
Implementation Once the compiler has decided to inline a particular function, performing the inlining operation itself is usually simple. Depending on whether the compiler inlines functions across code in different languages, the compiler can do inlining on either a high-level intermediate representation (like abstract syntax trees) or a low-level intermediate representation. In either case, the compiler simply computes the arguments, stores them in variables corresponding to the function's arguments, and then inserts the body of the function at the call site. Linkers can also do function inlining. When a linker inlines functions, it may inline functions whose source is not available, such as library functions (see link-time optimization). A run-time system can inline functions as well. Run-time inlining can use dynamic profiling information to make better decisions about which functions to inline, as in the Java HotSpot compiler. Here is a simple example of inline expansion performed "by hand" at the source level in the C programming language:

int pred(int x) {
    if (x == 0)
        return 0;
    else
        return x - 1;
}

Before inlining:

int func(int y) {
    return pred(y) + pred(0) + pred(y+1);
}

After inlining:

int func(int y) {
    int tmp;
    if (y   == 0) tmp  = 0; else tmp  = y - 1;        /* (1) */
    if (0   == 0) tmp += 0; else tmp += 0 - 1;        /* (2) */
    if (y+1 == 0) tmp += 0; else tmp += (y + 1) - 1;  /* (3) */
    return tmp;
}

Note that this is only an example. In an actual C application, it would be preferable to use an inlining language feature such as parameterized macros or inline functions to tell the compiler to transform the code in this way. The next section lists ways to optimize this code. Inlining by assembly macro expansion Assembler macros provide an alternative approach to inlining whereby a sequence of instructions can normally be generated inline by macro expansion from a single macro source statement (with zero or more parameters). One of the parameters might be an option to alternatively generate a one-time separate subroutine containing the sequence, which is then invoked by an inlined call to the function. Example:

MOVE FROM=array1,TO=array2,INLINE=NO

Heuristics A range of different heuristics have been explored for inlining. Usually, an inlining algorithm has a certain code budget (an allowed increase in program size) and aims to inline the most valuable callsites without exceeding that budget. In this sense, many inlining algorithms are modeled after the knapsack problem. To decide which callsites are more valuable, an inlining algorithm must estimate their benefit, i.e. the expected decrease in execution time. Commonly, inliners use profiling information about the frequency of the execution of different code paths to estimate the benefits. In addition to profiling information, newer just-in-time compilers apply several more advanced heuristics, such as: Speculating which code paths will result in the best reduction in execution time (by enabling additional compiler optimizations as a result of inlining) and increasing the perceived benefit of such paths. Adaptively adjusting the benefit-per-cost threshold for inlining based on the size of the compilation unit and the amount of code already inlined. Grouping subroutines into clusters, and inlining entire clusters instead of singular subroutines. Here, the heuristic guesses the clusters by grouping those methods for which inlining just a proper subset of the cluster leads to worse performance than inlining nothing at all.
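As an illustration of the budgeted, knapsack-style selection just described, the following toy C program ranks callsites by estimated benefit per byte of code growth and inlines greedily until a size budget is exhausted. It is a deliberately simplified sketch: real compilers use far richer cost models, and the callsite names and numbers here are invented.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;   /* callsite label (invented for the example) */
    double benefit;     /* estimated cycles saved, e.g. from profiling */
    int growth;         /* estimated code-size increase in bytes */
} Callsite;

/* Order by benefit per byte of growth, descending. */
static int by_ratio(const void *a, const void *b) {
    const Callsite *x = a, *y = b;
    double rx = x->benefit / x->growth;
    double ry = y->benefit / y->growth;
    return (rx < ry) - (rx > ry);
}

int main(void) {
    Callsite sites[] = {
        {"hot_loop->pred",    900.0,  64},
        {"init->parse_args",   10.0, 256},
        {"hot_loop->square",  700.0,  16},
    };
    int n = (int)(sizeof sites / sizeof sites[0]);
    int budget = 128;   /* allowed total code growth in bytes */

    qsort(sites, n, sizeof sites[0], by_ratio);
    for (int i = 0; i < n; i++) {
        if (sites[i].growth <= budget) {   /* greedy knapsack approximation */
            budget -= sites[i].growth;
            printf("inline %s\n", sites[i].name);
        }
    }
    return 0;
}

Run on these numbers, the sketch inlines the two hot callsites (16 + 64 = 80 bytes of growth) and rejects the cold one, which is exactly the behavior the budget framing is meant to capture.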
Benefits Inline expansion itself is an optimization, since it eliminates overhead from calls, but it is much more important as an enabling transformation. That is, once the compiler expands a function body in the context of its call site (often with arguments that may be fixed constants), it may be able to do a variety of transformations that were not possible before. For example, a conditional branch may turn out to be always true or always false at this particular call site. This in turn may enable dead code elimination, loop-invariant code motion, or induction variable elimination. In the C example in the previous section, optimization opportunities abound. The compiler may follow this sequence of steps: The tmp += 0 statements in the lines marked (2) and (3) do nothing. The compiler can remove them. The condition 0 == 0 is always true, so the compiler can replace the line marked (2) with the consequent, tmp += 0 (which does nothing). The compiler can rewrite the condition y+1 == 0 to y == -1. The compiler can reduce the expression (y + 1) - 1 to y. The expressions y and y+1 cannot both equal zero. This lets the compiler eliminate one test. In statements such as if (y == 0) return y the value of y is known in the body, and can be inlined. The new function looks like:

int func(int y) {
    if (y == 0)
        return 0;
    if (y == -1)
        return -2;
    return 2*y - 1;
}

Limitations Complete inline expansion is not always possible, due to recursion: recursively inline expanding the calls will not terminate. There are various solutions, such as expanding a bounded amount, or analyzing the call graph and breaking loops at certain nodes (i.e., not expanding some edge in a recursive loop). An identical problem occurs in macro expansion, as recursive expansion does not terminate, and is typically resolved by forbidding recursive macros (as in C and C++). Comparison with macros Traditionally, in languages such as C, inline expansion was accomplished at the source level using parameterized macros. Use of true inline functions, as are available in C99, provides several benefits over this approach: In C, macro invocations do not perform type checking, or even check that arguments are well-formed, whereas function calls usually do. In C, a macro cannot use the return keyword with the same meaning as a function would do (it would cause the function that requested the expansion to terminate, rather than the macro). In other words, a macro cannot return anything which is not the result of the last expression invoked inside it. Since C macros use mere textual substitution, this may result in unintended side effects and inefficiency due to re-evaluation of arguments and order of operations. Compiler errors within macros are often difficult to understand, because they refer to the expanded code, rather than the code the programmer typed. Thus, debugging information for inlined code is usually more helpful than that of macro-expanded code. Many constructs are awkward or impossible to express using macros, or use a significantly different syntax. Inline functions use the same syntax as ordinary functions, and can be inlined and un-inlined at will with ease. Many compilers can also inline expand some recursive functions; recursive macros are typically illegal. Bjarne Stroustrup, the designer of C++, likes to emphasize that macros should be avoided wherever possible, and advocates extensive use of inline functions. Selection methods Many compilers aggressively inline functions wherever it is beneficial to do so.
Although it can lead to larger executables, aggressive inlining has nevertheless become more and more desirable as memory capacity has increased faster than CPU speed. Inlining is a critical optimization in functional languages and object-oriented programming languages, which rely on it to provide enough context for their typically small functions to make classical optimizations effective. Language support Many languages, including Java and functional languages, do not provide language constructs for inline functions, but their compilers or interpreters often do perform aggressive inline expansion. Other languages provide constructs for explicit hints, generally as compiler directives (pragmas). In the Ada programming language, there exists a pragma for inline functions. Functions in Common Lisp may be defined as inline by the inline declaration as such:

(declaim (inline dispatch))
(defun dispatch (x)
  (funcall (get (car x) 'dispatch) x))

The Haskell compiler GHC tries to inline functions or values that are small enough, but inlining may be requested explicitly using a language pragma:

key_function :: Int -> String -> (Bool, Double)
{-# INLINE key_function #-}

C and C++ C and C++ have an inline keyword which serves as a hint that inlining may be beneficial; however, in newer versions of the languages, its primary purpose is instead to alter the visibility and linking behavior of the function. See also Macro Partial evaluation Tail-call elimination Code outlining Notes References External links "Eliminating Virtual Function Calls in C++ Programs" by Gerald Aigner and Urs Hölzle "Reducing Indirect Function Call Overhead In C++ Programs" by Brad Calder and Dirk Grunwald ALTO - A Link-Time Optimizer for the DEC Alpha "Advanced techniques" by John R. Levine "Whole Program Optimization with Visual C++ .NET" by Brandon Bray Compiler optimizations Articles with example C code Subroutines Articles with example Lisp (programming language) code Articles with example Haskell code
Inline expansion
Technology
4,009
61,225,867
https://en.wikipedia.org/wiki/Selinexor
Selinexor, sold under the brand name Xpovio among others, is a selective inhibitor of nuclear export used as an anti-cancer medication. It works by blocking the action of exportin 1 and thus blocking the transport of several proteins involved in cancer-cell growth from the cell nucleus to the cytoplasm, which ultimately arrests the cell cycle and leads to apoptosis. It is the first drug with this mechanism of action. The most common side effects include nausea (feeling sick), vomiting, decreased appetite, weight loss, diarrhea, tiredness, thrombocytopenia (low blood-platelet counts), anaemia (low red-blood cell counts), low levels of white blood cells and hyponatraemia (low blood sodium levels). Selinexor was granted accelerated approval by the U.S. Food and Drug Administration (FDA) in July 2019, for use in combination with the corticosteroid dexamethasone for the treatment of adults with relapsed refractory multiple myeloma (RRMM) who have received at least four prior therapies and whose disease is resistant to several other forms of treatment, including at least two proteasome inhibitors, at least two immunomodulatory agents, and an anti-CD38 monoclonal antibody. In December 2020, selinexor was approved by the FDA in combination with bortezomib and dexamethasone for the treatment of adults with multiple myeloma who have received at least one prior therapy. In clinical trials, it was associated with a high incidence of severe side effects, including low platelet counts and low blood sodium levels. The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication. Selinexor was approved for medical use in the European Union in March 2021. Medical uses Selinexor is approved in combination with bortezomib and dexamethasone for the treatment of adults with multiple myeloma who have received at least one prior therapy. Selinexor is also approved for use in combination with the steroid dexamethasone in people with relapsed or refractory multiple myeloma who have received at least four prior therapies and whose disease is refractory to at least two proteasome inhibitors, at least two immunomodulatory agents, and an anti-CD38 monoclonal antibody (so-called "quad-refractory" or "penta-refractory" myeloma), for whom no other treatment options are available. It is the first drug to be approved for this indication. In June 2020, the U.S. Food and Drug Administration (FDA) approved an additional indication for selinexor to treat adults with relapsed or refractory diffuse large B-cell lymphoma (DLBCL), not otherwise specified, including DLBCL arising from follicular lymphoma, after at least two lines of systemic therapy. In the European Union, selinexor is indicated in combination with dexamethasone for the treatment of multiple myeloma in adults who have received at least four prior therapies and whose disease is refractory to at least two proteasome inhibitors, two immunomodulatory agents and an anti-CD38 monoclonal antibody, and who have demonstrated disease progression on the last therapy. Adverse effects In the clinical study (the BOSTON study) used to support FDA approval in patients with multiple myeloma after at least one prior therapy (once-weekly selinexor in combination with once-weekly bortezomib and dexamethasone), the most common adverse reactions were cytopenias, along with gastrointestinal and constitutional symptoms, and were consistent with those previously reported from other selinexor studies.
Most adverse reactions were manageable with dose modifications and/or standard supportive care. The most common non-hematologic adverse reactions were fatigue (59%), nausea (50%), decreased appetite (35%), and diarrhea (32%) and were mostly Grade 1 and 2 events. The most common Grade 3 and 4 adverse reactions were thrombocytopenia (43%), lymphopenia (38%), fatigue (28%) and anemia (17%). The most common adverse reactions (incidence ≥20%) in people with diffuse large B-cell lymphoma (DLBCL), excluding laboratory abnormalities, were fatigue, nausea, diarrhea, appetite decrease, weight decrease, constipation, vomiting, and pyrexia. Grade 3-4 laboratory abnormalities in ≥15% were thrombocytopenia, lymphopenia, neutropenia, anemia, and hyponatremia. Serious adverse reactions occurred in 46% of people, most often from infection. Thrombocytopenia was the leading cause of dose modifications. Gastrointestinal toxicity developed in 80% of people and any grade hyponatremia developed in 61%. Central neurological adverse reactions occurred in 25% of people, including dizziness and mental status changes. The prescribing information provides warnings and precautions for thrombocytopenia, neutropenia, gastrointestinal toxicity, hyponatremia, serious infection, neurological toxicity, and embryo-fetal toxicity. Mechanism of action Like other selective inhibitors of nuclear export (SINEs), selinexor works by binding to exportin 1 (also known as XPO1 or CRM1). XPO1 is a karyopherin which performs nuclear transport of several proteins, including tumor suppressors, oncogenes, and proteins involved in governing cell growth, from the cell nucleus to the cytoplasm; it is often overexpressed and its function misregulated in several types of cancer. By inhibiting the XPO1 protein, SINEs lead to a buildup of tumor suppressors in the nucleus of malignant cells and reduce levels of oncogene products which drive cell proliferation. This ultimately leads to cell cycle arrest and death of cancer cells by apoptosis. In vitro, this effect appeared to spare normal (non-malignant) cells. Inhibiting XPO1 affects many different cells in the body which may explain the incidence of adverse reactions to selinexor. Thrombocytopenia, for example, is a mechanistic and dose-dependent effect, occurring because selinexor causes a buildup of the transcription factor STAT3 in the nucleus of hematopoietic stem cells, preventing their differentiation into mature megakaryocytes (platelet-producing cells) and thus slowing production of new platelets. Chemistry Selinexor is a fully synthetic small-molecule compound, developed by means of a structure-based drug design process known as induced-fit docking. It binds to a cysteine residue in the nuclear export signal groove of exportin 1. Although this bond is covalent, it is slowly reversible. History Selinexor was developed by Karyopharm Therapeutics, a pharmaceutical company focused on the development of drugs that target nuclear transport. It was approved in the United States in July 2019, on the basis of a single-arm Phase IIb clinical trial. The FDA decided to grant accelerated approval despite a previous recommendation from an FDA Advisory Committee Panel which had voted 8–5 to delay approving the drug until the results from an ongoing Phase III study were known. Selinexor in combination with dexamethasone was granted accelerated approval and was granted orphan drug designation. The FDA granted the approval of Xpovio to Karyopharm Therapeutics. In June 2020, the U.S. 
Food and Drug Administration (FDA) approved an additional indication for selinexor to treat adults with relapsed or refractory diffuse large B-cell lymphoma (DLBCL), not otherwise specified, including DLBCL arising from follicular lymphoma, after at least two lines of systemic therapy. Approval was based on SADAL (KCP-330-009; NCT02227251), a multicenter, single-arm, open-label trial in participants with DLBCL after two to five systemic regimens. Participants received selinexor 60 mg orally on days one and three of each week. In December 2020, the FDA expanded selinexor's approved indication to include its combination with bortezomib and dexamethasone for the treatment of adults with multiple myeloma who have received at least one prior therapy. Society and culture Legal status On 28 January 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a conditional marketing authorization for the medicinal product Nexpovio intended for the treatment of relapsed and refractory multiple myeloma. The applicant for this medicinal product is Karyopharm Europe GmbH. Selinexor was approved for medical use in the European Union in March 2021. Research Under the codename KPT-330, selinexor was tested in several preclinical animal models of cancer, including pancreatic cancer, breast cancer, non-small-cell lung cancer, lymphomas, and acute and chronic leukemias. In humans, early clinical trials (phase I) have been conducted in non-Hodgkin lymphoma, blast crisis, and a wide range of advanced or refractory solid tumors, including colon cancer, head and neck cancer, melanoma, ovarian cancer, and prostate cancer. Compassionate use in patients with acute myeloid leukemia has also been reported. The pivotal clinical trial which served to support approval of selinexor for people with relapsed/refractory multiple myeloma was an open-label study of 122 patients known as the STORM trial. The enrolled patients had been treated with a median of seven prior treatment regimens, including conventional chemotherapy, targeted therapy with bortezomib, carfilzomib, lenalidomide, pomalidomide, and a monoclonal antibody (daratumumab or isatuximab); nearly all had also undergone hematopoietic stem cell transplantation but had disease that continued to progress. The overall response rate was 26%, including two stringent complete responses; 39% of patients had a minimal response or better. The median duration of response was 4.4 months, median progression-free survival was 3.7 months, and median overall survival was 8.6 months. As of 2019, phase I/II and III trials are ongoing, including the use of selinexor in other cancers and in combinations with other drugs used for multiple myeloma. In November 2020, results from the multi-center, Phase III, randomized study (NCT03110562) which evaluated 402 participants with relapsed or refractory multiple myeloma who had received one to three prior lines of therapy were published in The Lancet. The study was designed to compare the efficacy, safety and certain health-related quality of life parameters of once-weekly selinexor in combination with once-weekly Velcade® (bortezomib) plus low-dose dexamethasone (SVd) versus twice-weekly Velcade® plus low-dose dexamethasone (Vd). The primary endpoint of the study was progression-free survival (PFS) and key secondary endpoints included overall response rate (ORR), rate of peripheral neuropathy, and others.
Additionally, the BOSTON study allowed for patients on the Vd control arm to crossover to the SVd arm following objective (quantitative) progression of disease verified by an Independent Review Committee (IRC). The BOSTON study was conducted at over 150 clinical sites internationally. Although the study had one of the highest proportions of patients with high-risk cytogenetics (~50%) as compared with other Velcade-based studies in previously treated myeloma, the median PFS in the SVd arm was 13.93 months compared to 9.46 months in the Vd arm, representing a 4.47 month (47%) increase in median PFS (hazard ratio [HR]=0.70; p=0.0075). The SVd group also demonstrated a significantly greater ORR compared to the Vd group (76.4% vs. 62.3%, p=0.0012). Patients who had received only one prior line of therapy also demonstrated a higher ORR on the SVd arm as compared to the Vd arm (80.8% vs. 65.7%, p=0.0082). Importantly, SVd therapy compared to Vd therapy showed consistent PFS benefit and higher ORR across several important subgroups. In 2020, selinexor underwent a clinical trial for treatment of COVID-19. In this phase 2 randomized placebo-controlled single-blind trial named XPORT-CoV-1001 with a total of 190 participants with severe COVID-19, treatment with selinexor resulted in higher mortality (16% vs. 9%) and more serious adverse events (23% vs. 16%) than placebo. References External links Antineoplastic drugs Hydrazides Orphan drugs Pyrazines Teratogens Triazoles Trifluoromethyl compounds
Selinexor
Chemistry
2,852
18,527,411
https://en.wikipedia.org/wiki/Sump%20buster
A sump buster is a device installed within a bus route to limit that thoroughfare to buses. It discourages traffic from entering a lane by threatening to destroy the oil pan of any vehicle with insufficient ground clearance to get over it, making it similar in use (but not design) to rising bollards. A sump buster can also be known as a "sump breaker" or "sump trap". Sump busters were first used in the 1980s. Function The sump buster uses a non-mechanical solid mass of concrete, or sometimes other aggregates or metal, to demobilise a vehicle when access to a restricted area is attempted. When a vehicle attempts to traverse the sump buster, the device will demolish the vehicle's oil pan (literally "busting the sump"). The track and ground clearance on permitted vehicles, usually buses, is such that they may clear the device with ease. In some cases, advisory or mandatory speed limits are given. Impact on the community A major purpose of the sump buster is to prevent road systems from being used as rat runs and, to a certain extent, to deter joyriding. For this reason, devices have been vandalised (either through annoyance at their existence or in an attempt to gain passage), resulting in accidents (and injuries) to legitimate road users. In January 2005, Devon County Council dismissed an application by the Stagecoach Group for the installation of a sump buster on Tan Lane (a restricted access road) in Exeter. The Exeter Highways and Traffic Orders Committee stated that "...[using a sump buster] is not an option that the County Council could support [as] it would not differentiate between high clearance vehicles and for example cars and vans that are authorised to use the link under the current Traffic Regulation Order". Sump busters have led to serious injuries to scooter drivers and cyclists who fail to notice them. Municipalities in the Netherlands have been sued for tort after damage or injuries caused by insufficiently marked sump busters. See also Bus trap References Road infrastructure Transportation planning Road transport Road hazards
Sump buster
Technology
429
15,970,956
https://en.wikipedia.org/wiki/Haagerup%20property
In mathematics, the Haagerup property, named after Uffe Haagerup and also known as Gromov's a-T-menability, is a property of groups that is a strong negation of Kazhdan's property (T). Property (T) is considered a representation-theoretic form of rigidity, so the Haagerup property may be considered a form of strong nonrigidity; see below for details. The Haagerup property is interesting to many fields of mathematics, including harmonic analysis, representation theory, operator K-theory, and geometric group theory. Perhaps its most impressive consequence is that groups with the Haagerup property satisfy the Baum–Connes conjecture and the related Novikov conjecture. Groups with the Haagerup property are also uniformly embeddable into a Hilbert space. Definitions Let G be a second countable locally compact group. The following properties are all equivalent, and any of them may be taken to be definitions of the Haagerup property: There is a proper continuous conditionally negative definite function ψ : G → ℝ. G has the Haagerup approximation property, also known as Property C₀: there is a sequence of normalized continuous positive-definite functions on G which vanish at infinity and converge to 1 uniformly on compact subsets of G. There is a strongly continuous unitary representation of G which weakly contains the trivial representation and whose matrix coefficients vanish at infinity on G. There is a proper continuous affine isometric action of G on a Hilbert space. Examples There are many examples of groups with the Haagerup property, most of which are geometric in origin. The list includes: All compact groups (trivially). Note all compact groups also have property (T). The converse holds as well: if a group has both property (T) and the Haagerup property, then it is compact. SO(n,1) SU(n,1) Groups acting properly on trees or on ℝ-trees Coxeter groups Amenable groups Groups acting properly on CAT(0) cubical complexes Sources Representation theory Geometric group theory
Haagerup property
Physics,Mathematics
416
21,193,777
https://en.wikipedia.org/wiki/Opsi
Opsi (open PC server integration) is a software distribution and management system for Microsoft Windows clients, based on Linux servers. Opsi is developed and maintained by uib GmbH from Mainz, Germany. The main parts of Opsi are open-source licensed under the GNU Affero General Public License. Features The key features of opsi are: Automated operating system installation (OS deployment) Software distribution Patch management Inventory (hardware and software) License Management / Software Asset Management Support of multiple locations A tool for automated installations is important and necessary for standardization, maintainability and cost saving in larger PC networks. Opsi supports the client operating systems MS Windows XP, Server 2003, Windows Vista, Server 2008, Windows 7, Server 2008 R2, Server 2012, Windows 8.1, Server 2012 R2 and Windows 10. Both the 32- and the 64-bit versions are supported. For the installation of an opsi-server there are packages available for the Linux distributions Debian, Ubuntu, SLES, Univention Corporate Server, CentOS, RHEL and openSUSE. Automated operating system installation Via the management interface a client may be selected for OS installation. If the client boots via PXE it loads a boot image from the opsi-depotserver. This boot image prepares the hard disk, copies the required installation files, drivers and the opsi client agent, and finally starts an unattended OS installation. Opsi automatically detects the necessary drivers for PCI, HD audio and USB devices. OS installation via disk image is also supported. Software distribution For automatic software distribution, some software, the opsi-client-agent, has to be installed on each client. Every time the client boots, the opsi-client-agent connects to the opsi-server and asks if there is anything to install (the default behaviour). If so, a script-driven installation program (opsi-winst) starts and installs the required software on the client. During the installation process the user login can be blocked for integrity reasons. To integrate a new software package into the software deployment system, a script must be written to specify the installation process. This script provides all the information on how the software package is to be installed silently or unattended, or by using tools like AutoIt or AutoHotkey. With opsi-winst, steps like copying files or editing the registry can be performed. The opsi-client-agent can also be triggered by other events or via push installation from the opsi-server. Patch management The mechanism of the software deployment can also be used to deploy software patches and hotfixes. Inventory (hardware and software) The hardware and software inventory also uses the opsi-client-agent. The hardware information is collected via calls to WMI while the software information is gathered from the registry. The inventory data are sent back to the opsi-server by a web service. The inventory data may also be imported via a web service into a CMDB, e.g. in OTRS. License management / Software Asset Management The opsi License Management module supports the administration of different kinds of licenses like retail, OEM and volume licenses. It counts the licenses that are used with the software deployment. Using the combination of the License Management module and the software inventory, Software Asset Management reports on the number of free and installed licenses can be generated. The License Management module is part of a Co-funding Project and has not been released as open source yet.
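As an illustration of how a client might query such a web service, the following C sketch posts a JSON-RPC request using libcurl. This is a hypothetical sketch only: the endpoint path, port and the method name getProductActions are invented placeholders, not taken from the opsi documentation, and a real opsi client would also have to authenticate itself.

#include <stdio.h>
#include <curl/curl.h>   /* build with: gcc agent.c -lcurl */

int main(void) {
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Hypothetical JSON-RPC call: "what should client-01 install?" */
    const char *payload =
        "{\"method\": \"getProductActions\", \"params\": [\"client-01\"], \"id\": 1}";

    struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "https://opsi-server.example:4447/rpc");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload);

    /* By default libcurl writes the server's response body to stdout. */
    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}

On a real client, the returned list of pending product actions would then be handed to the script-driven installer (the role opsi-winst plays, as described above).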
Support of multiple locations The software to be installed can be stored on several depot servers to save bandwidth. The configuration data can be stored and edited on a single server. opsi-server The opsi-server provides the following services: The configuration-server stores the configuration data of the clients and provides the methods to manage these data via a web service or the command line. The data can be stored in files, in OpenLDAP or in a MySQL database. The depot-server stores software packages that may be installed on the clients. To provide support for multiple locations, multiple depot-servers may be controlled by one configuration-server. A TFTP server provides the boot images for the OS installations. A DHCP server may be integrated into the opsi-server. Management interface For managing opsi a graphical user interface is available as an application or as a browser applet. Management is also possible with a command-line tool or via the web service. License The opsi core features are open source according to the GNU General Public License Version 3 and are free of charge. The core features are software distribution (or software deployment), OS deployment and hardware and software inventory. These free components can be supplemented with closed-source add-ons that require the payment of a fee. They are called Co-funding Projects. Co-funding projects Even though opsi is open source, there are some components which are not free of charge at the moment. These components are developed in a co-funding project. This means that these parts are only available to those customers who paid a contribution to the cost of development. As soon as the development of a co-funding project is refinanced, the component will become part of the free opsi version and can be used free of charge. It will be open source (as long as this is not prevented by technical reasons). The first of these co-funding projects was the opsi support for Windows Vista/Windows 7. It was completed on 1 February 2008 and has been free of charge since 1 March 2010. The source code was separated from the not-yet-paid parts and has been open source since 30 November 2010. At the moment (January 2011) there are three co-funding projects: Treeview, which builds hierarchical groups of clients to manage; MySQL as backend for all data; and the license management module. The main focus of co-funding projects is to create software once for a pool of purchasers who share the cost, and to make it open source as soon as it is paid in full. References and sources External links Free software distributions Free network management software Configuration management Free software programmed in Python Software using the GNU Affero General Public License
Opsi
Engineering
1,268
6,426,950
https://en.wikipedia.org/wiki/Suxethonium%20chloride
Suxethonium (trade name: Brevidil-E) is a depolarising muscle relaxant which is presented as a dry powder in an ampoule. This is reconstituted with sterile water prior to use. It is available in Australia as a Schedule 4 drug, and in the US. Advantages It has some advantages over suxamethonium: Less K+ release Possibly less muscle pain after use Storage does not require refrigeration The last advantage is what keeps it in occasional use. Previous use In the UK, it was used by the Obstetric Flying Squads at patients' homes to resuscitate mothers during major obstetric complications (mostly major haemorrhage) during home births. In Australia, it was included in "resuscitation drug packs" in hospitals. These packs were sealed boxes containing all the drugs required for an in-hospital resuscitation. They were prepared by the hospital pharmacy and, because they were sealed (usually just with sticky tape), the drug contents were guaranteed to be present for use in a cardiac arrest. These packs contained ampoules of powdered suxethonium so a relaxant was available to facilitate intubation. Suxamethonium could not be used in these packs because of its requirement for refrigeration. This was certainly an issue in Queensland, as it could be quite warm and hospital wards were in the past generally not air-conditioned. It can still be used where the storage issue is a concern (e.g. by the military, in rural locations or in third-world countries). The most recent reference to it in Medline is from 1976, but it is occasionally mentioned in passing in other journals. References Muscle relaxants Nicotinic agonists Chlorides Quaternary ammonium compounds Esters
Suxethonium chloride
Chemistry
370
15,072,028
https://en.wikipedia.org/wiki/Kua-UEV
Ubiquitin-conjugating enzyme E2 variant 1, also known as Kua-UEV, is a human gene. The Kua-UEV mRNA is an infrequent but naturally occurring co-transcribed product of the neighboring Kua and UBE2V1 genes. Ubiquitin-conjugating E2 enzyme variant proteins constitute a distinct subfamily within the E2 protein family. They have sequence similarity to other ubiquitin-conjugating enzymes but lack the conserved cysteine residue that is critical for the catalytic activity of E2s. Two alternative transcripts encoding different isoforms have been described. The proteins produced by these transcripts have UEV1 B domains but the proteins are localized to the cytoplasm rather than to the nucleus. The significance of these co-transcribed mRNAs and the function of their protein products have not yet been determined. References Further reading
Kua-UEV
Chemistry
189
2,342,881
https://en.wikipedia.org/wiki/Balloon%20rocket
A balloon rocket is a rubber balloon filled with air or other gases. Besides being simple toys, balloon rockets are widely used as a teaching device to demonstrate basic physics. How it works To launch a simple rocket, the untied opening of an inflated balloon is released. The elasticity of the balloon forces the air out through the opening, and the resulting pressure creates a thrust which propels the balloon forward as it deflates. It is usual for the balloon to be propelled somewhat uncontrollably, as it flies with an unstable centre of mass and turbulence occurs in the opening as the air escapes, causing the neck to flap rapidly and disperse air outwards in random directions. Near the end of its deflation, the balloon may suddenly shoot quickly through the air shortly before it drops down, because the rubber rapidly squeezes out the remaining air inside as it returns to its uninflated size. The flight altitude only amounts to some metres, with larger or lighter balloons often achieving longer flights. In addition, a cylindrical-shaped (or "airship") balloon may have a more stable flight when released. If the balloon is inflated with helium or other lighter-than-air gases, it tends to fly in an inclined trajectory (usually going upwards), due to the buoyancy of the gas. In physics The balloon rocket can be used easily to demonstrate simple physics, namely Newton's third law of motion. A common experiment with a balloon rocket consists in adding other objects such as a string or fishing line, a drinking straw and adhesive tape to the balloon itself. The string is threaded through the straw and is attached at both ends to fixed objects. The straw is then taped to the side of the inflated balloon, with the mouth of the balloon touching the object at which it is pointed. When the balloon is released, it propels itself along the length of the string. Alternatively, a balloon rocket car can be built. Rocket balloon There is also a dedicated toy known as a rocket balloon, usually tubular-shaped and inflated with a special pump. These balloons, when released, fly in a more stable direction because of a steadier thrust of air and their elongated shape, unlike ordinary round balloons which often launch uncontrollably. Aside from the shape, rocket balloons are also characterized by their distinctive loud buzzing or screaming noises due to the tight, reed-like opening designed to make noise as the air rushes through. They are also known as noisemaker balloons, due to the aforementioned noise. As cartoon gags The fact that an untied toy balloon flies away when released has become a staple recurring gag and comedic effect in cartoons. For instance, when an object or a character is comically being "inflated" and then deflates, it flies away uncontrollably, in a similar fashion to a balloon itself. In addition, the noise that a balloon creates when deflating is sometimes used in conjunction with this comedic effect. References External links NasaQuest: Teacher Information on balloon rockets YesMag: Balloon rockets Science Museum of Minnesota: Rocket Balloon Rocket Video Balloons Inflatable manufactured goods
Balloon rocket
Chemistry
641
10,332,907
https://en.wikipedia.org/wiki/Acylfulvene
Acylfulvene is a class of cytotoxic semi-synthetic derivatives of illudin, a natural product that can be extracted from the jack o'lantern mushroom (Omphalotus olearius). One important acylfulvene, 6-hydroxymethylacylfulvene (irofulven), has been evaluated for the treatment of a wide assortment of cancers and tumors. It is thought that acylfulvene compounds kill cancer cells by DNA alkylation (see DNA methylation). References Enones Tertiary alcohols Spiro compounds Cyclopropanes
Acylfulvene
Chemistry
130
31,380,244
https://en.wikipedia.org/wiki/Wafer%20bond%20characterization
Wafer bond characterization refers to the process of evaluating the quality and strength of a bond between two semiconductor wafers. The wafer bond characterization is based on different methods and tests. Of high importance are successfully bonded wafers without flaws; such flaws can be caused by void formation in the interface due to unevenness or impurities. The bond connection is characterized for wafer bond development or for quality assessment of fabricated wafers and sensors. Overview Wafer bonds are commonly characterized by three important encapsulation parameters: bond strength, hermeticity of encapsulation and bonding-induced stress. The bond strength can be evaluated using double cantilever beam tests or chevron and micro-chevron tests. Other pull tests as well as burst, direct shear or bend tests also enable the determination of the bond strength. The packaging hermeticity is characterized using membrane, He-leak and resonator/pressure tests. Three additional possibilities to evaluate the bond connection are optical, electron and acoustic measurements and instrumentation. First, optical measurement techniques use an optical microscope, IR transmission microscopy and visual inspection. Second, electron measurement is commonly applied using an electron microscope, e.g. scanning electron microscopy (SEM), high voltage transmittance electron microscopy (HVTEM) and high resolution scanning electron microscopy (HRSEM). Finally, typical acoustic measurement approaches are the scanning acoustic microscope (SAM), scanning laser acoustic microscope (SLAM) and C-mode scanning acoustic microscope (C-SAM). The specimen preparation is sophisticated, and the mechanical and electronic properties are important for the characterization and comparison of bonding technologies. Infrared (IR) transmission microscopy Infrared (IR) void imaging is possible if the analyzed materials are IR transparent, i.e. silicon. This method gives a rapid qualitative examination and is very suitable due to its sensitivity to the surface and to the buried interface. It obtains information on the chemical nature of the surface and interface. Infrared transmitted light is based on the fact that silicon is translucent at wavelengths ≥ 1.2 μm. The equipment consists of an infrared lamp as light source and an infrared video system. The IR imaging system enables the analysis of the bond wave and additionally of micromechanical structures as well as deformities in the silicon. This procedure also allows the analysis of multiple-layer bonds. The image contrast depends on the distance between the wafers. Usually, if monochromatic IR light is used, the center of the wafer is displayed brighter because of the close proximity of the wafer surfaces there. Particles in the bond interface generate highly visible spots with differing contrast because of interference (wave propagation) fringes. Unbonded areas can be shown if the void opening (height) is ≥ 1 nm. Fourier transform infrared (FT-IR) spectroscopy The Fourier transform infrared (FT-IR) spectroscopy is a non-destructive hermeticity characterization method. Absorption of the radiation at gas-specific wavelengths enables the analysis of enclosed gases. Ultrasonic microscopy Ultrasonic microscopy uses high frequency sound waves to image bonded interfaces. Deionized water is used as the acoustic interconnect medium between the electromagnetic acoustic transducer and the wafer.
This method works with an ultrasonic transducer scanning the wafer bond. The reflected sound signal is used for the image creation. The lateral resolution depends on the ultrasonic frequency, the acoustic beam diameter and the signal-to-noise ratio (contrast). Unbonded areas, i.e. impurities or voids, do not reflect the ultrasonic beam like bonded areas do, therefore a quality assessment of the bond is possible. Double cantilever beam (DCB) test The double cantilever beam test, also referred to as the crack opening or razor blade method, is a method to determine the strength of the bond. This is achieved by determining the energy of the bonded surfaces. A blade of a specific thickness is inserted between the bonded wafer pair. This leads to a split-up of the bond connection. The crack length equals the distance between the blade tip and the crack tip and is determined using IR transmitted light. The IR light is able to illuminate the crack when using materials transparent to IR or visible light. If the fracture surface toughness is very high, it is very difficult to insert the blade, and the wafers are in danger of breaking as the blade slides in. The DCB test characterizes the time-dependent strength by mechanical fracture evaluation and is therefore well suited for lifetime predictions. A disadvantage of this method is that the results can be influenced in the time between the insertion of the blade and the taking of the IR image. In addition, the measurement inaccuracy increases with a high surface fracture toughness, which results in a smaller crack length or in wafers broken at blade insertion, and it is amplified by the fourth-power dependence on the measured crack length. The measured crack length determines the surface energy in relation to a rectangular, beam-shaped specimen: γ = 3·E·t_w³·t_b² / (32·L⁴). Thereby E is the Young's modulus, t_w the wafer thickness, t_b the blade thickness and L the measured crack length. In the literature, different DCB models are mentioned, e.g. measurement approaches by Maszara, Gillis and Gilman, Srawley and Gross, Kanninen or Williams. The most commonly used approaches are by Maszara or Gillis and Gilman. Maszara model The Maszara model neglects shear stress as well as stress in the un-cleaved part for the obtained crack lengths. The compliance of a symmetric DCB specimen is described as C = 8·L³ / (E·w·t³). The compliance C is determined from the crack length L, the width w and the beam thickness t; E defines the Young's modulus. The surface fracture energy is γ = 3·E·t³·δ² / (32·L⁴), with δ as the load-point displacement. Gillis and Gilman model The Gillis and Gilman approach considers bending and shear forces in the beam. The compliance equation consists of three terms: the first term describes the strain energy in the cantilever due to bending, the second term is the contribution from elastic deformations in the un-cleaved specimen part, and the third term considers the shear deformation. The coefficients of the first two terms depend on the conditions of the fixed end of the cantilever. The shear coefficient depends on the cross-section geometry of the beam. Chevron test The chevron test is used to determine the fracture toughness of brittle construction materials. The fracture toughness is a basic material parameter for analyzing the bond strength. The chevron test uses a special notch geometry for the specimen, which is loaded with an increasing tensile force. The chevron notch geometry is commonly in the shape of a triangle with different bond patterns. At a specific tensile load the crack starts at the chevron tip and grows with continuously applied load until a critical length is reached.
The crack growth becomes unstable and accelerates, resulting in a fracture of the specimen. The critical length depends only on the specimen geometry and the loading condition. The fracture toughness is commonly determined by measuring the recorded fracture load of the test. This improves the test quality and accuracy and decreases measurement scatter. Two approaches, based on the energy release rate G or the stress intensity factor K, can be used for explaining the chevron test method. The fracture occurs when G or K reaches a critical value, describing the fracture toughness G_c or K_c. The advantage of using a chevron notch specimen is the formation of a specified crack of well-defined length. The disadvantage of the approach is that the gluing required for loading is time consuming and may induce data scatter due to misalignment. Micro chevron (MC) test The micro chevron (MC) test is a modification of the chevron test using a specimen of defined and reproducible size and shape. The test allows the determination of the critical energy release rate G_c and the critical fracture toughness K_IC. It is commonly used to characterize the wafer bond strength as well as the reliability. The reliability characterization is determined based on the fracture mechanical evaluation of critical failure. The evaluation is determined by analyzing the fracture toughness as well as the resistance against crack propagation. The fracture toughness allows comparison of the strength properties independent of the particular specimen geometry. In addition, the bond strength of the bonded interface can be determined. The chevron specimen is designed out of bonded stripes in the shape of a triangle. The space at the tip of the chevron structure triangle is used as a lever arm for the applied force. This reduces the force required to initiate the crack. The dimensions of the micro chevron structures are in the range of several millimeters, with a chevron notch angle of usually 70°. This chevron pattern is fabricated using wet or reactive ion etching. For the MC test, a special specimen stamp is glued onto the non-bonded edge of the processed structures. The specimen is loaded in a tensile tester and the load is applied perpendicular to the bonded area. When the load reaches the maximum bearable conditions, a crack is initiated at the tip of the chevron notch. By increasing the mechanical stress by means of a higher loading, two opposing effects can be observed. First, the resistance against the crack expansion increases, based on the increasing bonded width of the triangular-shaped first half of the chevron pattern. Second, the lever arm gets longer with increased crack length. From the critical crack length on, an unstable crack expansion and the destruction of the specimen are initiated. The critical crack length corresponds to the maximum force in a force-length diagram and to a minimum of the geometric function Y*. The fracture toughness K_IC can be calculated from the maximum force F_max, the specimen width W and thickness B as K_IC = (F_max / (B·√W)) · Y*_min. The maximum force is determined during the test, and the minimum stress intensity coefficient Y*_min is determined by FE simulation. In addition, the energy release rate can be determined, with E as modulus of elasticity and ν as Poisson's ratio, as G_c = K_IC²·(1 − ν²) / E. The advantage of this test is the high accuracy compared to other tensile or bend tests. It is an effective, reliable and precise approach for the development of wafer bonds as well as for the quality control of micromechanical device production.
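As a numerical illustration of the two relations above, the following C sketch converts a measured maximum force into a fracture toughness and then into an energy release rate. The geometry value Y*_min would in practice come from an FE simulation, and all input numbers below are invented for illustration only.

#include <math.h>
#include <stdio.h>

/* K_IC = (F_max / (B * sqrt(W))) * Y_min, with B = specimen thickness,
   W = specimen width and Y_min = minimum of the geometric function (from FEM). */
static double fracture_toughness(double f_max, double b, double w, double y_min) {
    return f_max / (b * sqrt(w)) * y_min;
}

/* Plane-strain relation G_c = K_IC^2 * (1 - nu^2) / E. */
static double energy_release_rate(double k_ic, double e_mod, double nu) {
    return k_ic * k_ic * (1.0 - nu * nu) / e_mod;
}

int main(void) {
    /* Invented example values: 25 N fracture load, 0.5 mm thickness,
       5 mm width, Y_min = 4, silicon-like E = 166 GPa, nu = 0.27. */
    double k = fracture_toughness(25.0, 0.5e-3, 5e-3, 4.0);
    double g = energy_release_rate(k, 166e9, 0.27);
    printf("K_IC = %.3g Pa*sqrt(m), G_c = %.3g J/m^2\n", k, g);
    return 0;
}

Build with the math library (e.g. gcc mc_test.c -lm); the printed values follow directly from the two formulas and have no physical significance beyond the invented inputs.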
Bond testing Bond strength measurement or bond testing is performed using two basic methods: pull testing and shear testing. Both can be done destructively, which is more common (also on wafer level), or non-destructively. They are used to determine the integrity of materials and manufacturing procedures, and to evaluate the overall performance of the bonding frame, as well as to compare various bonding technologies with each other. The success or failure of the bond is judged based on the measured applied force, the failure type due to the applied force and the visual appearance of the residual medium used. A development in bond strength testing of adhesively bonded composite structures is laser bond inspection (LBI). LBI provides a relative strength quotient derived from the fluence level of the laser energy delivered onto the material, compared to the strength of bonds previously tested mechanically at the same laser fluence. LBI provides nondestructive testing of bonds that were adequately prepared and meet engineering intent. Pull testing Measuring bond strength by pull testing is often the best way to obtain the failure mode of interest. Additionally, and unlike in a shear test, as the bond separates the fracture surfaces are pulled away from each other cleanly, enabling accurate failure mode analysis. To pull a bond requires the substrate and interconnect to be gripped; because of size, shape and material properties, this can be difficult, particularly for the interconnection. In these cases, a set of accurately formed and aligned tweezer tips with precision control of their opening and closing is likely to make the difference between success and failure. The most common type of pull test is the wire pull test. Wire pull testing applies an upward force under the wire, effectively pulling it away from the substrate or die. Shear testing Shear testing is the alternative method to determine the strength a bond can withstand. Various variants of shear testing exist. As with pull testing, the objective is to recreate the failure mode of interest in the test. If that is not possible, the operator should focus on putting the highest possible load on the bond. White Light Interferometers White light interferometry is commonly used for detecting deformations of the wafer surface based on optical measurements. Low-coherence light from a white light source passes through the optical top wafer, e.g. a glass wafer, to the bond interface. Usually there are three different types of white light interferometers: diffraction grating interferometers, vertical scanning or coherence probe interferometers, and white light scatter plate interferometers. For the white light interferometer, the position of the zero-order interference fringe and the spacing of the interference fringes need to be independent of wavelength. White light interferometry can also be utilized to detect deformations of the wafer in pressure sensors: low-coherence light from a white light source passes through the top wafer to the sensor. The white light is generated by a halogen lamp and modulated. The spectrum of the light reflected from the sensor cavity is detected by a spectrometer. The captured spectrum is used to obtain the cavity length of the sensor, which corresponds to the applied pressure. This pressure value is subsequently displayed on a screen.
The cavity length $L$ is determined using $L = \frac{\lambda_1 \lambda_2}{2 n (\lambda_1 - \lambda_2)}$ with $n$ as the refractive index of the sensor cavity material and $\lambda_1 > \lambda_2$ as adjacent peaks in the reflection spectrum. The advantage of using white light interferometry as a characterization method is the reduced influence of bending losses. References Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding Microtechnology Semiconductor device fabrication
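A minimal numeric sketch of the cavity-length relation above, in Python; the wavelengths and refractive index are hypothetical example values, not taken from any particular sensor.

```python
def cavity_length(lam1, lam2, n):
    """Cavity length from two adjacent reflection-spectrum peaks."""
    lam_long, lam_short = max(lam1, lam2), min(lam1, lam2)
    return lam_long * lam_short / (2.0 * n * (lam_long - lam_short))

# Adjacent peaks at 800 nm and 790 nm in an air-gap cavity (n = 1):
print(cavity_length(800e-9, 790e-9, 1.0))  # ~3.16e-5 m, i.e. about 31.6 um
```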
Wafer bond characterization
Materials_science,Engineering
2,775
67,451,399
https://en.wikipedia.org/wiki/Organization%20for%20Ethical%20Source
The Organization for Ethical Source (OES) is a non-profit organization founded by Coraline Ada Ehmke in December 2020 to support the ethical source movement, which promotes ethics and social responsibility in open source and holds that "software freedom must always be in service of human freedom". The organization is dedicated to "giving technologists tools and resources to ensure that their work is being used for social good and to minimize harm". It develops tools to "promote fair, ethical, and pro-social outcomes for those who contribute to, or are affected by, open source technologies". The movement has facilitated a new kind of license, the Hippocratic License, inspired by the medical Hippocratic Oath. The license has been criticized as non-enforceable and non-open source, including by Bruce Perens, co-founder of the Open Source Initiative and author of the Open Source Definition, and has triggered debate within the open source movement. The Hippocratic License has been classified as non-free by the Free Software Foundation, while the Open Source Initiative stated on Twitter that the license is not an open source software license and that software distributed under it is not open source. During the 2021 controversy around Richard Stallman's return to the FSF board after his resignation in 2019, the OES issued a statement against it and was one of the signatory organizations of an open letter with thousands of signatures. See also Contributor Covenant Inclusive language Open source movement Women in Computing References External links Free and open-source software organizations Organizations established in 2020 Non-profit technology Intellectual property activism Digital rights organizations Charities based in Switzerland
Organization for Ethical Source
Technology
342
17,580,984
https://en.wikipedia.org/wiki/Cedarlane%20Laboratories
Cedarlane is a Canadian private corporation headquartered in Burlington, Ontario, Canada, that manufactures and distributes life science research products. Cedarlane's manufactured products include monoclonal antibodies, polyclonal antibodies, cell separation media, complement for tissue typing, and immunocolumns. Cedarlane is an ISO 9001:2008 and ISO 13485:2003 registered company. Cedarlane has become a multi-national corporation with over 100 employees in Canada and the United States. The two main locations are in Burlington, Ontario, Canada, and, coincidentally, in Burlington, North Carolina, US. In recent years, Cedarlane has partnered with a number of charitable Canadian organizations to raise funding for cancer research, economically impoverished children, men's health initiatives and other causes, including the Canadian Cancer Society, Canadian Breast Cancer Foundation, SickKids Foundation, and others. History Cedarlane was incorporated in 1975 by three Canadian researchers originating from the University of Toronto and the Ontario Cancer Institute: Dr. S. Abrahams, Dr. A.J. Farmilo and R.C. Course. In 2006, Cedarlane opened a branch office in Burlington, North Carolina, in the United States. In July 2007, Cedarlane became the exclusive distributor of ATCC products in Canada. In November of the same year, Cedarlane acquired CELLutions Biosystems Inc., a company founded by the University of Toronto Innovations Foundation. Products Cedarlane sells density-gradient cell separation media under the Lympholyte trade name. Cedarlane offers cell line platforms and various marker details (under CELLutions™) for academic and commercial research programs. Cedarlane also distributes over 5 million products on behalf of more than 1400 global life science manufacturing companies. References External links Cedarlane company website Biotechnology companies of Canada Privately held companies of Canada Life sciences industry
Cedarlane Laboratories
Biology
371
16,671,437
https://en.wikipedia.org/wiki/Meripilus%20giganteus
Meripilus giganteus is a polypore fungus in the family Meripilaceae. It causes a white rot in various types of broadleaved trees, particularly beech (Fagus), but also Abies, Picea, Pinus, Quercus and Ulmus species. This bracket fungus, commonly known as the giant polypore or black-staining polypore, is often found in large clumps at the base of trees, although fruiting bodies are sometimes found some distance away from the trunk, parasitizing the roots. M. giganteus has a circumboreal distribution in the Northern Hemisphere, and is widely distributed in Europe. In the field, it is recognizable by the large, multi-capped fruiting body, as well as its pore surface that quickly darkens black when bruised or injured. Description The basidiocarps consist of numerous rosette-like flattened fan-shaped pilei; they are typically , rarely in diameter and , rarely high. The individual caps, up to , rarely in diameter and thick, arise from a common basal stem. The weight is , but the heaviest specimen can reach . The color of the cap surface is pale tan to dull chestnut brown in young specimens but darkens with age into concentric zones (zonate) of various shades of brown. The surface is also finely fibrillose with tiny scales (squamules). There are 3 to 6 pores per millimeter on the underside; the pore surface bruises brown and black, helping to distinguish it from the similar species Grifola frondosa. Infection of a tree is often through a dead tap root, and decay is largely restricted to the roots, and then mainly on the underside. Infected trees often show thinning of the outer crown due to impaired root function. Tree failure is due to brittle fracture of degraded lateral roots. Microscopic features Spores are roughly spherical to ovoid or ellipsoid in shape, with typical dimensions of 6–6.5 × 5.5–6 μm. Under a microscope, they appear translucent (hyaline), smooth, and nonamyloid, meaning that they do not absorb stain from Melzer's reagent. The basidia—the spore-bearing cells—are club-shaped, 4-spored, and measure 22–40 by 7–8 μm. Polypore fungi may be further distinguished by the type of hyphae that make up their fruiting body. M. giganteus has a so-called monomitic hyphal system, as its fruiting body is composed of only vegetative hyphae. Edibility The giant polypore was previously considered inedible, due to its very coarse flesh and mildly acidic taste, but more recent sources list it as edible. Younger specimens may be more palatable; one author notes that it is "eaten in Japan". It may also be mistakenly consumed because of its resemblance to the edible species commonly known as Hen of the Woods (Grifola frondosa), which is regarded as much better tasting. Habitat and distribution This mushroom can be found growing on hardwoods, more rarely on conifers. According to Ryvarden and Gilbertson in their monograph on the polypores of Europe, M. giganteus grows especially on Quercus and Fagus tree species, but it has also been collected on the hardwoods Acer, Aesculus, Alnus, Betula, Castanea, Celtis, Corylus, Eucalyptus, Laurus, Myrica, Persea, Pittosporum, Platanus, Populus, Prunus, Pyrus, Tilia, Ulmus; it has also been found growing on the coniferous species Abies, Larix, and Pinus. M. giganteus has a circumboreal distribution in the Northern Hemisphere. It has been collected from Europe, Scandinavia, the area formerly known as the USSR, Iran and Turkey. Although many field guides list it as occurring in North America, this is due to confusion with the related M. sumstinei; M. 
giganteus is not found in North America. A study of the frequency of occurrence of wood-decay fungi on street trees and park trees in Hamburg, Germany, found that M. giganteus was the most common species. Similar species The polypore fungus Grifola frondosa is similar in overall appearance, but may be distinguished by its more greyish cap and larger pores. Bondarzewia berkeleyi, or "Berkeley's polypore", is often confused with M. giganteus (or M. sumstinei) in eastern North America, but can be distinguished by its lack of black bruising and much larger pores. References Cited literature External links Index Fungorum synonyms USDA ARS Fungal Database Meripilus: A new perspective Mushroom Expert Fungi on Wood Edible fungi Fungi described in 1794 Fungal tree pathogens and diseases Meripilaceae Fungi of Europe Taxa named by Christiaan Hendrik Persoon Fungus species
Meripilus giganteus
Biology
1,043
21,320,564
https://en.wikipedia.org/wiki/B%26O%20Supprettes
B&O Supprettes is the brand name for a prescription medication containing powdered opium and belladonna alkaloids in suppository form. They are indicated for the treatment of moderate to severe pain from urethral spasm, and for extending the interval(s) between injections of opiates. The drug has various "off-label" uses, including renal colic, intestinal cramps, tenesmus and diarrhea. They are also often prescribed after urinary bladder surgery. B&O Supprettes was unique in the United States as the only opium-containing drug sold there in suppository form and, in fact, one of the very few US medications containing opium in any form, along with paregoric and opium tincture (laudanum). History B&O Supprettes (the name is derived from the generic term 'belladonna/opium suppository') is an "unapproved" drug according to the Food and Drug Administration (FDA) – that is, the drug existed before the Food, Drug and Cosmetic Act of 1938. Accordingly, the compound has never undergone specific medical trials, and its efficacy has never been required to be demonstrated. The FDA has put pressure on the manufacturers of this drug for this reason. The original manufacturer of the Supprettes, Eli Lilly and Company, has long since lost any patent to the drug. Amerifit, which manufactured generic Supprettes prior to 2008, was cautioned by the FDA due to the unapproved nature of the drug combination. Since 2008, Paddock Laboratories has manufactured a generic version of the Supprettes after working with the FDA on marketing issues related to the unapproved nature of the drug. Dosage and administration The drug and its generic counterparts are supplied in packages of 12, and available in two strengths: Each B&O Supprettes suppository #15 A contains 16.2 mg (1/4 grain) of belladonna and 30 mg (1/2 grain) of opium. Each B&O Supprettes suppository #16 A contains 16.2 mg of belladonna and 60 mg (1 grain) of opium. The usual dose is one suppository rectally once or twice daily PRN (as needed), not to exceed four Supprettes in a 24-hour period. In the United States, B&O Supprettes is a Schedule II drug under the Controlled Substances Act of 1970; a written prescription is mandatory, and no refills are permitted. Refrigerated storage is preferable, but not required. Most pharmacies consider B&O Supprettes to be a "special order" item, and as such they are not normally kept in inventory. Compounding pharmacies are capable of producing a generic form of the medication, and can modify the dosage(s) of the active ingredients (for pediatric or elderly patients, and those with chronic kidney disease) or the carrier (usually substituting cocoa butter) to best meet the needs of the patient at the request of the prescriber. References Drug brand names Drugs developed by Eli Lilly and Company Opiates
B&O Supprettes
Chemistry
681
2,171,631
https://en.wikipedia.org/wiki/Apple%20Design%20Awards
The Apple Design Awards (ADAs) is an event hosted by Apple Inc. at its annual Worldwide Developers Conference. The purpose of the event is to recognize the best and most innovative Macintosh and iOS software and hardware produced by independent developers, as well as the best and most creative uses of Apple's products. The ADAs are awarded in categories that vary each year. The awards have been presented annually since 1997. For the first two years of their existence, they were known as the "Human Interface Design Excellence Awards" (HIDE Awards). Since 2003, the physical award given to those recognized at the awards event has borne an Apple logo that glows when touched. The trophy is a long aluminum cube which weighs . These were engineered and built by Sparkfactor Design. Winners 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 Student Scholarship Design Award Winners Louis Harboe Bryan Keller Puck Meerburg 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 References External links Apple Inc. industrial design Design awards Apple Worldwide Developers Conference Video game awards Computer-related awards
Apple Design Awards
Engineering
224
52,335,682
https://en.wikipedia.org/wiki/Solenocyte
In biology, solenocytes are elongated, flagellated cells commonly found in lower invertebrates, such as flatworms (phylum Platyhelminthes) and chordates (sub-phylum Cephalochordata), and in several other animal species. In terms of function, solenocytes play a significant role in the excretory systems of their host organisms. For example, the lancelets, also referred to as amphioxus (genus Branchiostoma), utilize solenocytic protonephridia to perform excretion. In addition to excretion, these cells contribute to ion regulation and osmoregulation. With this in mind, solenocytes form subtypes of protonephridia and are often compared to another specialized excretory cell type, the flame cells. Solenocytes have flagella, while flame cells are generally ciliated. Cellular structure and configuration Solenocytes are mesoderm-derived and morphologically diverse cells containing a cytoplasmic cap or enclosed cell body with a nucleus residing in its core. A long tubule is attached to the cell body, and within its intracellular lumen lie either one or two long flagella. The continuously moving vibratile flagella extend from a protein structure, referred to as the basal body, found at the base of the flagellar structure. Extending through the length of the tubule, the flagella protrude into the protonephridium lumen (see Figure 1). The tubule wall structure is composed of thin, pillar-like rods perforated by tiny openings. These pore spaces are likely the site of interstitial fluid filtration. A nephridium contains approximately 500 solenocytes, each of which is roughly 50 microns in length (this measure includes the nucleated cell body and tubule). The excretory organ of amphioxus (Branchiostoma belcheri) contains clusters of solenocytes (the majority of which are situated along the coelomic surface of the ligamentum denticulatum). These clusters occur at patterned intervals, forming groups among the renal tubules of B. belcheri that, in a way, resemble the mesothelial cells surrounding the human body's internal organs. Additional studies indicate a resemblance to vertebrate podocytes, as vascular fluid within the ligamentum denticulatum may travel into the coelom through the narrow network of solenocyte gaps or foot processes. Function and mechanistic aspects In regard to function, flagella play a significant role in the excretory nature of solenocytes. These motile appendages extend from the solenocyte membrane and rely on the support of an axial filament (or axoneme), a basal body, and numerous microtubules. That said, the stability of the flagellum is crucial to its motility. The basal body, composed of nine triplet microtubules, functions to anchor the flagella in place (acting as a modified centriole). Situated at the center of each flagellum is the highly conserved axoneme, which contains nine doublet microtubules encircling a pair of singlet microtubules (generating a 9+2 pattern). Thousands of walking dynein motors are attached to the axoneme doublets; their hydrolysis of adenosine triphosphate (ATP) fuels flagellar motility. More specifically, the dyneins anchor onto one doublet within the outer microtubule ring, and as they "walk" towards an adjacent doublet, the entire flagellar structure is able to bend and beat (see Figure 2). In sum, flagellar motility enables solenocytes to waft excretory materials and coelomic fluid down the intracellular tubule lumen. 
In several lower invertebrates, solenocyte clusters project directly into coelomic canals, where they are submerged in coelomic fluid. This fluid contains a variety of materials, including salts, proteins, and corpuscles (e.g., leucocytes, phagocytes, eleocytes, mucocytes, etc.). In that respect, solenocytes play a major role in osmoregulation, ion regulation, and homeostasis through the movement of coelomic fluid. Branchiostoma nephridia also have tiny blood vessels, and the protonephridia function to absorb nitrogenous waste from coelomic fluid, as well as from the blood sinuses via diffusion. Implications of research In addition to a greater understanding of excretory organs within other invertebrates, further research on solenocyte composition and function can advance current knowledge of renal function, human health, and even certain genetic diseases within the vertebrate world. The cephalochordate amphioxus (see Figure 3) can contribute to this research as a close relative of vertebrates. Hatschek's nephridium Compared to the paired series of protonephridia, Hatschek's nephridium is a large unpaired excretory structure found within Branchiostoma virginiae. The nephridium, along with its collection tubule, is located to the left of the notochord and beside the left anterior aorta. Hatschek's nephridium is like a protonephridium with a single, bent branch consisting of numerous solenocytes. The anterior end of this structure sits directly in front of Hatschek's pit, while the posterior end (at the rear of the velum) opens into the endodermal pharynx. Flagellated filtration cells called cyrtopodocytes occupy the length of the collection tubule. These filtration cells closely resemble solenocytes in structure and function. Renal function Research suggests that coelomic myoepithelial cells in amphioxus (Branchiostoma) may have significance in renal function. Located along the coelom, myoepithelial cells have both thick (18–25 nm in diameter) and thin (5–7 nm in diameter) microfilaments. These microfilaments appear to be more abundant in myoepithelial cells that are in close proximity to the solenocytes attached to the coelomic surface of the ligamentum denticulatum. Because the beating of solenocyte flagella propels coelomic fluid through the excretory tubules, myoepithelial cells near solenocyte clusters may affect renal function by regulating fluid motility within the coelomic cavity. Human health and genetic diseases Within the vertebrate lineage, significant genome duplications took place after the divergence of Branchiostoma, making it a potentially valuable model for gaining insight into vertebrate biological mechanisms. Branchiostoma is useful for investigating human health and genetic disease. Along with signaling pathways, numerous homologs of vertebrate organs share cellular, developmental, and physiological parameters with their vertebrate equivalents. On that premise, solenocyte function within Branchiostoma could provide insight into metabolic diseases, such as renal cell carcinoma (RCC). References Wikipedia Student Program Cell biology Flagellates
Solenocyte
Biology
1,535
208,850
https://en.wikipedia.org/wiki/Quotient%20of%20a%20formal%20language
In mathematics and computer science, the right quotient (or simply quotient) of a language $L_1$ with respect to a language $L_2$ is the language consisting of the strings w such that wx is in $L_1$ for some string x in $L_2$. Formally: $L_1 / L_2 = \{ w \mid wx \in L_1 \text{ for some } x \in L_2 \}$ In other words, for all the strings in $L_1$ that have a suffix in $L_2$, the suffix is removed. Similarly, the left quotient of $L_1$ with respect to $L_2$ is the language consisting of the strings w such that xw is in $L_1$ for some string x in $L_2$. Formally: $L_2 \backslash L_1 = \{ w \mid xw \in L_1 \text{ for some } x \in L_2 \}$ In other words, we take all the strings in $L_1$ that have a prefix in $L_2$, and remove this prefix. Note that the operands of $\backslash$ are in reverse order: the first operand is $L_2$ and $L_1$ is second. Example Consider $L_1 = \{ a^n b^n c^n \mid n \ge 0 \}$ and $L_2 = \{ b^i c^j \mid i, j \ge 0 \}$. Now, if we insert a divider into an element of $L_1$, the part on the right is in $L_2$ only if the divider is placed adjacent to a b (in which case i ≤ n and j = n) or adjacent to a c (in which case i = 0 and j ≤ n). The part on the left, therefore, will be either $a^n b^{n-i}$ or $a^n b^n c^{n-j}$; and $L_1 / L_2$ can be written as $\{ a^p b^q c^r \mid p \ge q, r = 0 \} \cup \{ a^p b^p c^r \mid r \le p \}$. Properties Some common closure properties of the quotient operation include: The quotient of a regular language with any other language is regular. The quotient of a context free language with a regular language is context free. The quotient of two context free languages can be any recursively enumerable language. The quotient of two recursively enumerable languages is recursively enumerable. These closure properties hold for both left and right quotients. See also Brzozowski derivative References Formal languages
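For finite languages, the definitions above translate directly into code. The following Python sketch is illustrative only: the article's example languages are infinite, so small finite stand-ins are used here.

```python
def right_quotient(l1, l2):
    # { w : w + x is in l1 for some x in l2 }
    return {w[:len(w) - len(x)] for w in l1 for x in l2 if w.endswith(x)}

def left_quotient(l2, l1):
    # { w : x + w is in l1 for some x in l2 }  (note the operand order)
    return {w[len(x):] for w in l1 for x in l2 if w.startswith(x)}

l1 = {"aabbcc", "abc"}  # finite stand-in for { a^n b^n c^n }
l2 = {"bcc", "c", ""}   # finite stand-in for { b^i c^j }
print(right_quotient(l1, l2))
# -> {'aab', 'aabbc', 'aabbcc', 'ab', 'abc'} (in some order)
```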
Quotient of a formal language
Mathematics
331
14,429,047
https://en.wikipedia.org/wiki/TAAR2
Trace amine-associated receptor 2 (TAAR2), formerly known as G protein-coupled receptor 58 (GPR58), is a protein that in humans is encoded by the TAAR2 gene. TAAR2 is co-expressed with Gα proteins; however, its signal transduction mechanisms have not been determined. Tissue distribution Human TAAR2 (hTAAR2) is expressed in the cerebellum, olfactory sensory neurons in the olfactory epithelium, and leukocytes (i.e., white blood cells), among other tissues. hTAAR1 and hTAAR2 are both required in granulocytes for white blood cell activation by trace amines. In brain histochemistry of mice with a LacZ insertion in the TAAR2 gene, a histochemical reaction was found in the glomerular layer of the olfactory bulb, with intensive staining in the deeper layers as well. The histochemical reaction was observed in the fibers of the olfactory nerve, in the glomeruli of the glomerular layer, in several short axon (SA) cells (outer plexiform layer or granular layer) and in neuronal projections visualized throughout the depth of the olfactory bulb. Furthermore, LacZ staining was observed in the limbic areas of the brain receiving olfactory input, i.e., the piriform cortex molecular layer, hippocampus (CA1 field, pyramidal layer), hypothalamic lateral zone (zona incerta) and lateral habenula. In addition, a histochemical reaction was found in the midbrain raphe nuclei and the primary somatosensory area of the cortex (layer 5). Real-time quantitative PCR with reverse transcription confirmed TAAR2 gene expression in mouse brain areas such as the frontal cortex, hypothalamus, and brainstem. Involvement in the functioning of monoamine systems TAAR2 knockout mice have a significantly higher level of dopamine in striatal tissue than wild-type littermates and a lower level of norepinephrine in the hippocampus. They also have lower levels of MAO-B expression in the midbrain and striatum. A significantly higher number of dopamine neurons was detected in TAAR2-KO mice in the substantia nigra pars compacta. TAAR2 knockout mice show a significantly higher level of horizontal activity and a lower immobilization time in the forced swim test. Involvement in adult neurogenesis It has been found that TAAR2 knockout mice have an increased number of neuroblast-like and proliferating cells in both the subventricular and subgranular zones of the dentate gyrus in comparison to wild-type animals. Furthermore, TAAR2 knockout mice have an increased brain-derived neurotrophic factor (BDNF) level in the striatum. A single nucleotide polymorphism producing a nonsense mutation in the TAAR2 gene is associated with schizophrenia. TAAR2 is a probable pseudogene in 10–15% of Asians as a result of a polymorphism that produces a premature stop codon at amino acid 168. Involvement in immune cell migration and function T cells, B cells and peripheral mononuclear cells express TAAR2 mRNA. Based on siRNA experiments, migration toward TAAR1 ligands requires both TAAR1 and TAAR2 expression. In T cells, the same stimuli trigger cytokine secretion, while in B cells immunoglobulin secretion is triggered. Possible ligands 3-Iodothyronamine (T1AM) has been identified as a non-selective ligand for TAAR2. Additional TAAR1 ligands, tyramine and phenethylamine, trigger TAAR2-dependent actions, though direct binding has not been demonstrated. See also Trace amine Trace amine-associated receptor References G protein-coupled receptors
TAAR2
Chemistry
825
11,709,087
https://en.wikipedia.org/wiki/Turbine%20engine%20failure
A turbine engine failure occurs when a gas turbine engine unexpectedly stops producing power due to a malfunction other than fuel exhaustion. The term most often applies to aircraft, but other turbine engines can also fail, such as ground-based turbines used in power plants or combined diesel and gas vessels and vehicles. Reliability Turbine engines in use on today's turbine-powered aircraft are very reliable. Engines operate efficiently with regularly scheduled inspections and maintenance. These units can have lives ranging in the tens of thousands of hours of operation. However, engine malfunctions or failures occasionally occur that require an engine to be shut down in flight. Since multi-engine airplanes are designed to fly with one engine inoperative and flight crews are trained to fly with one engine inoperative, the in-flight shutdown of an engine typically does not constitute a serious safety-of-flight issue. The Federal Aviation Administration (FAA) was quoted as stating that turbine engines have a failure rate of one per 375,000 flight hours, compared to one every 3,200 flight hours for aircraft piston engines. Due to "gross under-reporting" of general aviation piston engine in-flight shutdowns (IFSD), the FAA has no reliable data and assessed the rate at "between 1 per 1,000 and 1 per 10,000 flight hours". Continental Motors reports that the FAA states general aviation engines experience one failure or IFSD every 10,000 flight hours, and states the rate for its Centurion engines is one per flight hours, lowering to one per flight hours in 2013–2014. The General Electric GE90 has an in-flight shutdown (IFSD) rate of one per million engine flight-hours. The Pratt & Whitney Canada PT6 is known for its reliability, with an in-flight shutdown rate of one per hours from 1963 to 2016, lowering to one per hours over 12 months in 2016. Emergency landing Following an engine shutdown, a precautionary landing is usually performed with airport fire and rescue equipment positioned near the runway. The prompt landing is a precaution against the risk that another engine will fail later in the flight, or that the engine failure that has already occurred may have caused or been caused by other as-yet unknown damage or malfunction of aircraft systems (such as fire or damage to aircraft flight controls) that may pose a continuing risk to the flight. Once the aircraft lands, fire department personnel assist with inspecting the aircraft to ensure it is safe before it taxis to its parking position. Rotorcraft Turboprop-powered aircraft and turboshaft-powered helicopters are also powered by turbine engines and are subject to engine failures for many of the same reasons as jet-powered aircraft. In the case of an engine failure in a helicopter, it is often possible for the pilot to enter autorotation, using the unpowered rotor to slow the aircraft's descent and provide a measure of control, usually allowing for a safe emergency landing even without engine power. Shutdowns that are not engine failures Most in-flight shutdowns are harmless and likely to go unnoticed by passengers. For example, it may be prudent for the flight crew to shut down an engine and perform a precautionary landing in the event of a low oil pressure or high oil temperature warning in the cockpit. However, passengers in a jet-powered aircraft may become quite alarmed by other engine events such as a compressor surge, a malfunction typified by loud bangs and even flames from the engine's inlet and tailpipe. 
A compressor surge is a disruption of the airflow through a gas turbine jet engine that can be caused by engine deterioration, a crosswind over the engine's inlet, ice accumulation around the engine inlet, ingestion of foreign material, or an internal component failure such as a broken blade. While this situation can be alarming, the engine may recover with no damage. Other events that can happen with jet engines, such as a fuel control fault, can result in excess fuel in the engine's combustor. This additional fuel can result in flames extending from the engine's exhaust pipe. As alarming as this may appear, at no time is the engine itself actually on fire. Also, the failure of certain components in the engine may result in a release of oil into the bleed air, which can cause an odor or oily mist in the cabin. This is known as a fume event. The dangers of fume events are the subject of debate in both aviation and medicine. Possible causes Engine failures can be caused by mechanical problems in the engine itself, such as damage to portions of the turbine or oil leaks, as well as by damage outside the engine, such as fuel pump problems or fuel contamination. A turbine engine failure can also be caused by entirely external factors, such as volcanic ash, bird strikes or weather conditions like precipitation or icing. Weather risks such as these can sometimes be countered through the use of supplementary ignition or anti-icing systems. Failures during takeoff A turbine-powered aircraft's takeoff procedure is designed around ensuring that an engine failure will not endanger the flight. This is done by planning the takeoff around three critical V speeds: V1, VR and V2. V1 is the critical engine failure recognition speed, the speed above which the takeoff can be continued after an engine failure but beyond which stopping distance is no longer guaranteed in the event of a rejected takeoff. VR is the speed at which the nose is lifted off the runway, a process known as rotation. V2 is the single-engine safety speed, the speed at which the aircraft can safely climb with one engine inoperative. The use of these speeds ensures that either sufficient thrust to continue the takeoff or sufficient stopping distance to reject it will be available at all times. Failure during extended operations In order to allow twin-engined aircraft to fly longer routes that take them more than an hour from a suitable diversion airport, a set of rules known as ETOPS (Extended Twin-engine Operational Performance Standards) is used to ensure that a twin-turbine-powered aircraft can safely arrive at a diversionary airport after an engine failure or shutdown, as well as to minimize the risk of such a failure. ETOPS includes maintenance requirements, such as frequent and meticulously logged inspections, and operational requirements, such as flight-crew training and ETOPS-specific procedures. Contained and uncontained failures Engine failures may be classified as either "contained" or "uncontained". A contained engine failure is one in which all internal rotating components remain within or embedded in the engine's case (including any containment wrapping that is part of the engine), or exit the engine through the tail pipe or air inlet. An uncontained engine event occurs when an engine failure results in fragments of rotating engine parts penetrating and escaping through the engine case. 
The very specific technical distinction between a contained and an uncontained engine failure derives from regulatory requirements for the design, testing, and certification of aircraft engines under Part 33 of the U.S. Federal Aviation Regulations, which has always required turbine aircraft engines to be designed to contain damage resulting from rotor blade failure. Under Part 33, engine manufacturers are required to perform blade-off tests to ensure containment of shrapnel if blade separation occurs. Blade fragments exiting the inlet or exhaust can still pose a hazard to the aircraft, and this should be considered by the aircraft designers. A nominally contained engine failure can still result in engine parts departing the aircraft, as long as the engine parts exit via the existing openings in the engine inlet or outlet and do not create new openings in the engine case containment. Fan blade fragments departing via the inlet may also cause airframe parts such as the inlet duct and other parts of the engine nacelle to depart the aircraft, due to deformation from the fan blade fragment's residual kinetic energy. The containment of failed rotating parts is a complex process which involves high-energy, high-speed interactions of numerous locally and remotely located engine components (e.g., the failed blade, other blades, containment structure, adjacent cases, bearings, bearing supports, shafts, vanes, and externally mounted components). Once the failure event starts, secondary events of a random nature may occur whose course and ultimate conclusion cannot be precisely predicted. Some of the structural interactions that have been observed to affect containment are the deformation and/or deflection of blades, cases, rotor, frame, inlet, casing rub strips, and the containment structure. Uncontained turbine engine disk failures within an aircraft engine present a direct hazard to an airplane and its crew and passengers, because high-energy disk fragments can penetrate the cabin or fuel tanks, damage flight control surfaces, or sever flammable fluid or hydraulic lines. Engine cases are not designed to contain failed turbine disks. Instead, the risk of uncontained disk failure is mitigated by designating disks as safety-critical parts, defined as the parts of an engine whose failure is likely to present a direct hazard to the aircraft. Notable uncontained engine failure accidents National Airlines Flight 27: a McDonnell Douglas DC-10 flying from Miami to San Francisco in 1973 had an overspeed failure of a General Electric CF6-6, resulting in one fatality. Two LOT Polish Airlines flights, both Ilyushin Il-62s, suffered catastrophic uncontained engine failures in the 1980s. The first was in 1980 on LOT Polish Airlines Flight 7, where flight controls were destroyed, killing all 87 on board. In 1987, on LOT Polish Airlines Flight 5055, the failure of the aircraft's inner left (#2) engine damaged the outer left (#1) engine, setting both on fire and causing loss of flight controls, leading to a crash that killed all 183 people on board. In both cases, the turbine shaft in engine #2 disintegrated due to production defects in the engines' bearings, which were missing rollers. The Tu-154 crash near Krasnoyarsk was a major aviation accident that occurred on Sunday, December 23, 1984, in the vicinity of Krasnoyarsk. The Tu-154B-2 airliner of the 1st Krasnoyarsk united aviation unit (Aeroflot) was performing passenger flight SU-3519 on the Krasnoyarsk–Irkutsk route when engine No. 3 failed during the climb. 
The crew decided to return to the airport of departure, but during the landing approach a fire broke out that destroyed the control systems; as a result, the plane crashed to the ground 3,200 meters from the threshold of the runway of Yemelyanovo airport and broke apart. Of the 111 people on board (104 passengers and 7 crew members), one survived. The cause of the catastrophe was the destruction of the first-stage disk of the low-pressure section of engine No. 3, which occurred due to the presence of fatigue cracks. The cracks were caused by a manufacturing defect: an inclusion of a titanium–nitrogen compound with a higher microhardness than the surrounding material. The methods used at that time for the manufacture and repair of disks, as well as the means of inspection, were found to be partially obsolete, which is why they did not ensure effective inspection and detection of such a defect. The defect itself probably arose from a nitrogen-enriched fragment accidentally entering the titanium sponge or the charge used for smelting the ingot. Cameroon Airlines Flight 786: a Boeing 737 flying between Douala and Garoua, Cameroon, in 1984 had a failure of a Pratt & Whitney JT8D-15 engine. Two people died. British Airtours Flight 28M: a Boeing 737 flying from Manchester to Corfu in 1985 suffered an uncontained engine failure and fire on takeoff. The takeoff was aborted and the plane turned onto a taxiway and began evacuating. Fifty-five passengers and crew were unable to escape and died of smoke inhalation. The accident led to major changes to improve the survivability of aircraft evacuations. United Airlines Flight 232: a McDonnell Douglas DC-10 flying from Denver to Chicago in 1989. The failure of the rear General Electric CF6-6 engine caused the loss of all hydraulics, forcing the pilots to attempt a landing using differential thrust. There were 111 fatalities. Prior to this crash, the probability of a simultaneous failure of all three hydraulic systems was considered as low as one in a billion. However, statistical models did not account for the position of the number-two engine, mounted at the tail close to the hydraulic lines, nor for fragments being released in many directions. Since then, aircraft engine designs have focused on keeping shrapnel from puncturing the cowling or ductwork, increasingly utilizing high-strength composite materials to achieve penetration resistance while keeping the weight low. Baikal Airlines Flight 130: the starter of engine No. 2 on a Tu-154 heading from Irkutsk to Domodedovo, Moscow, in 1994 failed to stop after engine startup and continued to operate at over 40,000 rpm, fed by open bleed valves from the engines, which caused an uncontained failure of the starter. A detached turbine disk damaged fuel and oil supply lines (which caused a fire) and hydraulic lines. The fire-extinguishing system failed to stop the fire, and the plane diverted back to Irkutsk. However, due to the loss of hydraulic pressure the crew lost control of the plane, which subsequently crashed into a dairy farm, killing all 124 on board and one person on the ground. ValuJet Flight 597: a DC-9-32 taking off from Hartsfield–Jackson Atlanta International Airport on June 8, 1995, suffered an uncontained engine failure of the 7th-stage high-pressure compressor disk due to inadequate inspection of the corroded disk. The resulting rupture caused jet fuel to flow into the cabin and ignite, and the fire damage caused the aircraft to be written off. 
Delta Air Lines Flight 1288: a McDonnell Douglas MD-88 flying from Pensacola, Florida, to Atlanta in 1996 had a cracked compressor rotor hub fail on one of its Pratt & Whitney JT8D-219 engines. Two died. TAM Flight 9755: a Fokker 100 departing Recife/Guararapes–Gilberto Freyre International Airport for São Paulo/Guarulhos International Airport on 15 September 2001 suffered an uncontained engine failure (Rolls-Royce RB.183 Tay) in which fragments of the engine shattered three cabin windows, causing decompression and partially pulling a passenger out of the plane. Another passenger held onto the victim until the aircraft landed, but the passenger who had been pulled through the window died. Qantas Flight 32: an Airbus A380 flying from London Heathrow to Sydney (via Singapore) in 2010 had an uncontained failure in a Rolls-Royce Trent 900 engine. The failure was found to have been caused by a misaligned counter bore within a stub oil pipe, leading to a fatigue fracture. This in turn led to an oil leak followed by an oil fire in the engine. The fire led to the release of the intermediate pressure turbine (IPT) disc. The airplane, however, landed safely. The incident led to the grounding of the entire Qantas A380 fleet. British Airways Flight 2276: a Boeing 777-200ER flying from Las Vegas to London in 2015 suffered an uncontained engine failure in its #1 GE90 engine during takeoff, resulting in a large fire on its port side. The aircraft successfully aborted the takeoff and was evacuated with no fatalities. American Airlines Flight 383: a Boeing 767-300ER flying from Chicago to Miami in 2016 suffered an uncontained engine failure in its #2 engine (General Electric CF6) during takeoff, resulting in a large fire which destroyed the outer right wing. The aircraft aborted the takeoff and was evacuated with 21 minor injuries but no fatalities. Air France Flight 66: an Airbus A380, registration F-HPJE, performing a flight from Paris, France, to Los Angeles, United States, was en route about southeast of Nuuk, Greenland, when it suffered a catastrophic engine failure in 2017 (General Electric / Pratt & Whitney Engine Alliance GP7000). The crew descended the aircraft and diverted to Goose Bay, Canada, for a safe landing about two hours later. References This article contains text from a publication of the United States National Transportation Safety Board. As a work of the United States Federal Government, the source is in the public domain and may be adapted freely per USC Title 17, Chapter 1, §105 (see Wikipedia:Public Domain). Turbines Jet engines Aviation safety Aviation risks Emergency aircraft operations Aircraft engines
Turbine engine failure
Chemistry,Technology
3,379
24,225,740
https://en.wikipedia.org/wiki/Popular%20Astronomy%20%28US%20magazine%29
Popular Astronomy is an American magazine published by John August Media, LLC and hosted at TechnicaCuriosa.com for amateur astronomers. Prior to its revival in 2009, the title was published between 1893 and 1951. It was the successor to The Sidereal Messenger, which was published from March 1882 to 1892. The first issue of Popular Astronomy appeared in September 1893. Each yearly volume of Popular Astronomy contained 10 issues, for a total of 59 volumes. The first editor, from 1893 to 1909, was William W. Payne of Carleton College, with Charlotte R. Willard as co-editor from 1893 to 1905. Payne was followed by Herbert C. Wilson, who served in the post between 1909 and 1926. Dr. Curvin Henry Gingrich, Professor of Mathematics and Astronomy at Carleton, served as the final editor of the initial publication run, which ended with his sudden death (from a heart attack) in 1951. Dr. Gingrich received a six-page eulogy, written by Dr. Frederick C. Leonard, in the August 1951 issue of the magazine. The magazine played an important role in the development of amateur variable star observing in the United States. In 2017, Popular Astronomy returned as part of TechnicaCuriosa.com, along with the sister titles Popular Electronics and Mechanix Illustrated. Writers Jane MacArthur FRAS, a British planetary scientist References 1893 establishments in the United States 1951 disestablishments in the United States Science and technology magazines published in the United States Astronomy magazines Defunct science fiction magazines published in the United States Magazines established in 1893 Magazines disestablished in 1951 Magazines published in Minnesota
Popular Astronomy (US magazine)
Astronomy
324
545,288
https://en.wikipedia.org/wiki/Spectral%20leakage
The Fourier transform of a function of time, s(t), is a complex-valued function of frequency, S(f), often referred to as a frequency spectrum. Any linear time-invariant operation on s(t) produces a new spectrum of the form H(f)•S(f), which changes the relative magnitudes and/or angles (phase) of the non-zero values of S(f). Any other type of operation creates new frequency components that may be referred to as spectral leakage in the broadest sense. Sampling, for instance, produces leakage, which we call aliases of the original spectral component. For Fourier transform purposes, sampling is modeled as a product between s(t) and a Dirac comb function. The spectrum of a product is the convolution between S(f) and another function, which inevitably creates the new frequency components. But the term 'leakage' usually refers to the effect of windowing, which is the product of s(t) with a different kind of function, the window function. Window functions happen to have finite duration, but that is not necessary to create leakage. Multiplication by a time-variant function is sufficient. Spectral analysis The Fourier transform of a sinusoid such as cos(ωt) is zero, except at frequency ±ω. However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform (or a similar transform) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method. The effects are most easily characterized by their impact on a sinusoidal s(t) function, whose unwindowed Fourier transform is zero for all but one frequency. The customary frequency of choice is 0 Hz, because the windowed Fourier transform is then simply the Fourier transform of the window function itself. When both sampling and windowing are applied to s(t), in either order, the leakage caused by windowing is a relatively localized spreading of frequency components, often with a blurring effect, whereas the aliasing caused by sampling is a periodic repetition of the entire blurred spectrum. Choice of window function Windowing of a simple waveform like cos(ωt) causes its Fourier transform to develop non-zero values (commonly called spectral leakage) at frequencies other than ω. The leakage tends to be worst (highest) near ω and least at frequencies farthest from ω. If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with our ability to distinguish them spectrally. Possible types of interference are often broken down into two opposing classes as follows: If the component frequencies are dissimilar and one component is weaker, then leakage from the stronger component can obscure the weaker one's presence. But if the frequencies are too similar, leakage can render them unresolvable even when the sinusoids are of equal strength. Windows that are effective against the first type of interference, namely where components have dissimilar frequencies and amplitudes, are called high dynamic range. Conversely, windows that can distinguish components with similar frequencies and amplitudes are called high resolution. 
The rectangular window is an example of a window that is high resolution but low dynamic range, meaning it is good for distinguishing components of similar amplitude even when the frequencies are also close, but poor at distinguishing components of different amplitude even when the frequencies are far apart. High-resolution, low-dynamic-range windows such as the rectangular window also have the property of high sensitivity, which is the ability to reveal relatively weak sinusoids in the presence of additive random noise. That is because the noise produces a stronger response with high-dynamic-range windows than with high-resolution windows. At the other extreme of the range of window types are windows with high dynamic range but low resolution and sensitivity. High-dynamic-range windows are most often justified in wideband applications, where the spectrum being analyzed is expected to contain many different components of various amplitudes. In between the extremes are moderate windows, such as Hann and Hamming. They are commonly used in narrowband applications, such as the spectrum of a telephone channel. In summary, spectral analysis involves a trade-off between resolving comparable-strength components with similar frequencies (high resolution / sensitivity) and resolving disparate-strength components with dissimilar frequencies (high dynamic range). That trade-off occurs when the window function is chosen. Discrete-time signals When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then a discrete Fourier transform (DFT). But the DFT provides only a sparse sampling of the actual discrete-time Fourier transform (DTFT) spectrum. Figure 2, row 3 shows a DTFT for a rectangularly-windowed sinusoid. The actual frequency of the sinusoid is indicated as "13" on the horizontal axis. Everything else is leakage, exaggerated by the use of a logarithmic presentation. The unit of frequency is "DFT bins"; that is, the integer values on the frequency axis correspond to the frequencies sampled by the DFT. So the figure depicts a case where the actual frequency of the sinusoid coincides with a DFT sample, and the maximum value of the spectrum is accurately measured by that sample. In row 4, it misses the maximum value by half a bin, and the resultant measurement error is referred to as scalloping loss (inspired by the shape of the peak). For a known frequency, such as a musical note or a sinusoidal test signal, matching the frequency to a DFT bin can be prearranged by choosing a sampling rate and a window length that result in an integer number of cycles within the window. Noise bandwidth The concepts of resolution and dynamic range tend to be somewhat subjective, depending on what the user is actually trying to do. But they also tend to be highly correlated with the total leakage, which is quantifiable. It is usually expressed as an equivalent bandwidth, B. It can be thought of as redistributing the DTFT into a rectangular shape with height equal to the spectral maximum and width B. The more the leakage, the greater the bandwidth. It is sometimes called noise equivalent bandwidth or equivalent noise bandwidth, because it is proportional to the average power that will be registered by each DFT bin when the input signal contains a random noise component (or is just random noise). A graph of the power spectrum, averaged over time, typically reveals a flat noise floor, caused by this effect. The height of the noise floor is proportional to B. 
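The trade-offs described above are easy to reproduce numerically. This minimal NumPy sketch (illustrative, not taken from the figures) compares a rectangular and a Hann window for a sinusoid at 13.5 bins, i.e. the worst-case scalloping condition:

```python
import numpy as np

n = 64
t = np.arange(n)
x = np.sin(2 * np.pi * (13.5 / n) * t)  # frequency falls halfway between bins

for name, w in (("rectangular", np.ones(n)), ("hann", np.hanning(n))):
    spec = np.abs(np.fft.rfft(x * w))
    spec /= spec.max()
    peak = int(spec.argmax())
    # Highest response far from the peak: a crude measure of the leakage floor
    far = np.delete(spec, range(max(peak - 4, 0), min(peak + 5, len(spec)))).max()
    print(f"{name:12s} peak near bin {peak}, far leakage {20 * np.log10(far):6.1f} dB")
```

The rectangular window shows a much higher leakage floor far from the peak, while the Hann window trades a wider main lobe for strongly suppressed sidelobes.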
So two different window functions can produce different noise floors, as seen in figures 1 and 3. Processing gain and losses In signal processing, operations are chosen to improve some aspect of quality of a signal by exploiting the differences between the signal and the corrupting influences. When the signal is a sinusoid corrupted by additive random noise, spectral analysis distributes the signal and noise components differently, often making it easier to detect the signal's presence or to measure certain characteristics, such as amplitude and frequency. Effectively, the signal-to-noise ratio (SNR) is improved by distributing the noise uniformly, while concentrating most of the sinusoid's energy around one frequency. Processing gain is a term often used to describe an SNR improvement. The processing gain of spectral analysis depends on the window function, both its noise bandwidth (B) and its potential scalloping loss. These effects partially offset each other, because windows with the least scalloping naturally have the most leakage. Figure 3 depicts the effects of three different window functions on the same data set, comprising two equal-strength sinusoids in additive noise. The frequencies of the sinusoids are chosen such that one encounters no scalloping and the other encounters maximum scalloping. Both sinusoids suffer less SNR loss under the Hann window than under the Blackman–Harris window. In general (as mentioned earlier), this is a deterrent to using high-dynamic-range windows in low-dynamic-range applications. Symmetry The window-function formulas produce discrete sequences, as if a continuous window function has been "sampled". (See an example at Kaiser window.) Window sequences for spectral analysis are either symmetric or 1 sample short of symmetric (called periodic, DFT-even, or DFT-symmetric). For instance, a true symmetric sequence, with its maximum at a single center point, is generated by the MATLAB function hann(9,'symmetric'). Deleting the last sample produces a sequence identical to hann(8,'periodic'). Similarly, the sequence hann(8,'symmetric') has two equal center points. Some functions have one or two zero-valued end points, which are unnecessary in most applications. Deleting a zero-valued end point has no effect on the window's DTFT (spectral leakage). But a function designed for N + 1 or N + 2 samples, in anticipation of deleting one or both end points, typically has a slightly narrower main lobe, slightly higher sidelobes, and a slightly smaller noise bandwidth. DFT-symmetry The predecessor of the DFT is the finite Fourier transform, and window functions were "always an odd number of points and exhibit even symmetry about the origin". In that case, the DTFT is entirely real-valued. When the same sequence is shifted into a DFT data window, the DTFT becomes complex-valued except at frequencies spaced at regular intervals of 1/N. Thus, when sampled by an N-length DFT, the samples (called DFT coefficients) are still real-valued. An approximation is to truncate the N + 1-length sequence to length N (effectively deleting one sample) and compute an N-length DFT. The DTFT (spectral leakage) is slightly affected, but the samples remain real-valued. The terms DFT-even and periodic refer to the idea that if the truncated sequence were repeated periodically, it would be even-symmetric about n = 0, and its DTFT would be entirely real-valued. But the actual DTFT is generally complex-valued, except at the DFT coefficients. 
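The MATLAB relationship quoted above can also be checked with SciPy, where the symmetric/periodic distinction is exposed through the sym flag; this small sketch assumes only scipy.signal.windows:

```python
import numpy as np
from scipy.signal.windows import hann

sym9 = hann(9, sym=True)    # true symmetric window, single center point
per8 = hann(8, sym=False)   # "periodic" / DFT-even variant
print(np.allclose(sym9[:-1], per8))  # True: deleting the last sample converts one to the other
print(hann(8, sym=True))    # symmetric even-length window: two equal center points
```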
Such spectral plots are produced by sampling the DTFT at much smaller intervals than 1/N and displaying only the magnitude component of the complex numbers. Periodic summation An exact method to sample the DTFT of an N + 1-length sequence at intervals of 1/N is described at Discrete-time Fourier transform § Sampling the DTFT. Essentially, the last sample is combined with the first (by addition), and an N-point DFT is done on the truncated sequence. Similarly, spectral analysis would be done by combining the first and last data samples before applying the truncated symmetric window. That is not a common practice, even though truncated windows are very popular. Convolution The appeal of DFT-symmetric windows is explained by the popularity of the fast Fourier transform (FFT) algorithm for implementation of the DFT, because truncation of an odd-length sequence results in an even-length sequence. Their real-valued DFT coefficients are also an advantage in certain esoteric applications where windowing is achieved by means of convolution between the DFT coefficients and an unwindowed DFT of the data. In those applications, DFT-symmetric windows (even or odd length) from the cosine-sum family are preferred, because most of their DFT coefficients are zero-valued, making the convolution very efficient. Some window metrics When selecting an appropriate window function for an application, this comparison graph may be useful. The frequency axis has units of FFT "bins" when the window of length N is applied to data and a transform of length N is computed. For instance, the value at frequency "bin" k + ½ is the response that would be measured in bins k and k + 1 to a sinusoidal signal at frequency k + ½. It is relative to the maximum possible response, which occurs when the signal frequency is an integer number of bins. The value at frequency offset ½ is referred to as the maximum scalloping loss of the window, which is one metric used to compare windows. The rectangular window is noticeably worse than the others in terms of that metric. Other metrics that can be seen are the width of the main lobe and the peak level of the sidelobes, which respectively determine the ability to resolve comparable-strength signals and disparate-strength signals. The rectangular window (for instance) is the best choice for the former and the worst choice for the latter. What cannot be seen from the graphs is that the rectangular window has the best noise bandwidth, which makes it a good candidate for detecting low-level sinusoids in an otherwise white noise environment. Interpolation techniques, such as zero-padding and frequency-shifting, are available to mitigate its potential scalloping loss. See also Knife-edge effect, spatial analog of truncation Gibbs phenomenon Notes Page citations References Fourier analysis Digital signal processing Spectrum (physical sciences)
Spectral leakage
Physics
2,703
25,862,206
https://en.wikipedia.org/wiki/Crisis%20camp
A crisis camp is a BarCamp gathering of IT professionals, software developers, and computer programmers to aid in the relief efforts of a major crisis such as those caused by earthquakes, floods, or hurricanes. Projects that crisis camps often work on include setting up social networks for people to locate missing friends and relatives, creating maps of affected areas, and creating inventories of needed items such as food and clothing. Previous efforts of crisis camps reveal common themes such as the use of mobility, the use of the Internet as a common coordination platform, the requirement of volunteers, and the need for alternative community communication access areas. This initiative is reported to have a unique format that features free or nominal attendance fees as well as agendas created in real time by the participants. This format has also been referred to as an "unconference", which rejects one-size-fits-all presentations in favor of an innovative gathering with no predetermined speakers or sessions, as activities are led by the participants themselves. EdCamp, a user-generated gathering for educators, has been modeled after BarCamp. Following the 2010 Haiti earthquake, many crisis camps were set up around the world, often under the name "Crisis Camp Haiti", to help with the relief effort. Following the 2011 Tōhoku earthquake and tsunami, the Crisis Commons volunteer community was mobilized, and part of the effort was coordinated by Japanese students at U.S. universities. The first crisis camp was held in Washington, DC, on June 12–14, 2009. References External links CrisisCommons.org Technology in society Natural disasters Emergency organizations Unconferences
Crisis camp
Physics
327
1,421,479
https://en.wikipedia.org/wiki/Stage%20clothes
Stage clothes is a term for any clothes used by performers on stage. The term is sometimes used only for those clothes which are specially made for the stage performance by a costume designer or picked out by a costume coordinator. Theatrical costumes can help actors portray a character's age, gender role, profession, social class, and personality, as well as information about the historical period or era, geographic location, and time of day, and the season or weather of the theatrical performance. Stage clothes may be used to portray a historical look, or they can be used to exaggerate some aspect of a character.

Description
Any clothing used by performers (singers, actors, or dancers) on stage may be referred to as stage clothes. More specifically, the term is sometimes used only for those clothes which are specially made for the stage performance by a costume designer or picked out by a costume coordinator. However, many performers also pick out regular clothes and make them their "trademark look" on stage.

Use
In combination with other aspects, theatrical costumes can help actors portray a character's age, gender role, profession, social class, and personality, as well as information about the historical period or era, geographic location, and time of day, and the season or weather of the theatrical performance. Often, stylized theatrical costumes can exaggerate some aspect of a character; for example, Harlequin and Pantaloon in the traditional commedia dell'arte. In certain cases, duplicates of the same stage clothes are prepared for a production, such as when performing stunts involving bullet-hit squibs.

Usually, historical accuracy in costume is combined with a certain vision. The character the costumer is dressing is also an important consideration, and much of the time the costume is not exactly in line with the time period; for example, it may be brighter and more colorful, or duller, than period dress. A movie or stage production which emphasizes the use of correct clothes and settings for a specific time period is called a costume drama. Stage clothes often follow evolving fashion, but in a more extravagant way. Clothes worn by popular performers can often spark new fashions by themselves, as fans of performers want to look like their idols.

Gallery

References

External links
Fashion Plates of 18th, 19th and 20th Century Theater Costumes from The Metropolitan Museum of Art Libraries

Costume design
Performing arts
Stage clothes
Engineering
471
15,227,089
https://en.wikipedia.org/wiki/CD5L
CD5 antigen-like (also known as AIM, apoptosis inhibitor of macrophages) is a protein that in humans is encoded by the CD5L gene.

References

External links

Further reading
CD5L
Chemistry
41
1,777,155
https://en.wikipedia.org/wiki/SPI-4.2
SPI-4.2 is a version of the System Packet Interface published by the Optical Internetworking Forum. It was designed for systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet based systems.

SPI-4 is an interface for packet and cell transfer between a physical layer (PHY) device and a link layer device, for aggregate bandwidths of OC-192 Asynchronous Transfer Mode and Packet over SONET/SDH (POS), as well as 10 Gigabit Ethernet applications. SPI-4 has two types of transfers: data, when the RCTL signal is deasserted, and control, when the RCTL signal is asserted. The transmit and receive data paths comprise, respectively, (TDCLK, TDAT[15:0], TCTL) and (RDCLK, RDAT[15:0], RCTL). The transmit and receive FIFO status channels comprise (TSCLK, TSTAT[1:0]) and (RSCLK, RSTAT[1:0]) respectively.

A typical application of SPI-4.2 is to connect a framer device to a network processor. It has been widely adopted in the high-speed networking marketplace. The interface consists of (per direction):
sixteen LVDS pairs for the data path
one LVDS pair for control
one LVDS pair for clock at half of the data rate
two FIFO status lines running at 1/8 of the data rate
one status clock

The clocking is source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets.

PMC-Sierra made the original OIF contribution for SPI-4.2. That contribution was based on the PL-4 specification, which was developed by PMC-Sierra in conjunction with the SATURN Development Group. The physical layer of SPI-4.2 is very similar to that of the HyperTransport 1.x interface, although the logical layers are very different.

External links
OIF Interoperability Agreements

Network protocols
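As a rough illustration of the data/control distinction described above, the following Python sketch models a captured stream of 16-bit receive-path words. It is a simplified behavioral sketch under assumed conventions (the tuple format and function name are illustrative), not an implementation of the OIF specification:

```python
from typing import Iterable, List, Tuple

def split_transfers(samples: Iterable[Tuple[int, int]]) -> List[Tuple[str, int]]:
    """Classify words captured from the receive path.

    Each sample is (rctl, rdat). Per the text above, a word is a
    control word when RCTL is asserted (1) and payload data when
    RCTL is deasserted (0).
    """
    classified = []
    for rctl, rdat in samples:
        kind = "control" if rctl else "data"
        classified.append((kind, rdat & 0xFFFF))  # RDAT[15:0] is 16 bits wide
    return classified

# Toy capture: a control word framing two data words, then another control word.
print(split_transfers([(1, 0x1000), (0, 0xDEAD), (0, 0xBEEF), (1, 0x2000)]))
```

In real hardware these words are transferred source-synchronously over the LVDS pairs listed above; the sketch only demonstrates the RCTL-based framing rule.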
SPI-4.2
Technology
453
2,211,475
https://en.wikipedia.org/wiki/Anaerobic%20lagoon
An anaerobic lagoon or manure lagoon is a man-made outdoor earthen basin filled with animal waste that undergoes anaerobic respiration as part of a system designed to manage and treat refuse created by concentrated animal feeding operations (CAFOs). Anaerobic lagoons are created from a manure slurry, which is washed out from underneath the animal pens and then piped into the lagoon. Sometimes the slurry is placed in an intermediate holding tank under or next to the barns before it is deposited in a lagoon. Once in the lagoon, the manure settles into two layers: a solid or sludge layer and a liquid layer. The manure then undergoes the process of anaerobic respiration, whereby the volatile organic compounds are converted into carbon dioxide and methane. Anaerobic lagoons are usually used to pretreat high-strength industrial wastewaters and municipal wastewaters. This allows for preliminary sedimentation of suspended solids as a pretreatment process.

Anaerobic lagoons have been shown to harbor and emit substances which can cause adverse environmental and health effects. These substances are emitted through two main pathways: gas emissions and lagoon overflow. Gas emissions are continuous (though the amount may vary with the season) and are a product of the manure slurry. The most prevalent gases emitted by the lagoon are ammonia, hydrogen sulfide, methane, and carbon dioxide. Lagoon overflow is caused by faulty lagoons, such as breaches or improper construction, or by adverse weather conditions, such as increased rainfall or strong winds. These overflows release harmful substances into the surrounding land and water, such as antibiotics, estrogens, bacteria, pesticides, heavy metals, and protozoa.

In the U.S., the Environmental Protection Agency (EPA) has responded to environmental and health concerns by strengthening regulation of CAFOs under the Clean Water Act. Some states have imposed their own regulations as well. Because of repeated overflows and resultant health concerns, North Carolina banned the construction of new anaerobic lagoons in 1999. There has also been a significant push for the research, development, and implementation of environmentally sound technologies which would allow for safer containment and recycling of CAFO waste.

Background
Beginning in the 1950s with poultry production, and later in the 1970s and 1980s with cattle and swine, meat producers in the United States have turned to CAFOs as a way to more efficiently produce large quantities of meat. This switch has decreased the price of meat, but the increase in livestock has generated an increase in manure. In 2006, for example, livestock operations in the United States produced of manure. Unlike manure produced on a conventional farm, CAFO manure cannot all be used as direct fertilizer on agricultural land because of the poor quality of the manure. Moreover, CAFOs produce a high volume of manure: a feeding operation with 800,000 pigs could produce over of waste per year. The high quantity of manure produced by a CAFO must be dealt with in some way, as improper manure management can result in water, air, and soil damage. As a result, manure collection and disposal has become an increasing problem. In order to manage their waste, CAFOs have developed agricultural wastewater treatment plans. To save on manual labor, many CAFOs handle manure waste as a liquid.
In this system, the animals are kept in pens with grated floors so the waste and spray water can be drained from underfloor gutters and piped to storage tanks or anaerobic lagoons. Once at a lagoon, the purpose is to treat the waste and make it suitable for spreading on agricultural fields. There are three main types of lagoon: anaerobic, which is inhibited by oxygen; aerobic, which requires oxygen; and facultative, which is maintained with or without oxygen. Aerobic lagoons provide a higher degree of treatment with less odor production, though they require a significant amount of space and maintenance. Because of this demand, almost all livestock lagoons are anaerobic lagoons.

Design

Description
Anaerobic lagoons are earthen basins with a usual depth of , though greater depths are more beneficial to digestion as they minimize oxygen diffusion from the surface. To minimize leakage of animal waste into the groundwater, newer lagoons are generally lined with clay. Studies have shown that the lagoons in fact typically leak at a rate of approximately per day, with or without a clay liner, because it is the sludge deposited at the base of the lagoon that limits the leakage rate, not the clay liner or the underlying native soil. Anaerobic lagoons are not heated, aerated, or mixed. They are most effective in warmer temperatures; anaerobic bacteria are ineffective below . Lagoons must be separated from other structures by a certain distance to prevent contamination; states regulate this separation distance. The overall size of the lagoon is determined by adding four components: the minimum design volume, the volume of manure storage between periods of disposal, the dilution volume, and the volume of sludge accumulation between periods of sludge removal (see the sketch after this section).

Process
The lagoon is divided into two distinct layers: sludge and liquid. The sludge layer is a more solid layer formed by the stratification of sediments from the manure. Over time this solid layer accumulates and eventually needs to be cleaned out. The liquid layer is composed of grease, scum, and other particulates. CAFO wastewater enters at the bottom of the lagoon so that it can mix with the active microbial mass in the sludge layer. These anaerobic conditions are uniform throughout the lagoon, except in a small surface layer. Sometimes aeration is applied to this layer to dampen the odors emitted by the lagoon; if surface aeration is not applied, a crust will form that traps heat and odors. Anaerobic lagoons should retain and treat wastewater for 20 to 150 days, and should be followed by aerobic or facultative lagoons to provide further required treatment. The liquid layer is periodically drained and used as fertilizer. In some instances, a cover can be provided to trap methane, which is used for energy.

Anaerobic lagoons work through a process called anaerobic digestion. Decomposition of the organic matter begins shortly after the animals void. Lagoons become anaerobic because the feces contain a high level of soluble solids, resulting in a high biological oxygen demand (BOD). Anaerobic microorganisms convert organic compounds into carbon dioxide and methane through acid formation and methane production.
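As a minimal illustration of the four-component sizing rule just described, the following Python sketch simply totals the components; the function name, argument names, and sample numbers are assumptions for illustration, not values from the source, and all arguments must share a common unit:

```python
def lagoon_design_volume(minimum_design: float,
                         manure_storage: float,
                         dilution: float,
                         sludge_accumulation: float) -> float:
    """Total lagoon volume as the sum of the four design components
    named in the text (all arguments in the same unit, e.g. m^3)."""
    return minimum_design + manure_storage + dilution + sludge_accumulation

# Illustrative numbers only:
print(lagoon_design_volume(5_000, 3_200, 1_000, 800))  # 10000 m^3
```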
Advantages of construction
Manure can be easily manipulated with water using flushing systems, sewer lines, pumps, and irrigation systems
Stabilization of the waste through digestion minimizes odor when the manure is finally used as fertilizer
Manure can be stored long-term at a low cost
Manure is all in one area, instead of spread across a large area of land (this is called W.E.S., Waste Enlargement System)

Disadvantages of construction
Requires a relatively large area of land
Produces strong undesirable odors, especially during spring and fall
Takes a fairly long time for organic stabilization because of the slow rate of sludge digestion and the slow growth rate of methane formers
Manure used as fertilizer is of lower quality because of low nutrient availability
Wastewater seepage may occur if the tanks break or are improperly constructed
Weather and other environmental elements can strongly affect the safety and efficacy of anaerobic lagoons

Environmental and health impacts

Gas emissions
Rates of asthma in children living near a CAFO are consistently elevated. The process of anaerobic digestion has been shown to release over 400 volatile compounds from lagoons. The most prevalent of these are ammonia, hydrogen sulfide, methane, and carbon dioxide.

Ammonia
In the United States, 80 percent of ammonia emissions come from livestock production. A lagoon can vaporize up to 80 percent of its nitrogen through the reaction NH4+ → NH3 + H+. As pH or temperature increases, so does the amount of volatilized ammonia. Once ammonia has been volatilized, it can travel as far as 300 miles, and at closer ranges it is a respiratory irritant. Prolonged exposure to volatilized ammonia can cause acidification and eutrophication of the ecosystem surrounding the lagoons. Volatilized ammonia has been implicated in widespread ecological damage in Europe and is of growing concern for the United States.

Hydrogen sulfide
With averages greater than 30 ppb, lagoons have high concentrations of hydrogen sulfide, which is highly toxic. A study by the Minnesota Pollution Control Agency found that concentrations of hydrogen sulfide near lagoons have exceeded the state standard, even as far away as 4.9 miles. Hydrogen sulfide is recognizable by its unpleasant rotten-egg odor. Because hydrogen sulfide is heavier than air, it tends to linger around lagoons even after ventilation. Levels of hydrogen sulfide are at their highest after agitation and during manure removal.

Methane
Methane is an odorless, tasteless, and colorless gas. Lagoons produce about 2,300,000 tonnes of it per year, with around 40 percent of this mass coming from hog farm lagoons. Methane is combustible at high temperatures, and explosions and fires are a real threat at or near lagoons. Additionally, methane is a greenhouse gas. The U.S. EPA estimated that 13 percent of all methane emissions came from livestock manure in 1998, and this number has grown in recent years. Recently there has been interest in technology which would capture the methane produced by lagoons and sell it as energy.

Water-soluble contaminants
Contaminants that are water-soluble can escape from anaerobic lagoons and enter the environment through leakage from badly constructed or poorly maintained manure lagoons, as well as during excess rain or high winds, which can cause lagoons to overflow. These leaks and overflows can contaminate surrounding surface water and groundwater with hazardous materials contained in the lagoon.
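As a quantitative aside to the ammonia chemistry above (a standard acid–base equilibrium, not taken from the source): the fraction of total ammoniacal nitrogen present as volatile NH3 follows the Henderson–Hasselbalch relation

$$\frac{[\mathrm{NH_3}]}{[\mathrm{NH_3}] + [\mathrm{NH_4^+}]} = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}, \qquad \mathrm{p}K_a \approx 9.25 \text{ at } 25\,^{\circ}\mathrm{C},$$

so at pH 7 only about 0.6 percent of the nitrogen is in the volatile NH3 form, while at pH 9 roughly 36 percent is. This is consistent with the text's observation that higher pH increases ammonia volatilization; higher temperature acts in the same direction by lowering the pKa.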
The most serious of these contaminants are pathogens, antibiotics, heavy metals, and hormones. For example, runoff from farms in Maryland and North Carolina is a leading suspected source of Pfiesteria piscicida. This contaminant can kill fish, and it can also cause skin irritation and short-term memory loss in humans.

Pathogens
More than 150 pathogens in manure lagoons have been found to impact human health. Healthy individuals who come into contact with pathogens usually recover promptly. However, those who have a weakened immune system, such as cancer patients and young children, have an increased risk for a more severe illness or even death. About 20 percent of the U.S. population falls into this risk group. Some of the more notable pathogens are:

E. coli
E. coli is found in the intestines and feces of both animals and humans. One particularly virulent strain, Escherichia coli O157:H7, is found specifically in the gut of cattle raised in CAFOs. Because cattle in CAFOs are fed corn instead of grass, the pH of the gut changes so that it is more hospitable to E. coli. Grain-fed cattle have 80 percent more of this strain of E. coli than grass-fed cattle. However, the amount of E. coli found in grain-fed cattle can be significantly reduced by switching an animal to grass only a few days prior to slaughter. This reduction would decrease the pathogen's presence in both the meat and waste of the cattle, and decrease the E. coli population found in anaerobic lagoons.

Cryptosporidium
Cryptosporidium is a parasite that causes diarrhea, vomiting, stomach cramps, and fever. It is particularly problematic because it is resistant to most lagoon treatment regimens. In a study performed in Canada, 37 percent of swine liquid-manure samples contained Cryptosporidium.

Other common pathogens
Other common pathogens (and their symptoms) include:
Bacillus anthracis, otherwise known as anthrax (skin sores, headache, fever, chills, nausea, vomiting)
Leptospira pomona (abdominal pain, muscle pain, vomiting, fever)
Listeria monocytogenes (fever, fatigue, nausea, vomiting, diarrhea)
Salmonella (abdominal pain, diarrhea, nausea, chills, fever, headache)
Clostridium tetani (violent muscle spasms, lockjaw, difficulty breathing)
Histoplasma capsulatum (fever, chills, muscle ache, cough, rash, joint pain and stiffness)
Microsporum and Trichophyton, ringworm (itching, rash)
Giardia lamblia (abdominal pain, abdominal gas, nausea, vomiting, fever)
Cryptosporidium (diarrhea, dehydration, weakness, abdominal cramping)
Pfiesteria piscicida (neurological damage)

Antibiotics
Antibiotics are fed to livestock to prevent disease and to increase weight and development, shortening the time from birth to slaughter. However, because these antibiotics are administered at sub-therapeutic levels, bacterial colonies can build up resistance to the drugs through the natural selection of bacteria resistant to these antibiotics. These antibiotic-resistant bacteria are then excreted and transferred to the lagoons, where they can infect humans and other animals. Each year, 24.6 million pounds of antimicrobials are administered to livestock for non-therapeutic purposes. Seventy percent of all antibiotics and related drugs are given to animals as feed additives. Nearly half of the antibiotics used are nearly identical to ones given to humans.
There is strong evidence that the use of antibiotics in animal feed is contributing to an increase in antibiotic-resistant microbes and causing antibiotics to be less effective for humans. Due to concerns over antibiotic-resistant bacteria, the American Medical Association passed a resolution stating its opposition to the use of sub-therapeutic levels of antimicrobials in livestock.

Hormones
Growth hormones such as rBST, estrogen, and testosterone are administered to increase development rate and muscle mass in livestock. Yet only a fraction of these hormones are actually absorbed by the animal; the rest are excreted and wind up in lagoons. Studies have shown that these hormones, if they escape the lagoon into the surrounding surface water, can alter the fertility and reproductive habits of aquatic animals. One study found that several lagoons and monitoring wells from two facilities (a nursery and a farrowing sow operation) contained high levels of all three types of estrogen. For the nursery, lagoon effluent concentrations ranged from 390 to 620 ng/L for estrone, 180 to 220 ng/L for estriol, and 40 to 50 ng/L for estradiol. For the farrowing sow operation, digester and primary lagoon effluent concentrations ranged from 9,600 to 24,900 ng/L for estrone, 5,000 to 10,400 ng/L for estriol, and 2,200 to 3,000 ng/L for estradiol. Ethinylestradiol was not detected in any of the lagoon or ground water samples. Natural estrogen concentrations in ground water samples were generally less than 0.4 ng/L, although a few wells at the nursery operation showed quantifiable but low levels.

Heavy metals
Manure contains trace elements of many heavy metals such as arsenic, cadmium, copper, iron, lead, manganese, molybdenum, nickel, and zinc. Sometimes these metals are given to animals as growth stimulants, some are introduced through pesticides used to rid livestock of insects, and some might pass through the animals as undigested food. Trace elements of these metals and salts from animal manure present risks to human health and ecosystems.

New River Spill
In 1999, Hurricane Floyd hit North Carolina, flooding hog waste lagoons, releasing 25 million gallons of manure into the New River and contaminating the water supply. Ronnie Kennedy, county director for environmental health, said that of 310 private wells he had tested for contamination since the storm, 9 percent, or three times the average across eastern North Carolina, had fecal coliform bacteria. Normally, tests showing any hint of feces in drinking water, an indication that it can be carrying disease-causing pathogens, are cause for immediate action.

Regulation
Anaerobic lagoons are built as part of a wastewater operation system; as such, compliance and permitting are handled as an extension of that operation. Manure lagoons are therefore regulated at the state and national level through the CAFO which operates them. In recent years, because of the environmental and health effects associated with anaerobic lagoons, the EPA has increased regulation of CAFOs with a specific eye towards lagoons. North Carolina banned the construction of new anaerobic lagoons in 1999 and upheld that ban in 2007.

Further research
Some research has been done to develop and assess the economic feasibility of more environmentally superior technologies.
Five main alternatives which have been implemented in North Carolina are: a solids separation/nitrification–denitrification/soluble phosphorus removal system; a thermophilic anaerobic digester system; a centralized composting system; a gasification system; and a fluidized-bed combustion system. These systems were judged on their ability to reduce the impact of CAFO waste on surface water and groundwater, decrease ammonia emissions, decrease the escape of disease-transmitting pathogens, and lower the concentration of heavy-metal contamination.

The U.S. Department of Agriculture (USDA) has also evaluated the prospect of creating a cap-and-trade program for CAFOs' carbon dioxide and nitrous oxide emissions. This program has yet to be implemented; however, the USDA speculates that such a program would encourage corporations to adopt EST (environmentally superior technology) practices. A comprehensive study of anaerobic swine lagoons nationwide has been launched by the U.S. Agricultural Research Service. This study aims to explore the composition of lagoons and anaerobic lagoon influence on environmental factors and agronomic practices.

See also
Agricultural wastewater treatment
Anaerobic digestion
Aerated lagoon
Factory farming
List of waste water treatment technologies
Sewage treatment

References

External links

Waste management
Sewerage
Anaerobic lagoon
Chemistry,Engineering,Environmental_science
3,779