59,442,749
https://en.wikipedia.org/wiki/NGC%205965
NGC 5965 is a spiral galaxy located in the constellation Draco. It lies at a distance of approximately 150 million light-years from Earth, which, given its apparent dimensions, means that NGC 5965 is about 260,000 light-years across. It was discovered by William Herschel on May 5, 1788. Two supernovae have been observed in NGC 5965: SN 2001cm (type II, mag. 17.5) and SN 2018cyg (type II, mag. 17). NGC 5965 is seen nearly edge-on, with an inclination of 80 degrees. Dust is visible across the galactic disk, and there is also a red dust lane at the nucleus. The bulge is X-shaped, which suggests that the galaxy is actually barred. NGC 5965, along with another edge-on galaxy, NGC 5746, was one of the galaxies used to confirm that peanut-shaped bulges are associated with the presence of a bar, by spectrographically observing the disturbance in the velocity distributions of the galaxies. The galaxy shows some disk disturbance, such as a warp, as the outer part of the disk and a ring-like dust lane appear to lie on a different plane from the bulge, although this could also be a projection effect. When observed in the K band, the galaxy features a stellar ring. NGC 5965 lies in a galaxy filament which also includes NGC 5987 and its loose group, which contains NGC 5981, NGC 5982, and NGC 5985, three galaxies known as the Sampler.
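The quoted diameter follows from the small-angle relation between distance and apparent size. Below is a minimal Python sketch of that calculation; the apparent major axis of roughly 6 arcminutes is an assumed value inferred from the article's own figures, not stated in the text.

import math

# Assumed inputs (the apparent size is a hypothetical value chosen so the
# result matches the article's quoted figures, not a catalogued measurement).
distance_ly = 150e6                 # distance in light-years
apparent_major_axis_arcmin = 6.0    # assumed apparent major axis

# Small-angle approximation: physical size = distance * angle (in radians).
angle_rad = math.radians(apparent_major_axis_arcmin / 60.0)
size_ly = distance_ly * angle_rad

print(f"Estimated diameter: {size_ly:,.0f} light-years")  # ~260,000 ly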
NGC 5965
[ "Astronomy" ]
348
[ "Constellations", "Draco (constellation)" ]
59,442,969
https://en.wikipedia.org/wiki/Methylhippuric%20acid
Methylhippuric acid is a carboxylic acid and organic compound with three isomers: 2-, 3-, and 4-methylhippuric acid. The methylhippuric acids are metabolites of the corresponding isomers of xylene, and the presence of methylhippuric acid can therefore be used as a biomarker to determine exposure to xylene.
Methylhippuric acid
[ "Chemistry" ]
108
[ "Carboxylic acids", "Functional groups" ]
59,446,763
https://en.wikipedia.org/wiki/C5H9ClO
The molecular formula C5H9ClO (molar mass: 120.58 g/mol) may refer to: Pentanoyl chloride, an acyl chloride derived from pentanoic acid; or Pivaloyl chloride, a branched-chain acyl chloride.
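The stated molar mass can be checked directly from standard atomic weights. A minimal Python sketch (the atomic weights used are standard values, rounded):

# Check the stated molar mass of C5H9ClO from standard atomic weights.
atomic_weight = {"C": 12.011, "H": 1.008, "Cl": 35.45, "O": 15.999}
formula = {"C": 5, "H": 9, "Cl": 1, "O": 1}

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # ~120.58 g/mol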
C5H9ClO
[ "Chemistry" ]
75
[ "Isomerism", "Set index articles on molecular formulas" ]
59,448,183
https://en.wikipedia.org/wiki/Convolutional%20sparse%20coding
The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity constraint describes a signal $x$ as a linear combination of a few atoms in the redundant dictionary $D$, usually expressed as $x = D\Gamma$ for a sparse vector $\Gamma$, the alternative dictionary structure adopted by the convolutional sparse coding model allows the sparsity prior to be applied locally instead of globally: independent patches of $x$ are generated by "local" dictionaries operating over stripes of $\Gamma$. The local sparsity constraint allows stronger uniqueness and stability conditions than the global sparsity prior, and has been shown to be a versatile tool for inverse problems in fields such as image understanding and computer vision. Also, a recently proposed multi-layer extension of the model has shown conceptual benefits for more complex signal decompositions, as well as a tight connection to the convolutional neural network model, allowing a deeper understanding of how the latter operates. Overview Given a signal of interest $x$ and a redundant dictionary $D$, the sparse coding problem consists of retrieving a sparse vector $\Gamma$, denominated the sparse representation of $x$, such that $x \approx D\Gamma$. Intuitively, this implies that $x$ is expressed as a linear combination of a small number of elements in $D$. The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding. It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred. As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions. Interestingly, by imposing a local sparsity prior on $\Gamma$, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in $D$ can be understood as a "local" dictionary operating over each independent patch. This model extension is denominated convolutional sparse coding (CSC); it drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows $\Gamma$ to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), operating in a local fashion. Besides its versatility in inverse problems, recent efforts have focused on the multi-layer version of the model and provided evidence of its reliability for recovering multiple underlying representations. Moreover, a tight connection between such a model and the well-established convolutional neural network (CNN) model was revealed, providing a new tool for a more rigorous understanding of its theoretical conditions. The convolutional sparse coding model provides a very efficient set of tools to solve a wide range of inverse problems, including image denoising, image inpainting, and image super-resolution. By imposing local sparsity constraints, it makes it possible to tackle the global coding problem by iteratively estimating disjoint patches and assembling them into a global signal.
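To make the dictionary structure above concrete, the following Python sketch assembles a global convolutional dictionary as a concatenation of circulant matrices, one per local filter, and synthesizes a signal from a sparse code. Filter values and sizes are arbitrary illustrative choices, not taken from the literature discussed here.

import numpy as np

def circulant_from_filter(filt, n):
    # Build an n x n circulant matrix whose columns are cyclic shifts of filt.
    col = np.zeros(n)
    col[:len(filt)] = filt
    return np.column_stack([np.roll(col, k) for k in range(n)])

n = 8                                    # signal length (illustrative)
filters = [np.array([1.0, -1.0]),        # two arbitrary local filters
           np.array([0.5, 1.0, 0.5])]

# Global convolutional dictionary: a concatenation of circulant matrices.
D = np.hstack([circulant_from_filter(f, n) for f in filters])  # shape (8, 16)

# A signal synthesized from a sparse code: x = D @ gamma with few nonzeros.
gamma = np.zeros(D.shape[1])
gamma[[2, 11]] = [3.0, -2.0]
x = D @ gamma
print(D.shape, np.count_nonzero(gamma))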
Furthermore, by adopting a multi-layer sparse model, which results from imposing the sparsity constraint on the signal's inherent representations themselves, the resulting "layered" pursuit algorithm keeps the strong uniqueness and stability conditions from the single-layer model. This extension also provides some interesting notions about the relation between its sparsity prior and the forward pass of the convolutional neural network, which helps explain how the theoretical benefits of the CSC model can give a strong mathematical meaning to the CNN structure. Sparse coding paradigm Basic concepts and models are presented to explain in detail the convolutional sparse representation framework. Since the sparsity constraint has been proposed under different models, a short description of them is presented to show its evolution up to the model of interest. Also included are the concepts of mutual coherence and the restricted isometry property, used to establish uniqueness and stability guarantees. Global sparse coding model Let a signal $x \in \mathbb{R}^{N}$ be expressed as a linear combination of a small number of atoms from a given dictionary $D \in \mathbb{R}^{N \times M}$. Alternatively, the signal can be expressed as $x = D\Gamma$, where $\Gamma \in \mathbb{R}^{M}$ corresponds to the sparse representation of $x$, which selects the atoms to combine and their weights. Subsequently, given $D$, the task of recovering $\Gamma$ from either the noise-free signal itself or an observation is denominated sparse coding. Considering the noise-free scenario, the coding problem is formulated as follows: $\min_{\Gamma} \|\Gamma\|_{0}$ s.t. $x = D\Gamma$. The effect of the $\ell_{0}$ norm is to favor solutions with as many zero elements as possible. Furthermore, given an observation affected by bounded-energy noise, $y = x + E$ with $\|E\|_{2} \leq \epsilon$, the pursuit problem is reformulated as: $\min_{\Gamma} \|\Gamma\|_{0}$ s.t. $\|y - D\Gamma\|_{2} \leq \epsilon$. Stability and uniqueness guarantees for the global sparse model Let the spark of $D$ be defined as the minimum number of linearly dependent columns: $\text{spark}(D) = \min_{v \neq 0} \|v\|_{0}$ s.t. $Dv = 0$. Then, from the triangle inequality, a solution satisfying $\|\Gamma\|_{0} < \frac{1}{2}\,\text{spark}(D)$ is necessarily the sparsest one, and unique. Although the spark provides such a guarantee, it is unfeasible to compute in practical scenarios. Instead, let the mutual coherence be a measure of similarity between atoms in $D$. Assuming $\ell_{2}$-norm unit atoms, the mutual coherence of $D$ is defined as: $\mu(D) = \max_{i \neq j} |d_{i}^{\top} d_{j}|$, where $d_{i}, d_{j}$ are atoms. Based on this metric, it can be proven that the true sparse representation can be recovered if $\|\Gamma\|_{0} < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$. Similarly, under the presence of noise, an upper bound for the distance between the true sparse representation and its estimation can be established via the restricted isometry property (RIP). A $k$-RIP matrix $D$ with constant $\delta_{k}$ satisfies: $(1 - \delta_{k})\|v\|_{2}^{2} \leq \|Dv\|_{2}^{2} \leq (1 + \delta_{k})\|v\|_{2}^{2}$ for every $k$-sparse vector $v$, where $\delta_{k}$ is the smallest number that satisfies the inequality. Then, assuming $\|\Gamma\|_{0} \leq k$, it is guaranteed that $\|\Gamma - \hat{\Gamma}\|_{2}^{2} \leq \frac{4\epsilon^{2}}{1 - \delta_{2k}}$. Solving such a general pursuit problem is a hard task if no structure is imposed on the dictionary $D$. This implies learning large, highly overcomplete representations, which is extremely expensive. Assuming such a burden has been met and a representative dictionary has been obtained for a given signal, typically based on prior information, $\Gamma$ can be estimated via several pursuit algorithms. Pursuit algorithms for the global sparse model Two basic methods for solving the global sparse coding problem are orthogonal matching pursuit (OMP) and basis pursuit (BP), as sketched below. OMP is a greedy algorithm that iteratively selects the atom best correlated with the residual between $x$ and the current estimate, followed by a projection onto the subset of pre-selected atoms. Basis pursuit, on the other hand, is a more sophisticated approach that replaces the original $\ell_{0}$ coding problem by an $\ell_{1}$ relaxation solvable via linear programming.
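A minimal Python sketch of the greedy OMP scheme just described: at each step, the atom most correlated with the current residual is selected, followed by a least-squares projection onto the selected support. The function and variable names, the fixed sparsity budget k, and the demo dictionary are illustrative assumptions, not a specific library's API.

import numpy as np

def omp(D, x, k):
    # Greedy OMP: select the atom most correlated with the residual, then
    # re-fit the coefficients on the selected support by least squares.
    residual = x.copy()
    support = []
    gamma = np.zeros(D.shape[1])
    coeffs = np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs        # update the residual
    gamma[support] = coeffs
    return gamma

# Demo with a random normalized dictionary and a 2-sparse ground truth.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
truth = np.zeros(50)
truth[[3, 17]] = [1.5, -2.0]
print(np.nonzero(omp(D, D @ truth, k=2))[0])         # expected: [ 3 17]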
Based on these algorithms, the global sparse coding model provides considerably loose bounds for the uniqueness and stability of $\Gamma$. To overcome this, additional priors are imposed on $\Gamma$ to guarantee tighter bounds and uniqueness conditions. The reader is referred to (, section 2) for details regarding these properties. Convolutional sparse coding model A local prior is adopted such that each overlapping section of $\Gamma$ is sparse. Let the global dictionary $D$ be constructed from shifted versions of a local dictionary $D_{L}$. Then, $x$ is formed by products between $D_{L}$ and local patches of $\Gamma$. From the latter, $\Gamma$ can be re-expressed in terms of disjoint sparse vectors $\{\gamma_{i}\}$: $\Gamma = [\gamma_{1}^{\top}, \gamma_{2}^{\top}, \ldots]^{\top}$. Similarly, a stripe is a set of consecutive vectors $\gamma_{i}$. Each patch extracted from the signal can then be written as $R_{i}x = R_{i}D\Gamma$, where the operator $R_{i}$ extracts an overlapping patch of size $n$ starting at index $i$. Thus, $R_{i}D$ contains only a few nonzero columns. Hence, by introducing an operator $S_{i}$ which exclusively preserves them, $R_{i}x = \Omega\, S_{i}\Gamma$, where $\Omega$ is known as the stripe dictionary, which is independent of $i$, and $\gamma_{s_i} = S_{i}\Gamma$ is denominated the $i$-th stripe. The model also admits a patch-aggregation, or convolutional, interpretation: $x = \sum_{j=1}^{m} d_{j} * z_{j}$, where $d_{j}$ corresponds to the $j$-th atom of the local dictionary and the coefficient map $z_{j}$ is constructed from the corresponding elements of the patch codes. Given the new dictionary structure, let the $\ell_{0,\infty}$ pseudo-norm be defined as: $\|\Gamma\|_{0,\infty} = \max_{i} \|\gamma_{s_i}\|_{0}$. Then, for the noise-free and noise-corrupted scenarios, the problem can be respectively reformulated as: $\min_{\Gamma} \|\Gamma\|_{0,\infty}$ s.t. $x = D\Gamma$, and $\min_{\Gamma} \|\Gamma\|_{0,\infty}$ s.t. $\|y - D\Gamma\|_{2} \leq \epsilon$. Stability and uniqueness guarantees for the convolutional sparse model For the local approach, an analogous coherence-based bound holds: if a solution obeys $\|\Gamma\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$, then it is the sparsest solution to the problem. Thus, under the local formulation, the same number of non-zeros is permitted for each stripe instead of for the full vector. Similar to the global model, the CSC problem is solved via OMP and BP methods, the latter contemplating the use of the iterative shrinkage-thresholding algorithm (ISTA) for splitting the pursuit into smaller problems. Based on the pseudo-norm, if a solution exists satisfying $\|\Gamma\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$, then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the global prior. Stability conditions for OMP and BP are also guaranteed if the exact recovery condition (ERC) is met for a support $\mathcal{T}$ with a constant $\theta$. The ERC is defined as: $\theta = 1 - \max_{j \notin \mathcal{T}} \|D_{\mathcal{T}}^{\dagger} d_{j}\|_{1} > 0$, where $\dagger$ denotes the pseudo-inverse. Algorithm 1 shows the global pursuit method based on ISTA; a compact code sketch of the underlying iteration follows. Algorithm 1: 1D CSC via local iterative soft-thresholding. Input: $D_{L}$ (local dictionary), $y$ (observation), $\lambda$ (regularization parameter), $c$ (step size for ISTA), tol (tolerance factor), maxiters (maximum number of iterations). Initialize the disjoint patch codes $\gamma_{i} \leftarrow 0$ and the residual patches $r_{i} \leftarrow y_{i}$. Repeat: (coding along disjoint patches) $\gamma_{i} \leftarrow \mathcal{S}_{\lambda c}(\gamma_{i} + c\, D_{L}^{\top} r_{i})$; (patch aggregation) assemble the global estimate from the local reconstructions $D_{L}\gamma_{i}$; (update residuals) recompute each $r_{i}$ from the difference between $y$ and the aggregated estimate. Until the change is below tol or maxiters is reached.
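Algorithm 1 is built on the iterative soft-thresholding step. The following Python sketch shows that core iteration for the basis pursuit objective; the patch-wise bookkeeping of Algorithm 1 (disjoint-patch coding and aggregation) is omitted for brevity, and the step size and regularization values are illustrative assumptions.

import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, step, iters=500, tol=1e-8):
    # ISTA for min_G 0.5 * ||y - D G||_2^2 + lam * ||G||_1.
    # `step` should be below 1/L, with L the largest eigenvalue of D^T D.
    gamma = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ gamma - y)                       # data-term gradient
        new = soft_threshold(gamma - step * grad, step * lam)
        if np.linalg.norm(new - gamma) < tol:              # stop on stagnation
            gamma = new
            break
        gamma = new
    return gamma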
Multi-layered convolutional sparse coding model By imposing the sparsity prior on the inherent structure of $x$, strong conditions for a unique representation, and feasible methods for estimating it, are granted. Similarly, such a constraint can be applied to the representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries. Based on these criteria, yet another extension, denominated multi-layer convolutional sparse coding (ML-CSC), is proposed. A set of analytical dictionaries can be efficiently designed, where sparse representations at each layer are guaranteed by imposing the sparsity prior over the dictionaries themselves. In other words, by considering the dictionaries to be stride convolutional matrices, i.e. the atoms of the local dictionaries shift by several elements instead of a single one, with the stride corresponding to the number of channels in the previous layer, it is guaranteed that the $\ell_{0,\infty}$ norm of the representations along the layers is bounded. For example, given the dictionaries $\{D_{i}\}_{i=1}^{K}$, the signal is modeled as $x = D_{1}\Gamma_{1}$, where $\Gamma_{1}$ is its sparse code, and $\Gamma_{1} = D_{2}\Gamma_{2}$, where $\Gamma_{2}$ is the sparse code of $\Gamma_{1}$, and so on. Then, the estimation of each representation is formulated as an optimization problem for both the noise-free and noise-corrupted scenarios, respectively. Assuming $y = x + E$ with $\|E\|_{2} \leq \epsilon_{0}$: find $\{\Gamma_{i}\}_{i=1}^{K}$ such that $x = D_{1}\Gamma_{1}$, $\Gamma_{i-1} = D_{i}\Gamma_{i}$ and $\|\Gamma_{i}\|_{0,\infty} \leq \lambda_{i}$ for all $i$; and, respectively, such that $\|y - D_{1}\Gamma_{1}\|_{2} \leq \epsilon_{0}$, $\Gamma_{i-1} = D_{i}\Gamma_{i}$ and $\|\Gamma_{i}\|_{0,\infty} \leq \lambda_{i}$. In what follows, theoretical guarantees for the uniqueness and stability of this extended model are described. Theorem 1: (Uniqueness of sparse representations) Consider a signal $x$ satisfying the ML-CSC model for a set of convolutional dictionaries $\{D_{i}\}_{i=1}^{K}$ with mutual coherences $\{\mu(D_{i})\}_{i=1}^{K}$. If the true sparse representations satisfy $\|\Gamma_{i}\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D_{i})}\right)$ for all $i$, then they are the unique solution to the problem, provided the thresholds are chosen to satisfy $\lambda_{i} = \|\Gamma_{i}\|_{0,\infty}$. Theorem 2: (Global stability of the noise-corrupted scenario) Consider a signal $x$ satisfying the ML-CSC model for a set of convolutional dictionaries $\{D_{i}\}_{i=1}^{K}$, contaminated with noise $E$, where $\|E\|_{2} \leq \epsilon_{0}$, resulting in $y = x + E$. If $\|\Gamma_{i}\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D_{i})}\right)$ for all $i$, then the estimated representations satisfy $\|\Gamma_{i} - \hat{\Gamma}_{i}\|_{2}^{2} \leq \epsilon_{i}^{2}$, where the error level $\epsilon_{i}$ depends on $\epsilon_{0}$ and grows with the layer depth. Projection-based algorithms A simple approach for solving the ML-CSC problem, via either the $\ell_{0}$ or the $\ell_{1}$ norm, is to compute inner products between $y$ and the dictionary atoms to identify the most representative ones. Such a projection is described as: $\hat{\Gamma} = \arg\min_{\Gamma} \frac{1}{2}\|\Gamma - D^{\top}y\|_{2}^{2} + \beta\|\Gamma\|_{p}$, $p \in \{0, 1\}$, which has closed-form solutions via the hard-thresholding and soft-thresholding operators, respectively. If a nonnegativity constraint is also contemplated, the problem can be expressed via the $\ell_{1}$ norm as: $\hat{\Gamma} = \arg\min_{\Gamma \geq 0} \frac{1}{2}\|\Gamma - D^{\top}y\|_{2}^{2} + \beta\|\Gamma\|_{1}$, whose closed-form solution corresponds to the soft nonnegative thresholding operator $\mathcal{S}_{\beta}^{+}(z) = \max(z - \beta, 0)$. Guarantees for the layered soft-thresholding approach are included in the Appendix (Section 6.2). Theorem 3: (Stable recovery of the multi-layered soft-thresholding algorithm) Consider a signal $x$ satisfying the ML-CSC model for a set of convolutional dictionaries $\{D_{i}\}_{i=1}^{K}$ with mutual coherences $\{\mu(D_{i})\}_{i=1}^{K}$, contaminated with noise $E$, where $\|E\|_{2} \leq \epsilon_{0}$, resulting in $y = x + E$. Denote by $|\Gamma_{i}^{\min}|$ and $|\Gamma_{i}^{\max}|$ the lowest and highest entries in absolute value of $\Gamma_{i}$. Let $\{\hat{\Gamma}_{i}\}_{i=1}^{K}$ be the estimated sparse representations obtained for $y$ by layered soft-thresholding. If the sparsity of every layer is low enough relative to $\mu(D_{i})$ and to the ratio $|\Gamma_{i}^{\min}|/|\Gamma_{i}^{\max}|$, and the thresholds $\beta_{i}$ are chosen accordingly, then $\hat{\Gamma}_{i}$ has the same support as $\Gamma_{i}$, and $\|\Gamma_{i} - \hat{\Gamma}_{i}\|_{2} \leq \epsilon_{i}$ for layer-dependent error levels $\epsilon_{i}$. Connections to convolutional neural networks Recall the forward pass of the convolutional neural network model, used in both the training and inference steps. Let $y$ be its input and $W_{i}$ the filters at layer $i$, which are followed by the rectified linear unit $\text{ReLU}(z) = \max(z, 0)$ applied after adding a bias $b_{i}$. Based on this elementary block, and taking two layers as an example, the CNN output can be expressed as: $f(y) = \text{ReLU}\big(b_{2} + W_{2}^{\top}\,\text{ReLU}(b_{1} + W_{1}^{\top} y)\big)$. Finally, comparing the CNN forward pass and the layered thresholding approach under the nonnegativity constraint, it is straightforward to show that both are equivalent, since $\text{ReLU}(b_{i} + W_{i}^{\top} z) = \mathcal{S}_{-b_{i}}^{+}(W_{i}^{\top} z)$; the forward pass is thus a layered soft nonnegative thresholding pursuit with $D_{i} = W_{i}$ and $\beta_{i} = -b_{i}$, as the code sketch below verifies. As explained in what follows, this naive approach to solving the coding problem is a particular case of a more stable projected gradient descent algorithm for the ML-CSC model. Equipped with the stability conditions of both approaches, one gains a clearer understanding of the class of signals a CNN can recover, of the noise conditions under which an estimation can be accurately attained, and of how the CNN structure can be modified to improve its theoretical guarantees. The reader is referred to (, section 5) for details regarding their connection. Pursuit algorithms for the multi-layer CSC model A crucial limitation of the forward pass is that it is unable to recover the unique solution of the deep coding problem (DCP), whose existence has been demonstrated.
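The equivalence between the CNN forward pass and layered soft nonnegative thresholding can be verified numerically. The Python sketch below uses fully connected rather than convolutional weight matrices for brevity; the weights, biases, and dimensions are random illustrative assumptions.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def soft_nonneg_threshold(z, beta):
    # Nonnegative soft-thresholding operator: S+_beta(z) = max(z - beta, 0).
    return np.maximum(z - beta, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
W1, W2 = rng.standard_normal((16, 32)), rng.standard_normal((32, 64))
b1, b2 = rng.standard_normal(32), rng.standard_normal(64)

# Two-layer CNN-style forward pass (dense here for simplicity; with
# convolutional W_i this is exactly the forward pass discussed above).
cnn_out = relu(W2.T @ relu(W1.T @ x + b1) + b2)

# Layered nonnegative soft-thresholding with thresholds beta_i = -b_i.
lt_out = soft_nonneg_threshold(W2.T @ soft_nonneg_threshold(W1.T @ x, -b1), -b2)

print(np.allclose(cnn_out, lt_out))  # True: the two computations coincide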
So, instead of using a thresholding approach at each layer, a full pursuit method is adopted, denominated layered basis pursuit (LBP). Relaxing each layer's sparsity constraint by the $\ell_{1}$ norm, the following problem is proposed: $\hat{\Gamma}_{i} = \arg\min_{\Gamma_{i}} \frac{1}{2}\|\hat{\Gamma}_{i-1} - D_{i}\Gamma_{i}\|_{2}^{2} + \xi_{i}\|\Gamma_{i}\|_{1}$ (with $\hat{\Gamma}_{0} = y$), where each layer is solved as an independent CSC problem, and $\xi_{i}$ is proportional to the noise level at each layer. Among the methods for solving the layered coding problem, ISTA is an efficient decoupling alternative. In what follows, a short summary of the guarantees for the LBP is established. Theorem 4: (Recovery guarantee) Consider a signal $x$ characterized by a set of sparse vectors $\{\Gamma_{i}\}_{i=1}^{K}$, convolutional dictionaries $\{D_{i}\}_{i=1}^{K}$ and their corresponding mutual coherences $\{\mu(D_{i})\}_{i=1}^{K}$. If $\|\Gamma_{i}\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D_{i})}\right)$ for all $i$, then the LBP algorithm is guaranteed to recover the sparse representations. Theorem 5: (Stability in the presence of noise) Consider the contaminated signal $y = x + E$, where $\|E\|_{2} \leq \epsilon_{0}$ and $x$ is characterized by a set of sparse vectors $\{\Gamma_{i}\}_{i=1}^{K}$ and convolutional dictionaries $\{D_{i}\}_{i=1}^{K}$. Let $\{\hat{\Gamma}_{i}\}_{i=1}^{K}$ be the solutions obtained via the LBP algorithm with parameters $\{\xi_{i}\}_{i=1}^{K}$. If the sparsity of every layer is low enough relative to $\mu(D_{i})$ and the $\xi_{i}$ are chosen in proportion to the layer-wise noise levels, then: (i) the support of the solution $\hat{\Gamma}_{i}$ is contained in that of $\Gamma_{i}$, (ii) the error $\|\Gamma_{i} - \hat{\Gamma}_{i}\|_{2}$ is bounded by a layer-dependent level $\epsilon_{i}$, and (iii) any entry of $\Gamma_{i}$ greater in absolute value than the corresponding error level is guaranteed to be recovered. Applications of the convolutional sparse coding model: image inpainting As a practical example, an efficient image inpainting method for color images via the CSC model is shown. Consider a three-channel dictionary $D$, where $d_{m}^{(c)}$ denotes the $m$-th atom at channel $c$; it represents the signal by a single cross-channel sparse representation $\Gamma$, with stripes denoted $\gamma_{i}$. Given an observation $y$, where randomly chosen channels at unknown pixel locations are fixed to zero, similarly to impulse noise, the problem is formulated as: $\min_{\Gamma} \frac{1}{2}\|y - D\Gamma\|_{2}^{2} + \lambda\|\Gamma\|_{2,1}$. By means of ADMM, the cost function is decoupled into simpler sub-problems, allowing an efficient estimation. Algorithm 2 describes the procedure, where $\hat{D}$ is the DFT representation of $D$, the convolutional matrix for the term $d_{m} * \gamma_{m}$. Likewise, $\hat{x}$ and $\hat{\gamma}$ correspond to the DFT representations of $x$ and $\gamma$, respectively, $\mathcal{S}_{\lambda/\rho}$ corresponds to the soft-thresholding function with argument $\lambda/\rho$, and the $\ell_{2,1}$ norm is defined as the $\ell_{2}$ norm along the channel dimension followed by the $\ell_{1}$ norm along the spatial dimension (its proximal operator is sketched below). The reader is referred to (, Section II) for details on the ADMM implementation and the dictionary learning procedure. Algorithm 2: Color image inpainting via the convolutional sparse coding model. Input: the DFTs of the convolutional matrices, the color observation $y$, the regularization parameter $\lambda$, the step sizes for ADMM, a tolerance factor tol, and a maximum number of iterations maxiters. Repeat the ADMM updates (frequency-domain coding, auxiliary-variable thresholding, and dual updates) until the change falls below tol or maxiters is reached.
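The $\ell_{2,1}$ penalty in the inpainting formulation admits a closed-form proximal operator, a group soft-threshold applied channel-wise; this is the kind of building block used inside ADMM iterations such as Algorithm 2. A minimal Python sketch, with illustrative shapes and regularization value:

import numpy as np

def prox_l21(G, t, axis=0):
    # Proximal operator of t * ||G||_{2,1}: l2 norm along `axis` (channels),
    # then soft-thresholding of the resulting group norms (l1 spatially).
    norms = np.linalg.norm(G, axis=axis, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return G * scale

# Example: 3 channels x 10 spatial coefficients, illustrative values.
rng = np.random.default_rng(2)
G = rng.standard_normal((3, 10))
print(prox_l21(G, t=0.5).round(2))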
Convolutional sparse coding
[ "Mathematics" ]
3,308
[ "Discrete mathematics", "Coding theory" ]
59,448,827
https://en.wikipedia.org/wiki/Isodiazomethane
In organic chemistry, isodiazomethane, also known as isocyanamide, aminoisonitrile, or systematically as isocyanoamine, is the parent compound of a class of derivatives of general formula R2N–NC. It has the condensed formula H2N–N+≡C−, making it an isomer of diazomethane. It is prepared by protonating an ethereal solution of lithiodiazomethane, LiCHN2, with aqueous NaH2PO4 or NH4Cl. The parent compound is only marginally stable at room temperature and is isolated by removal of solvent at –50 °C. Derivatives are generally prepared by dehydration of the corresponding substituted formylhydrazine with COCl2 and Et3N. Earlier, the compound was misidentified as the isomeric nitrilimine, HN−–N+≡CH. However, this structure was disproven by 1H NMR studies, which show a compound with a single signal at δ 6.40 ppm in (CD3CD2)2O instead of the two signals expected for nitrilimine. Moreover, an infrared band at 2140 cm−1 was assigned to the isocyano group. Transition metal complexes of isodiazomethane have been prepared. In bulk form, isodiazomethane is a liquid which decomposes when the temperature exceeds 15 °C; if heated to 40 °C, the substance explodes. A solution of isodiazomethane in diethyl ether at –30 °C gradually isomerizes to diazomethane upon exposure to sodium hydroxide for 20 min. Microwave spectroscopy indicates that, unlike diazomethane, isodiazomethane is not completely planar, with the amino nitrogen undergoing inversion. An ab initio study indicated that there is some N–N double bond character in H2N–N≡C, although less so than in the N–C bond of H2N–C≡N. Like other isocyanide derivatives and carbon monoxide, its primary resonance form carries a negative charge and lone pair on carbon, a comparatively rare situation for neutral molecules. A resonance form with zero formal charge on all atoms also has some importance; however, in that form the carbon atom has only a sextet of electrons and is formally a carbene.
Isodiazomethane
[ "Chemistry" ]
489
[ "Functional groups" ]
44,413,693
https://en.wikipedia.org/wiki/Design%20smell
In computer programming, a design smell is a structure in a design that indicates a violation of fundamental design principles, and which can negatively impact the project's quality. The origin of the term can be traced to the term "code smell", which was featured in the book Refactoring: Improving the Design of Existing Code by Martin Fowler. Details Different authors have defined the word "smell" in different ways: N. Moha et al.: "Code and design smells are poor solutions to recurring implementation and design problems." R. C. Martin: "Design smells are the odors of rotting software." Fowler: "Smells are certain structures in the code that suggest (sometimes they scream for) the possibility of refactoring." Design smells indicate accumulated design debt (one of the prominent dimensions of technical debt). Bugs or unimplemented features are not counted as design smells. Design smells arise from poor design decisions that make the design fragile and difficult to maintain. It is good practice to identify design smells in a software system and apply appropriate refactoring to eliminate them, so as to avoid the accumulation of technical debt. The context (characterized by various factors such as the problem at hand, the design ecosystem, and the platform) plays an important role in deciding whether a certain structure or decision should be considered a design smell. In some cases it may be acceptable to live with design smells due to constraints imposed by the context; nevertheless, design smells should be tracked and managed as technical debt, because they degrade the overall system quality over time. Common design smells Missing abstraction when clumps of data or encoded strings are used instead of creating an abstraction. Also known as "primitive obsession" and "data clumps". Multifaceted abstraction when an abstraction has multiple responsibilities assigned to it. Also known as "conceptualization abuse". Duplicate abstraction when two or more abstractions have identical names or implementation or both. Also known as "alternative classes with different interfaces" and "duplicate design artifacts". Deficient encapsulation when the declared accessibility of one or more members of an abstraction is more permissive than actually required. Unexploited encapsulation when client code uses explicit type checks (using chained if-else or switch statements that check for the type of the object) instead of exploiting the variation in types already encapsulated within a hierarchy; see the sketch after this list. Broken modularization when data and/or methods that ideally should have been localized into a single abstraction are separated and spread across multiple abstractions. Insufficient modularization when an abstraction exists that has not been completely decomposed, and a further decomposition could reduce its size, implementation complexity, or both. Cyclically dependent modularization when two or more abstractions depend on each other directly or indirectly, creating a tight coupling (a circular dependency) between the abstractions. Also known as "cyclic dependencies". Cyclic hierarchy when a supertype in a hierarchy depends on any of its subtypes. Also known as "inheritance/reference cycles". Unfactored hierarchy when there is unnecessary duplication among types in a hierarchy. Broken hierarchy when a supertype and its subtype conceptually do not share an "IS-A" relationship, resulting in broken substitutability. Also known as "inappropriate use of inheritance" and "misapplying IS A".
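As an illustration of the "unexploited encapsulation" smell referenced in the list above, the following Python sketch contrasts client-side type checks with behavior encapsulated in a hierarchy. The classes are hypothetical examples, not drawn from any cited source.

import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

class Square:
    def __init__(self, side):
        self.side = side

def total_area_smelly(shapes):
    # The smell: the client re-derives behavior with explicit type checks.
    total = 0.0
    for shape in shapes:
        if isinstance(shape, Circle):
            total += math.pi * shape.radius ** 2
        elif isinstance(shape, Square):
            total += shape.side ** 2
        else:
            raise TypeError("unknown shape")
    return total

# Refactored: the hierarchy encapsulates the variation; clients call area().
class Shape:
    def area(self):
        raise NotImplementedError

class RoundShape(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class SquareShape(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    return sum(shape.area() for shape in shapes)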
Design smell
[ "Engineering" ]
698
[ "Software engineering", "Software engineering folklore" ]
44,413,707
https://en.wikipedia.org/wiki/Groundwater%20pollution
Groundwater pollution (also called groundwater contamination) occurs when pollutants are released to the ground and make their way into groundwater. This type of water pollution can also occur naturally due to the presence of a minor and unwanted constituent, contaminant, or impurity in the groundwater, in which case it is more likely referred to as contamination rather than pollution. Groundwater pollution can occur from on-site sanitation systems, landfill leachate, effluent from wastewater treatment plants, leaking sewers, petrol filling stations, hydraulic fracturing (fracking) or from over-application of fertilizers in agriculture. Pollution (or contamination) can also arise from naturally occurring contaminants, such as arsenic or fluoride. Using polluted groundwater causes hazards to public health through poisoning or the spread of disease (water-borne diseases). The pollutant often produces a contaminant plume within an aquifer. Movement of water and dispersion within the aquifer spreads the pollutant over a wider area. Its advancing boundary, often called a plume edge, can intersect with groundwater wells and surface water, such as seeps and springs, making the water supplies unsafe for humans and wildlife. The movement of the plume, called a plume front, may be analyzed through a hydrological transport model or groundwater model. Analysis of groundwater pollution may focus on soil characteristics and site geology, hydrogeology, hydrology, and the nature of the contaminants. Several mechanisms influence the transport of pollutants in groundwater, e.g. diffusion, adsorption, precipitation, and decay. The interaction of groundwater contamination with surface waters is analyzed by use of hydrology transport models. Interactions between groundwater and surface water are complex. For example, many rivers and lakes are fed by groundwater. This means that damage to groundwater aquifers, e.g. by fracking or over-abstraction, could therefore affect the rivers and lakes that rely on them. Saltwater intrusion into coastal aquifers is an example of such interactions. Prevention methods include: applying the precautionary principle, groundwater quality monitoring, land zoning for groundwater protection, locating on-site sanitation systems correctly and applying legislation. When pollution has occurred, management approaches include point-of-use water treatment, groundwater remediation, or, as a last resort, abandonment. Pollutant types Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Principally, many of the same pollutants that play a role in surface water pollution may also be found in polluted groundwater, although their respective importance may differ. Arsenic and fluoride Arsenic and fluoride have been recognized by the World Health Organization (WHO) as the most serious inorganic contaminants in drinking-water on a worldwide basis. Inorganic arsenic is the most common type of arsenic in soil and water. The metalloid arsenic can occur naturally in groundwater, as seen most frequently in Asia, including in China, India and Bangladesh. In the Ganges Plain of northern India and Bangladesh, severe contamination of groundwater by naturally occurring arsenic affects 25% of water wells in the shallower of two regional aquifers. Groundwater in these areas is also contaminated by the use of arsenic-based pesticides.
Arsenic in groundwater can also be present where there are mining operations or mine waste dumps that will leach arsenic. Natural fluoride in groundwater is of growing concern as deeper groundwater is being used, "with more than 200 million people at risk of drinking water with elevated concentrations." Fluoride can especially be released from acidic volcanic rocks and dispersed volcanic ash when water hardness is low. High levels of fluoride in groundwater are a serious problem in the Argentinean Pampas, Chile, Mexico, India, Pakistan, the East African Rift, and some volcanic islands (Tenerife). In areas that have naturally occurring high levels of fluoride in groundwater which is used for drinking water, both dental and skeletal fluorosis can be prevalent and severe. Pathogens The lack of proper sanitation measures, as well as improperly placed wells, can lead to drinking water contaminated with pathogens carried in feces and urine. Such fecal-oral transmitted diseases include typhoid, cholera and diarrhea. Of the four pathogen types that are present in feces (bacteria, viruses, protozoa, and helminths or helminth eggs), the first three can be commonly found in polluted groundwater, whereas the relatively large helminth eggs are usually filtered out by the soil matrix. Deep, confined aquifers are usually considered the safest source of drinking water with respect to pathogens. Pathogens from treated or untreated wastewater can contaminate certain, especially shallow, aquifers. Nitrate Nitrate is the most common chemical contaminant in the world's groundwater and aquifers. In some low-income countries, nitrate levels in groundwater are extremely high, causing significant health problems. It is also stable (it does not degrade) under high-oxygen conditions. Elevated nitrate levels in groundwater can be caused by on-site sanitation, sewage sludge disposal and agricultural activities. It can therefore have an urban or agricultural origin. Nitrate levels above 10 mg/L (10 ppm) in groundwater can cause "blue baby syndrome" (acquired methemoglobinemia). Drinking water quality standards in the European Union stipulate less than 50 mg/L for nitrate in drinking water. The linkages between nitrates in drinking water and blue baby syndrome have been disputed in other studies; the syndrome outbreaks might be due to factors other than elevated nitrate concentrations in drinking water. Organic compounds Volatile organic compounds (VOCs) are a dangerous contaminant of groundwater. They are generally introduced to the environment through careless industrial practices. Many of these compounds were not known to be harmful until the late 1960s, and it was some time before regular testing of groundwater identified these substances in drinking water sources. Primary VOC pollutants found in groundwater include aromatic hydrocarbons such as the BTEX compounds (benzene, toluene, ethylbenzene and xylenes), and chlorinated solvents including tetrachloroethylene (PCE), trichloroethylene (TCE), and vinyl chloride (VC). BTEX are important components of gasoline. PCE and TCE are industrial solvents used in dry cleaning processes and as a metal degreaser, respectively. Other organic pollutants present in groundwater and derived from industrial operations are the polycyclic aromatic hydrocarbons (PAHs). Owing to its low molecular weight, naphthalene is the most soluble and mobile PAH found in groundwater, whereas benzo(a)pyrene is the most toxic one.
PAHs are generally produced as byproducts of incomplete combustion of organic matter. Organic pollutants can also be found in groundwater as insecticides and herbicides. Like many other synthetic organic compounds, most pesticides have very complex molecular structures. This complexity determines the water solubility, adsorption capacity, and mobility of pesticides in the groundwater system. Thus, some types of pesticides are more mobile than others, so they can more easily reach a drinking-water source. Metals Several trace metals occur naturally in certain rock formations and can enter the environment from natural processes such as weathering. However, industrial activities such as mining, metallurgy, solid waste disposal, paint and enamel works, etc. can lead to elevated concentrations of toxic metals including lead, cadmium and chromium. These contaminants have the potential to make their way into groundwater. The migration of metals (and metalloids) in groundwater will be affected by several factors, in particular by chemical reactions which determine the partitioning of contaminants among different phases and species. Thus, the mobility of metals primarily depends on the pH and redox state of groundwater. Pharmaceuticals Trace amounts of pharmaceuticals from treated wastewater infiltrating into the aquifer are among the emerging groundwater contaminants being studied throughout the United States. Popular pharmaceuticals such as antibiotics, anti-inflammatories, antidepressants, decongestants, tranquilizers, etc. are normally found in treated wastewater. This wastewater is discharged from the treatment facility, and often makes its way into the aquifer or source of surface water used for drinking water. Trace amounts of pharmaceuticals in both groundwater and surface water are far below what is considered dangerous or of concern in most areas, but this could become an increasing problem as population grows and more reclaimed wastewater is utilized for municipal water supplies. Others Other organic pollutants include a range of organohalides and other chemical compounds, petroleum hydrocarbons, various chemical compounds found in personal hygiene and cosmetic products, and drug pollution involving pharmaceutical drugs and their metabolites. Inorganic pollutants might include other nutrients such as ammonia and phosphate, and radionuclides such as uranium (U) or radon (Rn) naturally present in some geological formations. Saltwater intrusion is also an example of natural contamination, but it is very often intensified by human activities. Groundwater pollution is a worldwide issue. A study of the groundwater quality of the principal aquifers of the United States conducted between 1991 and 2004 showed that 23% of domestic wells had contaminants at levels greater than human-health benchmarks. Another study suggested that the major groundwater pollution problems in Africa, in order of importance, are: (1) nitrate pollution, (2) pathogenic agents, (3) organic pollution, (4) salinization, and (5) acid mine drainage. Causes Causes of groundwater pollution include (further details below): Naturally-occurring (geogenic) On-site sanitation systems Sewage and sewage sludge Fertilizers and pesticides Commercial and industrial leaks Hydraulic fracturing Landfill leachate Other Naturally-occurring (geogenic) "Geogenic" refers to naturally occurring as a result of geological processes.
The natural arsenic pollution occurs because aquifer sediments contain organic matter that generates anaerobic conditions in the aquifer. These conditions result in the microbial dissolution of iron oxides in the sediment and, thus, the release of the arsenic, normally strongly bound to iron oxides, into the water. As a consequence, arsenic-rich groundwater is often iron-rich, although secondary processes often obscure the association of dissolved arsenic and dissolved iron. Arsenic is found in groundwater most commonly as the reduced species arsenite and the oxidized species arsenate, the acute toxicity of arsenite being somewhat greater than that of arsenate. Investigations by the WHO indicated that 20% of 25,000 boreholes tested in Bangladesh had arsenic concentrations exceeding 50 μg/L. The occurrence of fluoride is closely related to the abundance and solubility of fluoride-containing minerals such as fluorite (CaF2). Considerably high concentrations of fluoride in groundwater are typically caused by a lack of calcium in the aquifer. Health problems associated with dental fluorosis may occur when fluoride concentrations in groundwater exceed 1.5 mg/L, which has been the WHO guideline value since 1984. The Swiss Federal Institute of Aquatic Science and Technology (EAWAG) has recently developed the interactive Groundwater Assessment Platform (GAP), with which the geogenic risk of contamination in a given area can be estimated using geological, topographical and other environmental data without having to test samples from every single groundwater resource. This tool also allows the user to produce probability risk maps for both arsenic and fluoride. High concentrations of parameters like salinity, iron, manganese, uranium, radon and chromium in groundwater may also be of geogenic origin. These contaminants can be important locally, but they are not as widespread as arsenic and fluoride. On-site sanitation systems Groundwater pollution with pathogens and nitrate can also occur from the liquids infiltrating into the ground from on-site sanitation systems such as pit latrines and septic tanks, depending on the population density and the hydrogeological conditions. Factors controlling the fate and transport of pathogens are quite complex and the interaction among them is not well understood. If the local hydrogeological conditions (which can vary within a space of a few square kilometers) are ignored, simple on-site sanitation infrastructures such as pit latrines can cause significant public health risks via contaminated groundwater. Liquids leach from the pit and pass through the unsaturated soil zone (which is not completely filled with water). Subsequently, these liquids from the pit enter the groundwater, where they may lead to groundwater pollution. This is a problem if a nearby water well is used to supply groundwater for drinking water purposes. During the passage through the soil, pathogens can die off or be adsorbed significantly, mostly depending on the travel time between the pit and the well. Most, but not all, pathogens die within 50 days of travel through the subsurface. The degree of pathogen removal strongly varies with soil type, aquifer type, distance and other environmental factors. For example, the unsaturated zone becomes "washed out" during extended periods of heavy rain, providing a hydraulic pathway for the quick passage of pathogens. It is difficult to estimate the safe distance between a pit latrine or a septic tank and a water source.
In any case, such recommendations about the safe distance are mostly ignored by those building pit latrines. In addition, household plots are of a limited size and therefore pit latrines are often built much closer to groundwater wells than what can be regarded as safe. This results in groundwater pollution and household members falling sick when using this groundwater as a source of drinking water. Sewage and sewage sludge Groundwater pollution can be caused by untreated waste discharge, leading to diseases like skin lesions, bloody diarrhea and dermatitis. This is more common in locations having limited wastewater treatment infrastructure, or where there are systematic failures of the on-site sewage disposal system. Along with pathogens and nutrients, untreated sewage can also carry an important load of heavy metals that may seep into the groundwater system. The treated effluent from sewage treatment plants may also reach the aquifer if the effluent is infiltrated or discharged to local surface water bodies. Therefore, those substances that are not removed in conventional sewage treatment plants may reach the groundwater as well. For example, detected concentrations of pharmaceutical residues in groundwater were on the order of 50 ng/L in several locations in Germany. This is because, in conventional sewage treatment plants, micropollutants such as hormones, pharmaceutical residues and other micropollutants contained in urine and feces are only partially removed, and the remainder is discharged into surface water, from where it may also reach the groundwater. Groundwater pollution can also occur from leaking sewers, as has been observed for example in Germany. This can also lead to potential cross-contamination of drinking-water supplies. Spreading wastewater or sewage sludge in agriculture may also be a source of fecal contamination in groundwater. Fertilizers and pesticides Nitrate can also enter the groundwater via excessive use of fertilizers, including manure spreading. This is because only a fraction of the nitrogen-based fertilizers is converted to produce and other plant matter; the remainder accumulates in the soil or is lost as run-off. High application rates of nitrogen-containing fertilizers combined with the high water-solubility of nitrate lead to increased runoff into surface water as well as leaching into groundwater, thereby causing groundwater pollution. The excessive use of nitrogen-containing fertilizers (be they synthetic or natural) is particularly damaging, as much of the nitrogen that is not taken up by plants is transformed into nitrate, which is easily leached. The nutrients, especially nitrates, in fertilizers can cause problems for natural habitats and for human health if they are washed off soil into watercourses or leached through soil into groundwater. The heavy use of nitrogenous fertilizers in cropping systems is the largest contributor to anthropogenic nitrogen in groundwater worldwide. Feedlots and animal corrals can also lead to the leaching of nitrogen and metals to groundwater. Over-application of animal manure may also result in groundwater pollution with pharmaceutical residues derived from veterinary drugs. The US Environmental Protection Agency (EPA) and the European Commission treat the nitrate problem related to agricultural development as a major water supply problem that requires appropriate management and governance.
Runoff of pesticides may leach into groundwater, causing human health problems from contaminated water wells. Pesticide concentrations found in groundwater are typically low, but the human health-based regulatory limits they are compared against are often also very low. The organophosphorus insecticide monocrotophos (MCP) appears to be one of the few hazardous, persistent, soluble and mobile (it does not bind with minerals in soils) pesticides able to reach a drinking-water source. In general, more pesticide compounds are being detected as groundwater quality monitoring programs have become more extensive; however, much less monitoring has been conducted in developing countries due to the high analysis costs. Commercial and industrial leaks A wide variety of both inorganic and organic pollutants have been found in aquifers underlying commercial and industrial activities. Ore mining and metal processing facilities are primarily responsible for the presence of metals of anthropogenic origin in groundwater, including arsenic. The low pH associated with acid mine drainage (AMD) contributes to the solubility of potentially toxic metals that can eventually enter the groundwater system. There is increasing concern over groundwater pollution by gasoline leaked from petroleum underground storage tanks (USTs) at gas stations. The BTEX compounds are the most common gasoline-derived contaminants. BTEX compounds, including benzene, have densities lower than that of water (1 g/mL). Similar to oil spills at sea, the non-miscible phase, referred to as Light Non-Aqueous Phase Liquid (LNAPL), will "float" upon the water table in the aquifer. Chlorinated solvents are used in nearly any industrial practice where degreasing removers are required. PCE is a highly utilized solvent in the dry cleaning industry because of its cleaning effectiveness and relatively low cost. It has also been used for metal-degreasing operations. Because it is highly volatile, it is more frequently found in groundwater than in surface water. TCE has historically been used as a metal cleaner. The military facility Anniston Army Depot (ANAD) in the United States was placed on the EPA Superfund National Priorities List (NPL) because of groundwater contamination with as much as 27 million pounds of TCE. Both PCE and TCE may degrade to vinyl chloride (VC), the most toxic chlorinated hydrocarbon. Many types of solvents may also have been disposed of illegally, leaking over time into the groundwater system. Chlorinated solvents such as PCE and TCE have densities higher than water, and the non-miscible phase is referred to as Dense Non-Aqueous Phase Liquid (DNAPL). Once they reach the aquifer, they will "sink" and eventually accumulate on top of low-permeability layers. Historically, wood-treating facilities have also released insecticides such as pentachlorophenol (PCP) and creosote into the environment, impacting groundwater resources. PCP is a highly soluble and toxic obsolete pesticide recently listed in the Stockholm Convention on Persistent Organic Pollutants. PAHs and other semi-VOCs are the common contaminants associated with creosote. Although non-miscible, both LNAPLs and DNAPLs still have the potential to slowly dissolve into the aqueous (miscible) phase to create a plume and thus become a long-term source of contamination. DNAPLs (chlorinated solvents, heavy PAHs, creosote, PCBs) tend to be difficult to manage as they can reside very deep in the groundwater system.
Hydraulic fracturing The recent growth of hydraulic fracturing ("fracking") wells in the United States has raised concerns regarding its potential risks of contaminating groundwater resources. The EPA, along with many other researchers, has been tasked with studying the relationship between hydraulic fracturing and drinking water resources. While it is possible to perform hydraulic fracturing without a relevant impact on groundwater resources if stringent controls and quality management measures are in place, there are a number of cases where groundwater pollution due to improper handling or technical failures was observed. While the EPA has not found significant evidence of a widespread, systematic impact on drinking water by hydraulic fracturing, this may be due to insufficient systematic pre- and post-fracturing data on drinking water quality, and to the presence of other agents of contamination that obscure the link between tight oil and shale gas extraction and its impact. Despite the EPA's lack of widespread evidence, other researchers have made significant observations of rising groundwater contamination in close proximity to major shale oil and gas drilling sites in the Marcellus Shale (northeastern United States). Within one kilometer of these specific sites, a subset of shallow drinking water wells consistently showed higher than normal concentrations of methane, ethane, and propane. An evaluation of elevated helium and other noble gas concentrations, along with the rise in hydrocarbon levels, supports the distinction between hydraulic fracturing fugitive gas and naturally occurring "background" hydrocarbon content. This contamination is speculated to be the result of leaky, failing, or improperly installed gas well casings. Furthermore, it is theorized that contamination could also result from the capillary migration of deep residual hyper-saline water and hydraulic fracturing fluid, slowly flowing through faults and fractures until finally making contact with groundwater resources; however, many researchers argue that the permeability of the rocks overlying shale formations is too low for this ever to happen to a significant extent. To ultimately prove this theory, there would have to be traces of toxic trihalomethanes (THMs), since they are often associated with the presence of stray gas contamination and typically co-occur with high halogen concentrations in hyper-saline waters. Moreover, highly saline waters are a common natural feature of deep groundwater systems. While conclusions regarding groundwater pollution resulting from hydraulic fracturing fluid flow are restricted in both space and time, researchers have hypothesized that the potential for systematic stray gas contamination depends mainly on the integrity of the shale oil/gas well structure, along with its geological location relative to local fracture systems that could provide flow paths for fugitive gas migration. Although claims of widespread, systematic contamination by hydraulic fracturing have been heavily disputed, the source of contamination with the most consensus among researchers as being the most problematic is site-specific accidental spillage of hydraulic fracturing fluid and produced water. So far, a significant majority of groundwater contamination events have derived from surface-level anthropogenic routes rather than from subsurface flow out of the underlying shale formations.
While the damage can be obvious, and much more effort is being made to prevent these accidents from occurring so frequently, the lack of data from fracking-related spills continues to leave researchers in the dark. In many of these events, the data acquired from the leakage or spillage are often very vague, and thus lead researchers to inconclusive findings. Researchers from the Federal Institute for Geosciences and Natural Resources (BGR) conducted a model study for a deep shale-gas formation in the North German Basin. They concluded that the probability is small that the rise of fracking fluids through the geological underground to the surface will impact shallow groundwater. Landfill leachate Leachate from sanitary landfills can lead to groundwater pollution. Chemicals can reach groundwater through precipitation and runoff. New landfills are required to be lined with clay or another synthetic material, along with leachate collection systems, to protect surrounding groundwater. However, older landfills do not have these measures and are often close to surface waters and in permeable soils. Closed landfills can still pose a threat to groundwater if they are not capped by an impermeable material before closure to prevent leaking of contaminants. Love Canal was one of the most widely known examples of groundwater pollution. In 1978, residents of the Love Canal neighborhood in upstate New York noticed high rates of cancer and an alarming number of birth defects. This was eventually traced to organic solvents and dioxins from an industrial landfill that the neighborhood had been built over and around, which had then infiltrated into the water supply and evaporated in basements to further contaminate the air. Eight hundred families were reimbursed for their homes and moved, after extensive legal battles and media coverage. Over-pumping Satellite data from the Mekong Delta in Vietnam have provided evidence that over-pumping of groundwater leads to land subsidence as well as the consequential release of arsenic and possibly other heavy metals. Arsenic is found in clay strata due to their high surface area to volume ratio relative to sand-sized particles. Most pumped groundwater travels through sands and gravels with low arsenic concentrations. However, during over-pumping, a high vertical gradient pulls water from less-permeable clays, thus promoting arsenic release into the water. Other Groundwater pollution can be caused by chemical spills from commercial or industrial operations, chemical spills occurring during transport (e.g. spillage of diesel fuels), illegal waste dumping, infiltration from urban runoff or mining operations, road salts, de-icing chemicals from airports, and even atmospheric contaminants, since groundwater is part of the hydrologic cycle. Herbicide use can contribute to groundwater contamination through arsenic infiltration: herbicides contribute to arsenic desorption through mobilization and transportation of the contaminant. Chlorinated herbicides exhibit a lower impact on arsenic desorption than phosphate-type herbicides. This knowledge can help to prevent arsenic contamination, by choosing herbicides appropriate for the arsenic concentrations present in particular soils. The burial of corpses and their subsequent degradation may also pose a risk of pollution to groundwater. Mechanisms The passage of water through the subsurface can provide a reliable natural barrier to contamination, but it only works under favorable conditions.
The stratigraphy of the area plays an important role in the transport of pollutants. An area can have layers of sandy soil, fractured bedrock, clay, or hardpan. Groundwater in areas of karst topography on limestone bedrock is especially vulnerable to pollution from the surface. Earthquake faults can also be routes for downward contaminant migration. Water table conditions are of great importance for drinking water supplies, agricultural irrigation, waste disposal (including nuclear waste), wildlife habitat, and other ecological issues. Many chemicals undergo reactive decay or chemical change, especially over long periods of time in groundwater reservoirs. A noteworthy class of such chemicals is the chlorinated hydrocarbons, such as trichloroethylene (used in industrial metal degreasing and electronics manufacturing) and tetrachloroethylene (used in the dry-cleaning industry). Both of these chemicals, which are thought to be carcinogens themselves, undergo partial decomposition reactions leading to new hazardous chemicals (including dichloroethylene and vinyl chloride). Interactions with surface water Although interrelated, surface water and groundwater have often been studied and managed as separate resources. Interactions between groundwater and surface water are complex. Surface water seeps through the soil and becomes groundwater. Conversely, groundwater can also feed surface water sources. For example, many rivers and lakes are fed by groundwater. This means that damage to groundwater aquifers, e.g. by fracking or over-abstraction, could therefore affect the rivers and lakes that rely on them. Saltwater intrusion into coastal aquifers is an example of such interactions. A spill or ongoing release of chemical or radionuclide contaminants into soil (located away from a surface water body) may not create point or non-point source pollution but can contaminate the aquifer below, creating a toxic plume. The movement of the plume may be analyzed through a hydrological transport model or groundwater model. Prevention Precautionary principle The precautionary principle, evolved from Principle 15 of the Rio Declaration on Environment and Development, is important in protecting groundwater resources from pollution. The precautionary principle provides that "where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation." One of the six basic principles of the European Union (EU) water policy is the application of the precautionary principle. Groundwater quality monitoring Groundwater quality monitoring programs have been implemented regularly in many countries around the world. They are important components for understanding the hydrogeological system, and for the development of conceptual models and aquifer vulnerability maps. Groundwater quality must be regularly monitored across the aquifer to determine trends. Effective groundwater monitoring should be driven by a specific objective, for example, a specific contaminant of concern. Contaminant levels can be compared to the World Health Organization (WHO) guidelines for drinking-water quality. It is not rare for contaminant limits to be reduced as more medical experience is gained. Sufficient investment should be given to continue monitoring over the long term. When a problem is found, action should be taken to correct it.
Waterborne outbreaks in the United States decreased with the introduction of more stringent monitoring (and treatment) requirements in the early 1990s. The community can also help monitor groundwater quality. Scientists have developed methods by which hazard maps can be produced for geogenic toxic substances in groundwater. This provides an efficient way of determining which wells should be tested. Land zoning for groundwater protection The development of land-use zoning maps has been implemented by several water authorities at different scales around the world. There are two types of zoning maps: aquifer vulnerability maps and source protection maps. Aquifer vulnerability map This refers to the intrinsic (or natural) vulnerability of a groundwater system to pollution. Intrinsically, some aquifers are more vulnerable to pollution than others. Shallow unconfined aquifers are more at risk of pollution because there are fewer layers to filter out contaminants. The unsaturated zone can play an important role in retarding (and in some cases eliminating) pathogens, and so must be considered when assessing aquifer vulnerability. Biological activity is greatest in the top soil layers, where the attenuation of pathogens is generally most effective. Preparation of vulnerability maps typically involves overlaying several thematic maps of physical factors that have been selected to describe the aquifer vulnerability. The index-based parametric mapping method GOD, developed by Foster and Hirata (1988), uses three generally available or readily estimated parameters: the degree of Groundwater hydraulic confinement, the geological nature of the Overlying strata, and the Depth to groundwater. A further approach developed by the EPA, a rating system named DRASTIC, employs seven hydrogeological factors to develop an index of vulnerability: Depth to water table, net Recharge, Aquifer media, Soil media, Topography (slope), Impact of the vadose zone, and hydraulic Conductivity. There is ongoing debate among hydrogeologists as to whether aquifer vulnerability should be established in a general (intrinsic) way for all contaminants, or specifically for each pollutant. Source protection map This refers to the capture areas around an individual groundwater source, such as a water well or a spring, defined in order to protect it from pollution. Thus, potential sources of degradable pollutants, such as pathogens, can be located at distances for which travel times along the flowpaths are long enough for the pollutant to be eliminated through filtration or adsorption. Analytical methods using equations that define groundwater flow and contaminant transport are the most widely used. The WHPA is a semi-analytical groundwater flow simulation program developed by the US EPA for delineating capture zones in a wellhead protection area. The simplest form of zoning employs fixed-distance methods, where activities are excluded within a uniformly applied specified distance around abstraction points. Locating on-site sanitation systems As the health effects of most toxic chemicals arise after prolonged exposure, the risk to health from chemicals is generally lower than that from pathogens. Thus, the quality of the source protection measures is an important component in controlling whether pathogens may be present in the final drinking-water. On-site sanitation systems can be designed in such a way that groundwater pollution from these sanitation systems is prevented from occurring.
Detailed guidelines have been developed to estimate safe distances to protect groundwater sources from pollution from on-site sanitation. The following criteria have been proposed for safe siting (i.e. deciding on the location) of on-site sanitation systems: Horizontal distance between the drinking water source and the sanitation system Guideline values for horizontal separation distances between on-site sanitation systems and water sources vary widely (e.g. 15 to 100 m horizontal distance between pit latrine and groundwater wells) Vertical distance between drinking water well and sanitation system Aquifer type Groundwater flow direction Impermeable layers Slope and surface drainage Volume of leaking wastewater Superposition, i.e. the need to consider a larger planning area As a very general guideline, it is recommended that the bottom of the pit be at least 2 m above the groundwater level, and a minimum horizontal distance of 30 m between a pit and a water source is normally recommended to limit exposure to microbial contamination (a minimal screening sketch based on these two guideline values appears at the end of this section). However, no general statement should be made regarding the minimum lateral separation distances required to prevent contamination of a well from a pit latrine. For example, even a 50 m lateral separation distance might not be sufficient in a strongly karstified system with a downgradient supply well or spring, while a 10 m lateral separation distance is completely sufficient if there is a well-developed clay cover layer and the annular space of the groundwater well is well sealed. Legislation Institutional and legal issues are critical in determining the success or failure of groundwater protection policies and strategies. In the United States, the Resource Conservation and Recovery Act protects groundwater by regulating the disposal of solid waste and hazardous waste, and the Comprehensive Environmental Response, Compensation, and Liability Act, also known as "Superfund", requires remediation of abandoned hazardous waste sites. Management Options for remediation of contaminated groundwater can be grouped into the following categories: containing the pollutants to prevent them from migrating further removing the pollutants from the aquifer remediating the aquifer by either immobilizing or detoxifying the contaminants while they are still in the aquifer (in-situ) treating the groundwater at its point of use abandoning the use of this aquifer's groundwater and finding an alternative source of water. Point-of-use treatment Portable water purification devices or "point-of-use" (POU) water treatment systems and field water disinfection techniques can be used to remove some forms of groundwater pollution prior to drinking, namely any fecal pollution. Many commercial portable water purification systems or chemical additives are available which can remove pathogens, chlorine, bad taste, odors, and heavy metals like lead and mercury. Techniques include boiling, filtration, activated charcoal adsorption, chemical disinfection, ultraviolet purification, ozone water disinfection, solar water disinfection, solar distillation, and homemade water filters. Arsenic removal filters (ARF) are dedicated technologies typically installed to remove arsenic. Many of these technologies require a capital investment and long-term maintenance. Filters in Bangladesh are usually abandoned by the users due to their high cost and complicated, expensive maintenance.
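As referenced above, the two general siting guideline values (pit bottom at least 2 m above the water table; at least 30 m lateral separation between pit and water source) can be expressed as a simple screening rule. A minimal sketch in Python; the thresholds are only the general guidelines quoted earlier, and, as the text stresses, local hydrogeology (karst, clay cover, well sealing) can make them either insufficient or overly conservative:

# Screen a proposed pit-latrine location against the general guideline
# distances quoted above. This is a coarse first filter, not a siting
# decision: site-specific hydrogeology can override both thresholds.
MIN_VERTICAL_CLEARANCE_M = 2.0   # pit bottom above groundwater level
MIN_LATERAL_SEPARATION_M = 30.0  # pit to nearest drinking-water source

def passes_general_guidelines(vertical_clearance_m, lateral_separation_m):
    return (vertical_clearance_m >= MIN_VERTICAL_CLEARANCE_M and
            lateral_separation_m >= MIN_LATERAL_SEPARATION_M)

print(passes_general_guidelines(2.5, 45.0))  # True: meets both guidelines
print(passes_general_guidelines(2.5, 12.0))  # False: too close laterally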
Groundwater remediation Groundwater pollution is much more difficult to abate than surface pollution because groundwater can move great distances through unseen aquifers. Low-permeability media such as clays partially purify water of bacteria by simple filtration (adsorption and absorption), dilution, and, in some cases, chemical reactions and biological activity; however, in some cases, the pollutants merely transform to soil contaminants. Groundwater that moves through open fractures and caverns is not filtered and can be transported as easily as surface water. In fact, this can be aggravated by the human tendency to use natural sinkholes as dumps in areas of karst topography. Pollutants and contaminants can be removed from ground water by applying various techniques, thereby making it safe for use. Ground water treatment (or remediation) techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation, bioventing, biosparging, bioslurping, and phytoremediation. Some chemical treatment techniques include ozone and oxygen gas injection, chemical precipitation, membrane separation, ion exchange, carbon adsorption, aqueous chemical oxidation, and surfactant-enhanced recovery. Some chemical techniques may be implemented using nanomaterials. Physical treatment techniques include, but are not limited to, pump and treat, air sparging, and dual phase extraction. Abandonment If treatment or remediation of the polluted groundwater is deemed to be too difficult or expensive, then abandoning the use of this aquifer's groundwater and finding an alternative source of water is the only other option. Examples Africa Lusaka, Zambia The peri-urban areas of Lusaka, the capital of Zambia, have ground conditions which are strongly karstified, and for this reason – together with the increasing population density in these peri-urban areas – pollution of water wells from pit latrines is a major public health threat there. Babati town, Tanzania In Tanzania, many residents rely on groundwater sources, mainly from shallow on-site wells, for drinking and other domestic purposes. The cost of the official water supply has resulted in many households relying on private wells rather than Babati's urban water and sanitation facilities. The consumption of water from temporary water sources of unknown quality (mainly shallow wells) has resulted in large numbers of people suffering from water-borne diseases. In Tanzania, 23,900 children under the age of 5 are reported to die each year from dysentery and diarrhoea associated with drinking unsafe water. Asia India The Ganga River Basin (GRB), which is a sacred body of water for Hindus, is facing severe arsenic contamination. India covers 79% of the GRB, and thus numerous states have been affected. Affected states include Uttarakhand, Uttar Pradesh, Delhi, Madhya Pradesh, Bihar, Jharkhand, Rajasthan, Chhattisgarh, Punjab, Haryana, and West Bengal. The arsenic levels are up to 4730 μg/L in the groundwater, ~1000 μg/L in irrigation water, and up to 3947 μg/kg in food materials, all of which exceed the United Nations Food and Agriculture Organization's standard for irrigation water and the World Health Organization's standards for drinking water. As a result, exposed individuals suffer from conditions that affect their dermal, neurological, reproductive and cognitive functioning, and exposure can even result in cancer.
In India, the government has promoted sanitation development in order to combat the rise in groundwater contamination in several regions of the country. The effort has shown results, decreasing groundwater pollution and the chance of sickness for mothers and children, who were most affected by this issue. This was greatly needed, as according to one study over 117,000 children under five die every year due to consuming polluted water. The country's effort has seen success in the more economically developed sections of the country. North America Hinkley, U.S. The town of Hinkley, California (U.S.), had its groundwater contaminated with hexavalent chromium starting in 1952, resulting in a legal case against Pacific Gas & Electric (PG&E) and a multimillion-dollar settlement in 1996. The legal case was dramatized in the film Erin Brockovich, released in 2000. San Joaquin, U.S. Intensive pumping in San Joaquin County, California, has resulted in arsenic pollution. San Joaquin County has experienced intensive pumping, which has caused the ground below it to sink and in turn damaged infrastructure. This intensive pumping of groundwater has allowed arsenic to move into groundwater aquifers which supply drinking water to at least a million residents and irrigation water for crops in some of the richest farmland in the US. Aquifers are made up of sand and gravel separated by thin layers of clay, which act as sponges that hold onto water and arsenic. When water is pumped intensively, the aquifer compresses and the ground sinks, which leads to the clay releasing arsenic. Studies show that aquifers contaminated as a result of overpumping can recover if withdrawals stop. Norco, California The town of Norco, California was affected by trichloroethylene and hydrazine groundwater contamination as a result of improper and negligent hazardous material handling and disposal practices at the Wyle Laboratories facility, which was located next to Norco High School. Trichloroethylene levels were detected as high as 128 times the state's safe limit for drinking water, and hydrazine was found in two nearby wells. The site is no longer in operation and is an active hazmat cleanup site. Walkerton, Canada In the year 2000, groundwater pollution occurred in the small town of Walkerton, Canada, leading to seven deaths in what is known as the Walkerton E. coli outbreak. The water supply, which was drawn from groundwater, became contaminated with the highly dangerous O157:H7 strain of E. coli bacteria. This contamination was due to farm runoff into an adjacent water well that was vulnerable to groundwater pollution. References External links United States Geological Survey - Office of Groundwater UK Groundwater Forum IGRAC, International Groundwater Resources Assessment Centre IAH, International Association of Hydrogeologists Groundwater pollution and sanitation (documents in library of the Sustainable Sanitation Alliance) UPGro – Unlocking the Potential of Groundwater for the Poor Liquid water Aquifers Hydrology Hydraulic engineering Sanitation Water and the environment Water Lithosphere
Groundwater pollution
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
8,695
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Aquifers", "Environmental engineering", "Water", "Hydraulic engineering" ]
44,414,514
https://en.wikipedia.org/wiki/Newton-X
Newton-X is a general program for molecular dynamics simulations beyond the Born-Oppenheimer approximation. It has been primarily used for simulations of ultrafast processes (femtosecond to picosecond time scale) in photoexcited molecules. It has also been used for simulation of band envelopes of absorption and emission spectra. Newton-X uses the trajectory surface hopping method, a semi-classical approximation in which the nuclei are treated classically by Newtonian dynamics, while the electrons are treated as a quantum subsystem via a local approximation of the time-dependent Schrödinger equation. Nonadiabatic effects (the spread of the nuclear wave packet between several states) are recovered by a stochastic algorithm, which allows individual trajectories to change between different potential energy states during the dynamics. Capabilities Newton-X is designed as a platform to perform all steps of the nonadiabatic dynamics simulations, from the initial conditions generation, through trajectories computation, to the statistical analysis of the results. It works interfaced with a number of electronic structure programs available for computational chemistry, including Gaussian, Turbomole, Gamess, and Columbus. Its modular design allows new interfaces to be created and new methods to be integrated. Users' new developments are encouraged and are in due course incorporated into the main branch of the program. Nonadiabatic couplings, the central quantity in nonadiabatic simulations, can be either provided by a third-party program or computed by Newton-X. When computed by Newton-X, this is done with a numerical approximation based on the overlap of electronic wavefunctions obtained in sequential time steps. A local diabatization method is also available to provide couplings in the case of weak nonadiabatic interactions. Hybrid combinations of methods are possible in Newton-X. Forces computed with different methods for different atomic subsets can be linearly combined to generate the final force driving the dynamics. These hybrid forces may, for instance, be combined into the popular electrostatic-embedding quantum-mechanical/molecular-mechanical method (QM/MM). Important options for QM/MM simulations, such as link atoms, boundaries, and thermostats, are available as well. As part of the initial conditions module, Newton-X can simulate absorption, emission, and photoelectron spectra, using the Nuclear Ensemble approach, which provides full spectral widths and absolute intensities. Methods and Interfaces to Third-Party Programs Newton-X can simulate surface-hopping dynamics in combination with a number of third-party programs and quantum-chemical methods, including Columbus, Turbomole, Gaussian, and Gamess. Nonadiabatic couplings The surface hopping probability depends on the values of the nonadiabatic couplings between electronic states. Newton-X can either compute nonadiabatic couplings during the dynamics or read them from an interfaced third-party program. The computation of the couplings in Newton-X is done by finite differences, following the Hammes-Schiffer-Tully approach. In this approach, the key quantity for computation of the surface hopping probability, the inner product between the nonadiabatic coupling vector (\tau_{LM}) and the nuclear velocities (\mathbf{v}) at time t, is approximated by

\tau_{LM} \cdot \mathbf{v} \approx \frac{1}{2\Delta t}\left[\langle \psi_L(t) | \psi_M(t+\Delta t) \rangle - \langle \psi_L(t+\Delta t) | \psi_M(t) \rangle\right],

where the bracketed terms are wavefunction overlaps between states L and M at different time steps. This method can be generally used for any electronic-structure method, provided that a configuration interaction representation of the electronic wavefunction can be worked out.
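A minimal numerical sketch of this finite-difference scheme in Python (not Newton-X source code; it assumes the overlap matrix between wavefunctions at consecutive time steps, S[L, M] ≈ ⟨ψL(t)|ψM(t+Δt)⟩, has already been produced by the electronic-structure interface, and that the wavefunctions are real):

import numpy as np

def tau_dot_v(S, dt):
    """Hammes-Schiffer-Tully estimate of the couplings tau_LM . v.

    For real wavefunctions, <psi_L(t+dt)|psi_M(t)> equals S[M, L], so
    the antisymmetric combination (S - S.T) / (2 dt) approximates the
    time-derivative coupling at the midpoint of the time step.
    """
    S = np.asarray(S, dtype=float)
    return (S - S.T) / (2.0 * dt)

# Illustrative two-state overlap matrix (hypothetical numbers):
S = np.array([[0.999, 0.020],
              [-0.020, 0.998]])
print(tau_dot_v(S, dt=0.5))  # dt in the time units of the dynamics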
In Newton-X, this overlap-based approach is used with a number of quantum-chemical methods, including MCSCF (Multiconfigurational Self-Consistent Field), MRCI (Multi-Reference Configuration Interaction), CC2 (Coupled Cluster to Approximated Second Order), ADC(2) (Algebraic Diagrammatic Construction to Second Order), TDDFT (Time-Dependent Density Functional Theory), and TDA (Tamm–Dancoff Approximation). In the case of MCSCF and MRCI, the configuration interaction coefficients are directly used for the computation of couplings. For the other methods, the linear-response amplitudes are used as the coefficients of a configuration interaction wavefunction with single excitations. Spectrum Simulations Newton-X simulates absorption and emission spectra using the Nuclear Ensemble approach. In this approach, an ensemble of nuclear geometries is built in the initial state, and the transition energies and transition moments to the other states are computed for each geometry in the ensemble. A convolution of the results provides spectral widths and absolute intensities. In the Nuclear Ensemble approach, the photoabsorption cross section for a molecule initially in the ground state and being excited with photon energy E into Nfs final electronic states is given by

\sigma(E) = \frac{\pi e^2 \hbar}{2 m c \varepsilon_0 n_r E} \sum_{n=1}^{N_{fs}} \frac{1}{N_p} \sum_{p=1}^{N_p} \Delta E_{0,n}(\mathbf{R}_p)\, f_{0,n}(\mathbf{R}_p)\, g\!\left(E - \Delta E_{0,n}(\mathbf{R}_p), \delta\right),

where e is the elementary charge, ħ is the reduced Planck constant, m is the electron mass, c is the speed of light, ε0 is the vacuum permittivity, and nr is the refractive index of the medium. The first summation runs over all target states and the second summation runs over all Np points in the nuclear ensemble. Each point in the ensemble has nuclear geometry Rp, transition energy ΔE0,n, and oscillator strength f0,n (for a transition from the ground state into state n). g is a normalized Gaussian function with width δ,

g\!\left(E - \Delta E_{0,n}, \delta\right) = \frac{1}{\sqrt{2\pi}\,\delta} \exp\!\left(-\frac{\left(E - \Delta E_{0,n}\right)^2}{2\delta^2}\right).

For emission, the differential emission rate is given by the analogous expression

\Gamma(E) = \frac{e^2}{2\pi \hbar^2 m c^3 \varepsilon_0} \sum_{n} \frac{1}{N_p} \sum_{p=1}^{N_p} \Delta E_{n,0}^2(\mathbf{R}_p)\, f_{n,0}(\mathbf{R}_p)\, g\!\left(E - \Delta E_{n,0}(\mathbf{R}_p), \delta\right).

In both absorption and emission, the nuclear ensemble can be sampled either from a dynamics simulation or from a Wigner distribution. Starting from version 2.0, it is possible to use the nuclear ensemble approach to simulate steady and time-resolved photoelectron spectra. Development and credits The development of Newton-X started in 2005 at the Institute for Theoretical Chemistry of the University of Vienna. It was designed by Mario Barbatti in collaboration with Hans Lischka. The original code used and expanded routines written by Giovanni Granucci and Maurizio Persico from the University of Pisa. A module for the computation of nonadiabatic couplings based on finite differences of either MCSCF or MRCI wavefunctions was implemented by Jiri Pittner (J. Heyrovsky Institute) and later adapted to work with TDDFT. A module for QM/MM dynamics was developed by Matthias Ruckenbauer. Felix Plasser implemented the local diabatization method and dynamics based on CC2 and ADC(2). Rachel Crespo-Otero extended the TDDFT and TDA capabilities. An interface to Gamess was added by Aaron West and Theresa Windus (Iowa State University). Mario Barbatti coordinates new program developments, their integration into the official version, and the Newton-X distribution. Distribution and training Newton-X is distributed free of charge for academic use, with open source code. The original paper describing the program had been cited 190 times by December 22, 2014, according to Google Scholar. Newton-X has comprehensive documentation and a public discussion forum. A tutorial is also available online, showing how to use the main features of the program step by step.
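Complementing the written tutorial, the nuclear ensemble expressions above are straightforward to prototype. A minimal sketch in Python (not Newton-X code; it assumes the vertical excitation energies and oscillator strengths have already been computed for every ensemble geometry, works in SI units, and evaluates the absorption cross section exactly as written above):

import numpy as np
from scipy.constants import c, e, epsilon_0, hbar, m_e

def nuclear_ensemble_sigma(E, dE, f, delta, n_r=1.0):
    """Photoabsorption cross section sigma(E) from a nuclear ensemble.

    E     : photon energies at which to evaluate the spectrum (J), 1-D array
    dE, f : excitation energies (J) and oscillator strengths,
            arrays of shape (N_p, N_fs) over ensemble points and states
    delta : Gaussian broadening width (J)
    """
    E = np.atleast_1d(np.asarray(E, dtype=float))
    dE, f = np.atleast_2d(dE), np.atleast_2d(f)
    n_p = dE.shape[0]
    g = np.exp(-(E[:, None, None] - dE) ** 2 / (2.0 * delta ** 2)) \
        / (np.sqrt(2.0 * np.pi) * delta)                  # normalized Gaussian
    prefactor = np.pi * e ** 2 * hbar / (2.0 * m_e * c * epsilon_0 * n_r * E)
    return prefactor * (dE * f * g).sum(axis=(1, 2)) / n_p

# Hypothetical two-point ensemble with one excited state (energies eV -> J):
eV = e
dE = np.array([[4.0], [4.2]]) * eV
f = np.array([[0.10], [0.12]])
E = np.linspace(3.5, 4.7, 5) * eV
print(nuclear_ensemble_sigma(E, dE, f, delta=0.1 * eV))   # cross sections, m^2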
Examples of simulations are shown on a YouTube channel. The program itself is distributed with a collection of input and output files for several worked-out examples. A number of workshops on nonadiabatic simulations using Newton-X have been organized in Vienna (2008), Rio de Janeiro (2009), Sao Carlos (2011), Chiang Mai (2011, 2015), and Jeddah (2014). Program philosophy and architecture A main concept guiding the Newton-X development is that the program should be simple to use while still providing as many options as possible to customize the jobs. This is achieved by a series of input tools that guide the user through the program options, providing context-dependent variable values whenever possible. Newton-X is written as a combination of independent programs. The coordinated execution of these programs is done by drivers written in Perl, while the programs dealing with the integration of the dynamics and other mathematical aspects are written in Fortran 90 and C. Memory is dynamically allocated and there are no formal limits for most variables, such as the number of atoms or states. Newton-X works with three levels of parallelization: the first level is a trivial parallelization given by the independent-trajectories approach used by the program. Complete sets of input files are redundantly written to allow each trajectory to be executed independently. They can be easily merged for final analysis in a later step. In the second level, Newton-X takes advantage of the parallelization of the third-party programs with which it is interfaced. Thus, a Newton-X simulation using the interface with the Gaussian program can first be distributed over a cluster in terms of independent trajectories, and each trajectory then runs a parallelized version of Gaussian. In the third level, the coupling computations in Newton-X are parallelized. Starting with version 1.3 (2013), Newton-X uses meta-codes to control the dynamics simulation behavior. Based on a series of initial instructions provided by the user, new codes are automatically written and executed on the fly. These codes allow, for instance, checking specific conditions to terminate the simulations. Drawbacks To keep a modular architecture for easy inclusion of new algorithms, Newton-X is organized as a series of independent programs connected by general program drivers. For this reason, a large amount of input/output is required during the program's execution, reducing its efficiency. When dynamics is based on ab initio methods, this is normally not a problem, as the time bottleneck is in the electronic structure calculation. Low efficiency due to input/output can, however, be relevant with semiempirical methods. Other problems with the current implementation are the limited parallelization of the code, especially of the couplings computation, and the restriction of the program to Linux systems. References External links Newton-X webpage Discussion forum Computational chemistry software
Newton-X
[ "Chemistry" ]
2,047
[ "Computational chemistry", "Computational chemistry software", "Chemistry software" ]
44,416,015
https://en.wikipedia.org/wiki/Magnetic%20skyrmion
In physics, magnetic skyrmions (occasionally described as 'vortices' or 'vortex-like' configurations) are statically stable solitons which have been predicted theoretically and observed experimentally in condensed matter systems. Magnetic skyrmions can be formed in magnetic materials in their 'bulk', such as in manganese monosilicide (MnSi), or in magnetic thin films. They can be achiral or chiral (fig. 1 a and b are both chiral skyrmions) in nature, and may exist both as dynamic excitations or as stable or metastable states. Although the broad lines defining magnetic skyrmions have been established de facto, there exist a variety of interpretations with subtle differences. Most descriptions include the notion of topology – a categorization of shapes and the way in which an object is laid out in space – using a continuous-field approximation as defined in micromagnetics. Descriptions generally specify a non-zero, integer value of the topological index, n (not to be confused with the chemistry meaning of 'topological index'). This value is sometimes also referred to as the winding number, the topological charge (although it is unrelated to 'charge' in the electrical sense), the topological quantum number (although it is unrelated to quantum mechanics or quantum mechanical phenomena, notwithstanding the quantization of the index values), or more loosely as the "skyrmion number." The topological index of the field can be described mathematically as

n = \frac{1}{4\pi} \int \mathbf{m} \cdot \left( \frac{\partial \mathbf{m}}{\partial x} \times \frac{\partial \mathbf{m}}{\partial y} \right) dx\, dy, \qquad (1)

where n is the topological index, \mathbf{m} is the unit vector in the direction of the local magnetization within the magnetic thin, ultra-thin or bulk film, and the integral is taken over a two-dimensional space. (A generalization to a three-dimensional space is possible.) Passing to spherical coordinates for the space (r, \phi) and for the magnetisation (\theta, \psi), one can understand the meaning of the skyrmion number. In skyrmion configurations the spatial dependence of the magnetisation can be simplified by setting the perpendicular magnetic variable independent of the in-plane angle (\theta = \theta(r)) and the in-plane magnetic variable independent of the radius (\psi = \psi(\phi)). Then the topological skyrmion number reads

n = \frac{1}{4\pi} \left[\cos\theta(r)\right]_{r=0}^{r=\infty} \left[\psi(\phi)\right]_{\phi=0}^{\phi=2\pi} = p\, W,

where p describes the magnetisation direction at the origin (p = 1 (−1) for magnetisation pointing down (up) at the origin) and W is the winding number. Considering the same uniform magnetisation, i.e. the same p value, the winding number allows one to define the skyrmion (W = 1), with a positive winding number, and the antiskyrmion, with a negative winding number and thus a topological charge opposite to that of the skyrmion. What this equation describes physically is a configuration in which the spins in a magnetic film are all aligned normal to the plane of the film, with the exception of those in one specific region, where the spins progressively turn over to an orientation that is perpendicular to the plane of the film but anti-parallel to those in the rest of the plane. Assuming 2D isotropy, the free energy of such a configuration is minimized by relaxation towards a state exhibiting circular symmetry, resulting in the configuration illustrated schematically (for a two-dimensional skyrmion) in figure 1. In one dimension, the distinction between the progression of magnetization in a 'skyrmionic' pair of domain walls, and the progression of magnetization in a topologically trivial pair of magnetic domain walls, is illustrated in figure 2. Considering this one-dimensional case is equivalent to considering a horizontal cut across the diameter of a 2-dimensional hedgehog skyrmion (fig. 1(a))
and looking at the progression of the local spin orientations. It is worth observing that there are two different configurations which satisfy the topological index criterion stated above. The distinction between these can be made clear by considering a horizontal cut across both of the skyrmions illustrated in figure 1, and looking at the progression of the local spin orientations. In the case of fig. 1(a) the progression of magnetization across the diameter is cycloidal. This type of skyrmion is known as a hedgehog skyrmion. In the case of fig. 1(b), the progression of magnetization is helical, giving rise to what is often called a vortex skyrmion. Stability The skyrmion magnetic configuration is predicted to be stable because the atomic spins which are oriented opposite those of the surrounding thin film cannot 'flip around' to align themselves with the rest of the atoms in the film without overcoming an energy barrier. This energy barrier is often ambiguously described as arising from 'topological protection.' (See Topological stability vs. energy stability.) Depending on the magnetic interactions existing in a given system, the skyrmion topology can be a stable, meta-stable, or unstable solution when one minimizes the system's free energy. Theoretical solutions exist for both isolated skyrmions and skyrmion lattices. However, since the stability and behavioral attributes of skyrmions can vary significantly based on the type of interactions in a system, the word 'skyrmion' can refer to substantially different magnetic objects. For this reason, some physicists choose to reserve use of the term 'skyrmion' to describe magnetic objects with a specific set of stability properties, and arising from a specific set of magnetic interactions. Definitions In general, definitions of magnetic skyrmions fall into two categories. Which category one chooses to refer to depends largely on the emphasis one wishes to place on different qualities. A first category is based strictly on topology. This definition may seem appropriate when considering topology-dependent properties of magnetic objects, such as their dynamical behavior. A second category emphasizes the intrinsic energy stability of certain solitonic magnetic objects. In this case, the energy stability is often (but not necessarily) associated with a form of chiral interaction, which might originate from the Dzyaloshinskii-Moriya interaction (DMI), or with spiral magnetism originating from the double-exchange mechanism (DE) or a competing Heisenberg exchange interaction. When expressed mathematically, definitions in the first category state that magnetic spin-textures with a spin-progression satisfying the condition |n| = k, where k is an integer ≥ 1 and n is the topological index of equation (1), can be qualified as magnetic skyrmions. Definitions in the second category similarly stipulate that a magnetic skyrmion exhibits a spin-texture satisfying the condition |n| = k, where k is an integer ≥ 1, but further suggest that there must exist an energy term that stabilizes the spin-structure into a localized magnetic soliton whose energy is invariant by translation of the soliton's position in space. (The spatial energy invariance condition constitutes a way to rule out structures stabilized by locally-acting factors external to the system, such as confinement arising from the geometry of a specific nanostructure.) The first set of definitions for magnetic skyrmions is a superset of the second, in that it places less stringent requirements on the properties of a magnetic spin texture.
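Before contrasting the two categories further, note that the topological index of equation (1) is easy to evaluate numerically once a texture is given. A minimal sketch in Python (a finite-difference estimate on a hypothetical discretized hedgehog texture; as discussed later, such continuum estimates lose their meaning near atomic length scales):

import numpy as np

def skyrmion_number(m):
    """Finite-difference estimate of n = (1/4 pi) int m . (dm/dx x dm/dy).

    m: array of shape (Nx, Ny, 3) holding unit magnetization vectors.
    """
    dm_dx = np.gradient(m, axis=0)
    dm_dy = np.gradient(m, axis=1)
    density = np.einsum("ijk,ijk->ij", m, np.cross(dm_dx, dm_dy))
    return density.sum() / (4.0 * np.pi)

# Hypothetical hedgehog texture: spin up at the core, down outside it.
N, R = 128, 20.0
x = np.arange(N) - N / 2
X, Y = np.meshgrid(x, x, indexing="ij")
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = np.pi * np.clip(r / R, 0.0, 1.0)   # polar angle flips across the core
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print(skyrmion_number(m))  # close to +1 or -1, depending on sign conventions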
The first, purely topological definition finds a raison d'être because topology itself determines some properties of magnetic spin textures, such as their dynamical responses to excitations. The second category of definitions may be preferred to underscore intrinsic stability qualities of some magnetic configurations. These qualities arise from stabilizing interactions which may be described in several mathematical ways, including for example by using higher-order spatial derivative terms, such as 2nd or 4th order terms, to describe a field (the mechanism originally proposed in particle physics by Tony Skyrme for a continuous field model), or 1st order derivative functionals known as Lifshitz invariants—energy contributions linear in first spatial derivatives of the magnetization—as later proposed by Alexei Bogdanov. (An example of such a 1st order functional is the Dzyaloshinskii-Moriya interaction.) In all cases the energy term acts to introduce topologically non-trivial solutions to a system of partial differential equations. In other words, the energy term acts to render possible the existence of a topologically non-trivial magnetic configuration that is confined to a finite, localized region, and possesses an intrinsic stability or meta-stability relative to a trivial homogeneously magnetized ground state — i.e. a magnetic soliton. An example Hamiltonian containing one set of energy terms that allows for the existence of skyrmions of the second category is the following:

H = -\sum_{\langle i,j \rangle} J_{ij}\, \mathbf{m}_i \cdot \mathbf{m}_j - \sum_{\langle i,j \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{m}_i \times \mathbf{m}_j \right) - \mu \sum_i \mathbf{B} \cdot \mathbf{m}_i - K \sum_i \left( m_i^z \right)^2, \qquad (2)

where the first, second, third and fourth sums correspond to the exchange, Dzyaloshinskii-Moriya, Zeeman (responsible for the "usual" torques and forces observed on a magnetic dipole moment in a magnetic field), and magnetic anisotropy (typically magnetocrystalline anisotropy) interaction energies, respectively. Note that equation (2) does not contain a term for the dipolar, or 'demagnetizing', interaction between atoms. As in eq. (2), the dipolar interaction is sometimes omitted in simulations of ultra-thin two-dimensional magnetic films, because it tends to contribute a minor effect in comparison with the others. Braided skyrmion tubes have been observed in FeGe. If a skyrmion tube has finite length with Bloch points at either end, it has been called a toron or a dipole string. A bound state of a skyrmion and a vortex of the XY-model is in fact a type of screw dislocation of helimagnetic order in chiral magnets. Role of the topology Topological stability vs. energetic stability A non-trivial topology does not in itself imply energetic stability. There is in fact no necessary relation between topology and energetic stability. Hence, one must be careful not to confuse 'topological stability,' which is a mathematical concept, with energy stability in real physical systems. Topological stability refers to the idea that in order for a system described by a continuous field to transition from one topological state to another, a rupture must occur in the continuous field, i.e. a discontinuity must be produced. For example, if one wishes to transform a flexible balloon doughnut (torus) into an ordinary spherical balloon, it is necessary to introduce a rupture on some part of the balloon doughnut's surface. Mathematically, the balloon doughnut would be described as 'topologically stable.' However, in physics, the free energy required to introduce a rupture enabling the transition of a system from one 'topological' state to another is always finite.
For example, it is possible to turn a rubber balloon into a flat piece of rubber by poking it with a needle (and popping it!). Thus, while a physical system can be approximately described using the mathematical concept of topology, attributes such as energetic stability are dependent on the system's parameters—the strength of the rubber in the example above—not the topology per se. In order to draw a meaningful parallel between the concept of topological stability and the energy stability of a system, the analogy must necessarily be accompanied by the introduction of a non-zero phenomenological 'field rigidity' to account for the finite energy needed to rupture the field's topology. Modeling and then integrating this field rigidity can be likened to calculating a breakdown energy-density of the field. These considerations suggest that what is often referred to as 'topological protection,' or a 'topological barrier,' should more accurately be referred to as a 'topology-related energy barrier,' though this terminology is somewhat cumbersome. A quantitative evaluation of such a topological barrier can be obtained by extracting the critical magnetic configuration when the topological number changes during the dynamical process of a skyrmion creation event. Applying the topological charge defined on a lattice, the barrier height is theoretically shown to be proportional to the exchange stiffness. Further observations It is important to be cognizant of the fact that magnetic n = 1 structures are in fact not stabilized by virtue of their 'topology,' but rather by the field rigidity parameters that characterize a given system. However, this does not suggest that topology plays an insignificant role with respect to energetic stability. On the contrary, topology may create the possibility for certain stable magnetic states to exist, which otherwise could not. However, topology in itself does not guarantee the stability of a state. In order for a state to have stability associated with its topology, it must be further accompanied by a non-zero field rigidity. Thus, topology can be considered a necessary but insufficient condition for the existence of certain classes of stable objects. While this distinction may at first seem pedantic, its physical motivation becomes apparent when considering two magnetic spin configurations of identical topology n = 1, but subject to the influences of only one differing magnetic interaction. For example, we may consider one spin configuration with, and one configuration without, the presence of magnetocrystalline anisotropy oriented perpendicular to the plane of an ultra-thin magnetic film. In this case, the n = 1 configuration that is influenced by the magnetocrystalline anisotropy will be more energetically stable than the n = 1 configuration without it, in spite of identical topologies. This is because the magnetocrystalline anisotropy contributes to the field rigidity, and it is the field rigidity, not the topology, that confers the notable energy barrier protecting the topological state. Finally, it is interesting to observe that in some cases, it is not the topology which helps n = 1 configurations to be stable, but rather the converse, as it is the stability of the field (which depends on the relevant interactions) which favors the n = 1 topology. This is to say that the most stable energy configuration of the field constituents (in this case magnetic atoms) may in fact be to arrange into a topology which can be described as an n = 1 topology.
Such is the case for magnetic skyrmions stabilized by the Dzyaloshinskii–Moriya interaction, which causes adjacent magnetic spins to 'prefer' having a fixed angle between each other (energetically speaking). Note that from a point of view of practical applications this does not alter the usefulness of developing systems with Dzyaloshinskii–Moriya interaction, as such applications depend strictly on the topology [of the skyrmions, or lack thereof], which encodes the information, and not on the underlying mechanisms which stabilize the necessary topology. These examples illustrate why use of the terms 'topological protection' or 'topological stability' interchangeably with the concept of energy stability is misleading, and is liable to lead to fundamental confusion. Limitations of applying the concept of topology One must exercise caution when making inferences based on topology-related energy barriers, as it can be misleading to apply the notion of topology—a description which only rigorously applies to continuous fields—to infer the energetic stability of structures existing in discontinuous systems. Giving way to this temptation is sometimes problematic in physics, where fields which are approximated as continuous become discontinuous below certain size-scales. Such is the case for example when the concept of topology is associated with the micromagnetic model—which approximates the magnetic texture of a system as a continuous field—and then applied indiscriminately without consideration of the model's physical limitations (i.e. that it ceases to be valid at atomic dimensions). In practice, treating the spin textures of magnetic materials as vectors of a continuous field model becomes inaccurate at size-scales below about 2 nm, due to the discretization of the atomic lattice. Thus, it is not meaningful to speak of magnetic skyrmions below these size-scales. Practical applications Magnetic skyrmions are anticipated to allow for the existence of discrete magnetic states which are significantly more energetically stable (per unit volume) than their single-domain counterparts. For this reason, it is envisioned that magnetic skyrmions may be used as bits to store information in future memory and logic devices, where the state of the bit is encoded by the existence or non-existence of the magnetic skyrmion. The dynamical magnetic skyrmion exhibits a strong breathing mode, which opens an avenue for skyrmion-based microwave applications. Simulations also indicate that the position of magnetic skyrmions within a film/nanotrack may be manipulated using spin currents or spin waves. Thus, magnetic skyrmions also provide promising candidates for future racetrack-type in-memory logic computing technologies. References Quasiparticles Magnetism
Magnetic skyrmion
[ "Physics", "Materials_science" ]
3,360
[ "Quasiparticles", "Subatomic particles", "Condensed matter physics", "Matter" ]
44,416,617
https://en.wikipedia.org/wiki/Disodium%204%2C4%27-dinitrostilbene-2%2C2%27-disulfonate
Disodium 4,4′-dinitrostilbene-2,2′-disulfonate is an organic compound with the formula (O2NC6H3(SO3Na)CH)2. This salt is a common precursor to a variety of textile dyes and optical brighteners. Preparation and reactions The synthesis of disodium 4,4′-dinitrostilbene-2,2′-disulfonate begins with sulfonation of 4-nitrotoluene. This reaction affords 4-nitrotoluene-2-sulfonic acid. Oxidation of this species with sodium hypochlorite yields the disodium salt of 4,4′-dinitrostilbene-2,2′-disulfonic acid. The product is useful as its reaction with aniline derivatives results in the formation of azo dyes. Commercially important dyes derived from this compound include Direct Red 76, Direct Brown 78, and Direct Orange 40. Reduction gives 4,4′-diamino-2,2′-stilbenedisulfonic acid, which is a common optical brightener. History Arthur Green and André Wahl first reported the formation of disodium 4,4'-dinitrostilbene-2,2'-disulfonate using sodium hypochlorite. References Benzenesulfonates Nitrobenzene derivatives Stilbenoids Organic sodium salts
Disodium 4,4'-dinitrostilbene-2,2'-disulfonate
[ "Chemistry" ]
310
[ "Organic sodium salts", "Salts" ]
44,417,235
https://en.wikipedia.org/wiki/Visual%20reinforcement%20audiometry
Visual reinforcement audiometry (VRA) is a key behavioural test for evaluating hearing in young children. First introduced by Liden and Kankkunen in 1969, VRA is a good indicator of how responsive a child is to sound and speech and whether the child is developing awareness of sound as expected. Performed by an audiologist, VRA is the preferred behavioral technique for children who are 6–24 months of age. Following classic operant conditioning, a stimulus is presented; when the child responds with a 90-degree head turn from midline, the child is reinforced with an animation. The child is typically seated in a high chair or on a parent's lap while facing forward. One or two loudspeakers are situated at 45 or 90 degrees from the child. As the auditory stimulus is presented, the child will naturally search for the sound source, resulting in a head turn, and reinforcement follows shortly after through an animated toy or video next to the speaker where the auditory stimulus was presented. Using VRA, an audiologist can obtain minimal hearing thresholds at frequencies ranging from 250 Hz to 8000 Hz using speakers, headphones, insert earphones, or a bone conduction transducer, and plot them on an audiogram. The results from the audiogram, paired with other objective measures such as a tympanogram, otoacoustic emissions testing, and/or auditory brainstem response testing, can provide further insight into the child's hearing status as well as future treatment plans if deemed necessary. VRA works well until about 18–24 months of age; beyond that, children need more interesting tasks to hold their attention, which is when audiologists introduce conditioned play audiometry. Conditioned orientation reflex (COR) is a variant of VRA where more than one sound source is used. The key difference between COR and VRA is that COR depends on the child's ability to detect and localize the sound, whereas VRA only requires the child to produce a head turn after the auditory stimulus is presented; the child does not need to accurately localize the sound. References Acoustics Hearing Ear procedures
Visual reinforcement audiometry
[ "Physics" ]
444
[ "Classical mechanics", "Acoustics" ]
44,417,413
https://en.wikipedia.org/wiki/Ind-scheme
In algebraic geometry, an ind-scheme is a set-valued functor that can be written (represented) as a direct limit (i.e., inductive limit) of closed embeddings of schemes. Examples The infinite-dimensional affine space \mathbb{A}^\infty = \varinjlim \mathbb{A}^n, the direct limit of the finite-dimensional affine spaces along the closed embeddings \mathbb{A}^n \hookrightarrow \mathbb{A}^{n+1}, is an ind-scheme. Perhaps the most famous example of an ind-scheme is an infinite Grassmannian (which is a quotient of the loop group of an algebraic group G). See also formal scheme References A. Beilinson, Vladimir Drinfel'd, Quantization of Hitchin's integrable system and Hecke eigensheaves on Hitchin system, preliminary version V. Drinfeld, Infinite-dimensional vector bundles in algebraic geometry, notes of the talk at the 'Unity of Mathematics' conference. Expanded version http://ncatlab.org/nlab/show/ind-scheme Algebraic geometry
Ind-scheme
[ "Mathematics" ]
183
[ "Fields of abstract algebra", "Algebraic geometry" ]
44,417,556
https://en.wikipedia.org/wiki/Behavioral%20observation%20audiometry
Behavioral observation audiometry (BOA) is a type of audiometry (a test of hearing, i.e., of the ability to recognize pitch, volume, etc.) performed in children less than six months old. References Acoustics Hearing Ear procedures
Behavioral observation audiometry
[ "Physics" ]
47
[ "Classical mechanics", "Acoustics" ]
44,417,637
https://en.wikipedia.org/wiki/Conditioned%20play%20audiometry
Conditioned play audiometry (CPA) is a type of audiometry done in children of developmental age 2 to 5 years. It is the test that directly follows visual reinforcement audiometry, once the child becomes able to focus on a task. It is a type of behavioral hearing test, of which there are many. Conditioned play audiometry uses toys to direct the child's attention to the listening task and turns it into a game. Instead of raising a hand in response to the sound, as an adult would, the child might drop a toy into a bucket every time he or she hears a sound. This keeps the child interested in the listening task for longer. Common games include dropping balls in buckets, placing rings on a stick, and feeding coins into a toy pig, among many others. The first part of CPA involves conditioning the child. The audiologist presents a loud sound that the child can comfortably hear, while encouraging the child to "drop the ball in the bucket every time you hear the sound," or whichever game is being used. After a few trials to get the child comfortable with the task, the audiologist then attempts to drop to low levels in order to find the softest sound the child can hear. It is important to proceed quickly to ensure the child does not lose attention to the task. There are precautions to take to ensure good reliability when performing conditioned play audiometry. It is important that the child react to the sounds themselves and not to the clinician's hand movements. To address this, false taps on the tablet are essential to ensure the child is abiding by the listening task and not responding to visual cues. Should the child react to non-sound-producing (false) taps, re-conditioning may be warranted. Just like typical audiometry, CPA is performed at multiple frequencies, from 250 to 8000 Hz, to get a full range of the child's hearing. This can be performed using typical headphones as well as a bone oscillator, and all thresholds are plotted on an audiogram. Once the child has reached approximately five years old, conventional audiometry using a button or hand-raising can typically be performed. References Acoustics Hearing Ear procedures Audiology
Conditioned play audiometry
[ "Physics" ]
456
[ "Classical mechanics", "Acoustics" ]
44,417,797
https://en.wikipedia.org/wiki/Dirtbox%20%28cell%20phone%29
A dirtbox (or DRT box) is a cell site simulator: a device that mimics a cell phone tower, creating a signal strong enough to cause nearby dormant mobile phones to switch to it. Mounted on aircraft, it has been used by the United States Marshals Service since at least 2007 to locate and collect information from cell phones believed to be connected with criminal activity. It can also be used to jam phones. The device's name comes from the company that developed it, Digital Receiver Technology, Inc. (DRT), owned by the Boeing Company. Boeing describes the device as a hybrid of "jamming, managed access and detection". A similar device with a smaller range, the controversial StingRay phone tracker, has been widely used by U.S. federal entities, including the Federal Bureau of Investigation (FBI). History It is not known when Digital Receiver Technology, Inc. (DRT) first manufactured the dirtbox. As of 2014, the company did not publicly advertise it, stating on its web site: "Due to the sensitive nature of our work, we are unable to publicly advertise many of our products." The Wall Street Journal wrote that the U.S. Marshals Service program utilizing the device had "fully matured by 2007". Boeing bought DRT in 2008. Similar devices from the Harris Corporation, like the StingRay phone tracker, were sold around the same time. Since 2008, Harris's airborne mounting kit for cell phone surveillance has been said to cost $9,000. On June 11, 2010, the Boeing Company asked the National Telecommunications and Information Administration to advise the United States Congress that the "... Communications Act of 1934 be modified to allow prison officials and state and local law enforcement to use appropriate cell phone management", and suggested that special weapons and tactics (SWAT) teams and other paramilitary tactical units could use their devices to control wireless communications during raids. Technology The device is described as in size. To mimic a cell phone tower, it utilizes IMSI-catcher (International Mobile Subscriber Identity) technology, which phone services use to identify individual subscribers. It emits a pilot signal made to appear stronger than that from the nearest cell tower, causing phones within its range to broadcast their IMSI numbers and electronic serial numbers (ESN). Encryption does not prevent this process; the devices can retrieve a phone's encryption session keys in less than one second, with success rates of 50–75% under "real world conditions". An aircraft-mounted device can locate a phone to within 10 feet; another source claims that by triangulating flights, a dirtbox can pinpoint a phone's location to within two feet. The dirtbox is a hybrid of detection, managed access and jamming technologies. According to The Wall Street Journal, "people with knowledge of the program" can determine which phones belong to suspects and which to non-suspects, and "cell phones not of interest, such as those belonging to prison personnel or commercial users in the area, are returned to their local network." It can also selectively interrupt or prevent calls on certain phones, and has been used to block unauthorized phone use by prison inmates. It can also retrieve data from phones. According to Boeing, its technology is "unobtrusive to legitimate wireless communications", and bypasses phone companies in its operations.
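The geometry behind such location estimates can be illustrated with a textbook multilateration calculation. A minimal sketch in Python (purely illustrative and unrelated to DRT's actual software; it assumes idealized range measurements from three known measurement positions, whereas a real system must cope with noisy timing and signal-strength data):

import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares 2-D position from ranges to known points.

    Linearizes the circle equations |p - a_i|^2 = r_i^2 by subtracting
    the first equation from the others, a standard textbook approach.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + (anchors[1:] ** 2).sum(axis=1) - (anchors[0] ** 2).sum())
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Three hypothetical measurement positions (meters) and their measured
# ranges to the same handset:
anchors = np.array([(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)])
true_position = np.array([400.0, 250.0])
ranges = np.linalg.norm(anchors - true_position, axis=1)
print(multilaterate(anchors, ranges))  # approximately [400. 250.]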
Agency use Law enforcement As of 2014, the U.S. Marshals Service Technical Operations Group has used the device, fixed on crewed airplanes, to track fugitives, and has said it can deploy it on "targets requested by other parts of the Justice Program". The devices are operated out of at least five U.S. airports, "covering most of the U.S. population". It is unclear whether the U.S. Marshals Service requests court orders to use the devices. The Marshals Service has used dirtboxes in the Mexican Drug War, tracking fugitives in coordination with Mexico's Naval Infantry Force, and has flown missions in Guatemala. Dirtboxes are used by the United States Special Operations Command, the Drug Enforcement Administration, the FBI and U.S. Customs and Border Protection. According to procurement documents, the U.S. Navy bought dirtboxes to mount on drones at Naval Air Weapons Station China Lake, its research and development facility in Southern California. The Pentagon's Washington Headquarters Services bought dirtboxes in 2011. The Chicago Police Department bought dirtboxes to eavesdrop on demonstrators during the 2012 NATO summit, and used them during the 2014 Black Lives Matter demonstrations. In 2015, it became known that the Los Angeles Police Department had purchased the devices. Signals intelligence Based on references to "DRTBox" in the NSA's Boundless Informant screenshots leaked by Edward Snowden, dirtboxes are probably used by the NSA. In 2013, the French newspaper Le Monde wrote, "Thanks to DRTBOX, 62.5 million phone data were collected in France". The United States Naval Special Warfare Development Group's Group One bought a Digital Receiver Technology 1301B System on April 2, 2007 for over $25,000, according to the United States government procurement web site. U.S. regulation The National Telecommunications and Information Administration (NTIA) has known of dirtboxes since at least 2010. In 2014, the United States Department of Justice refused to confirm or deny that government agencies used them, but an official said, "It would be utterly false to conflate the law-enforcement program with the collection of bulk telephone records by the National Security Agency". The Federal Communications Commission, responsible for licensing and regulating cell-service providers, was not aware of dirtbox activity prior to The Wall Street Journal exposé. In January 2015, the US Senate Judiciary Committee asked the Department of Justice and the Department of Homeland Security which law enforcement agencies used DRTboxes, and to specify the legal processes and policies that existed to protect the privacy of those whose information was collected. Criticism In 2014, privacy advocates, including U.S. Rep. Alan Grayson (D-Florida), criticized dirtbox use as a violation of the Fourth Amendment to the United States Constitution. Brian Owsley, a law professor at Indiana Institute of Technology and a former United States magistrate, said in 2014 that to use the devices legally, "I think the government would need to obtain a search warrant based on probable cause consistent with the Fourth Amendment". The Guardian quoted Michael German, a professor at New York University Law School and former FBI agent, as saying: "The overriding problem is the excessive secrecy that hides the government's ever-expanding surveillance programs from public accountability." In November 2014, Senator Edward Markey (D-Massachusetts) and Senator Al Franken (D-Minnesota) warned that Americans' privacy rights must be assured.
See also Cellphone surveillance Signals intelligence Stingray phone tracker References External links Law enforcement equipment Mobile security Surveillance Telephone tapping Telephony equipment Telecommunications equipment
Dirtbox (cell phone)
[ "Technology", "Engineering" ]
1,418
[ "Mobile security", "Cybersecurity engineering" ]
44,418,559
https://en.wikipedia.org/wiki/DNA%20polymerase%20epsilon
DNA polymerase epsilon is a member of the DNA polymerase family of enzymes found in eukaryotes. It is composed of the following four subunits: POLE (central catalytic unit), POLE2 (subunit 2), POLE3 (subunit 3), and POLE4 (subunit 4). Recent evidence suggests that it plays a major role in leading-strand DNA synthesis and in nucleotide and base excision repair. Research has been conducted to study nucleotide excision repair DNA synthesis by DNA polymerase epsilon in the presence of PCNA (proliferating cell nuclear antigen), RFC (replication factor C) and RPA (replication protein A). Either DNA polymerase epsilon or DNA polymerase delta, along with DNA ligase, can be used to repair UV-damaged DNA. However, it was found that DNA polymerase delta requires the presence of both RFC and PCNA for DNA repair. In addition, it produces only a small amount of fractionated, ligated DNA products. DNA polymerase epsilon proves to be better suited for nucleotide excision repair: it is independent of both PCNA and RFC, and produces mostly ligated DNA products. There is one condition under which DNA polymerase epsilon requires PCNA and RFC: nucleotide excision repair in the presence of the single-strand binding protein RPA. In that case, PCNA and RFC function as an anchor and direct DNA polymerase epsilon onto the DNA template. References Polymerase chain reaction DNA replication DNA repair DNA-binding proteins
DNA polymerase epsilon
[ "Chemistry", "Biology" ]
306
[ "Biochemistry methods", "Genetics techniques", "DNA repair", "Polymerase chain reaction", "DNA replication", "Molecular genetics", "Cellular processes" ]
44,419,868
https://en.wikipedia.org/wiki/Single-cell%20variability
In cell biology, single-cell variability occurs when individual cells in an otherwise similar population differ in shape, size, position in the cell cycle, or molecular-level characteristics. Such differences can be detected using modern single-cell analysis techniques. Investigation of variability within a population of cells contributes to the understanding of developmental and pathological processes. Single-cell analysis A sample of cells may appear similar, but the cells can vary in their individual characteristics, such as shape and size, mRNA expression levels, genome, or individual counts of metabolites. In the past, the only methods available for investigating such properties required a population of cells and provided an estimate of the characteristic of interest, averaged over the population, which could obscure important differences among the cells. Single-cell analysis allows scientists to study the properties of a single cell of interest with high accuracy, revealing individual differences among populations and offering new insights into molecular biology. These individual differences are important in fields such as developmental biology, where individual cells can take on different "fates" - become specialized cells such as neurons or organ tissue - during the growth of an embryo; in cancer research, where individual malignant cells can vary in their response to therapy; or in infectious disease, where only a subset of cells in a population become infected by a pathogen. Population-level views of cells can offer a distorted view of the data by averaging out the properties of distinct subsets of cells. For example, if half the cells of a particular group are expressing high levels of a given gene, and the rest are expressing low levels, results from a population-wide analysis may appear as if all cells are expressing a medium level of the given gene (a small simulation of this pitfall is sketched below). Thus, single-cell analysis allows researchers to study biological processes in finer detail and answer questions that could not have been addressed otherwise. Types of variation Variation in gene expression Cells with identical genomes may vary in the expression of their genes due to differences in their specialized function in the body, their timepoint in the cell cycle, their environment, and also noise and stochastic factors. Thus, accurate measurement of gene expression in individual cells allows researchers to better understand these critical aspects of cellular biology. For example, early study of gene expression in individual cells in fruit fly embryos allowed scientists to discover regularized patterns or gradients of specific gene transcription during different stages of growth, allowing for a more detailed understanding of development at the level of location and time. Another phenomenon in gene expression which could only be identified at the single-cell level is oscillatory gene expression, in which a gene is expressed on and off periodically. Single-cell gene expression is typically assayed using RNA-seq. After the cell has been isolated, the RNA-seq protocol typically consists of three steps: the RNA is reverse transcribed into cDNA, the cDNA is amplified to make more material available for the sequencer, and the cDNA is sequenced.
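As referenced above, a small simulation makes the averaging pitfall concrete. A minimal sketch in Python (hypothetical Poisson-distributed expression counts; the point is only that the population mean suggests a "medium" level that almost no individual cell actually exhibits):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: half the cells express a gene highly,
# half express it at a low level (transcript counts per cell).
cells = np.concatenate([rng.poisson(100, size=500),
                        rng.poisson(5, size=500)])

# A bulk, population-averaged measurement reports only the mean ...
print("bulk-style mean:", cells.mean())              # ~52: looks 'medium'

# ... while single-cell data show that almost no cell is 'medium':
print("cells between 30 and 70 counts:",
      np.sum((cells > 30) & (cells < 70)))           # close to zero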
Within a single human, individual cells typically have identical genomes, though there are interesting exceptions, such as B-cells, which have variation in their DNA enabling them to generate different antibodies to bind to the variety of pathogens that can attack the body. Measuring the differences and the rate of change in DNA content at the single-cell level can help scientists better understand how pathogens develop antibiotic resistance, why the immune system often cannot produce antibodies for rapidly mutating viruses like HIV, and other important phenomena. Many technologies exist for sequencing genomes, but they are designed to use DNA from a population of cells rather than a single cell. The primary challenge for single-cell genome sequencing is to make multiple copies of (amplify) the DNA so that there is enough material available for the sequencer, a process called whole genome amplification (WGA). Typical methods for WGA consist of: (1) Multiple Displacement Amplification (MDA), in which multiple primers anneal to the DNA and strand-displacing polymerases copy it, peeling previously synthesized strands off the template so they can be primed and copied in turn before processing by the sequencer, (2) PCR-based methods, or (3) some combination of both. Variation in metabolomic properties Cells vary in the metabolites they contain, which are the intermediary compounds and end products of complex biochemical reactions that sustain the cell. Genetically identical cells in different conditions and environments can use different metabolic pathways to sustain themselves. By measuring the metabolites present, scientists can deduce the metabolic pathways used and infer useful information about the state of the cell. An example of this is found in the immune system, where CD4+ cells can differentiate into Th17 or TReg cells (among other possibilities), both of which direct the immune system's response in different ways. Th17 cells stimulate a strong inflammatory response, whereas TReg cells suppress it. The former tend to rely much more on glycolysis, due to their increased energy demands. In order to profile the metabolic content of a cell, researchers must identify the cell of interest in the larger population, isolate it for analysis, quickly inhibit enzymes and halt the metabolic processes in the cell, and then use techniques such as NMR, mass spectrometry, microfluidics, and other methods to analyze the contents of the cell. Variation in proteome Similar to variation in the metabolome, the proteins present in a cell and their abundances can vary from cell to cell in an otherwise similar population. While transcription and translation determine the amount and variety of proteins produced, these processes are imprecise, and cells have a number of mechanisms which can change or degrade proteins, allowing for variance in the proteome that may not be accounted for by variance in gene expression. Also, proteins have many other important features besides simply being present or absent, such as whether they have undergone posttranslational modifications such as phosphorylation, or are bound to molecules of interest. The variation in abundance and characteristics of proteins has implications for fields such as cancer research and cancer therapy, where a drug targeting a particular protein may vary in its impact due to variability in the proteome, or vary in efficacy due to the broader biological phenomenon of tumor heterogeneity. 
Cytometry, surface methods, and microfluidics technologies are the three classes of tools commonly used to profile the proteomes of individual cells. Cytometry allows researchers to isolate cells of interest, and stain 15–30 proteins to measure their location and/or relative abundance. Image cycling techniques have been developed to measure multi-target abundance and distribution in biopsy samples and tissues. In these methods, 3–4 targets are stained with fluorescently labeled antibodies, imaged, and then stripped of their fluorophores by a variety of means, including oxidation-based chemistries or more recently antibody-DNA conjugation methods, allowing additional targets to be stained in follow-on cycles; in some methods up to 60 individual targets have been visualized. For surface methods, researchers place a single cell on a surface coated with antibodies, which then bind to proteins secreted by the cell and allow them to be measured. Microfluidics methods for proteome analysis immobilize single cells on a microchip and use staining or antibody binding to measure the proteins of interest. Variation in cell size and morphology Cells in an otherwise similar population can vary in their size and morphology due to differences in function, changes in metabolism, or simply because they are in different phases of the cell cycle, among other factors. For example, stem cells can divide asymmetrically, which means the two resultant daughter cells may have different fates (specialized functions), and can differ from each other in size or shape. Researchers who study development may be interested in tracking the physical characteristics of the individual progeny in a growing population in order to understand how stem cells differentiate into a complex tissue or organism over time. Microscopy can be used to analyze cell size and morphology by obtaining high-quality images over time. These pictures will typically contain a population of cells, but algorithms can be applied to identify and track individual cells across multiple images. The algorithms must be able to process gigabytes of data to remove noise and summarize the relevant characteristics for the given research question. Variation in cell cycle Individual cells in a population will often be at different points in the cell cycle. Scientists who wish to understand characteristics of the cell at a particular point in the cycle would have difficulty using population-level estimates, since they would average measurements from cells at different stages. Understanding the cell cycle in individual diseased cells, like those in a tumor, is also important, since they often have a very different cycle than healthy cells. Single-cell analysis of characteristics of the cell cycle allows scientists to understand these properties in greater detail. Variability in cell cycle can be studied using several of the methods previously described. For example, cells in G2 will be quite large in size (as they are just at the point where they are about to divide in two), and can be identified using protocols for cell size and shape. Cells in S phase copy their genomes, and could be identified using protocols for staining DNA and measuring its content by flow cytometry or quantitative fluorescence microscopy, or by using probes for genes expressed highly at specific phases of the cell cycle. References Cell biology
Single-cell variability
[ "Biology" ]
1,945
[ "Cell biology" ]
44,419,959
https://en.wikipedia.org/wiki/Penny%20Crane%20Award%20for%20Distinguished%20Service
The Penny Crane Award for Distinguished Service is an award issued by the Association for Computing Machinery's Special Interest Group on University and College Computing Services. It was established in 2000 to recognise individuals who have made significant contributions to the Special Interest Group, and to computing in higher education. Recipients Source: ACM 2000 – Jane Caviness 2001 – John H. (Jack) Esbin 2002 – John Bucher 2003 – Russell Vaught 2004 – Linda Hutchison 2005 – J. Michael Yohe 2006 – Jennifer Fajman 2007 – Dennis Mar 2008 – Jerry Smith 2009 – Robert Paterson 2010 – Lida Larsen 2011 – Leila Lyons 2012 – no recipient 2013 – Terris Wolff 2014 – Cynthia Dooling 2015 – Bob Haring-Smith 2016 – Phil Isensee 2017 – Tim Foley 2018 – Nancy Bauer 2019 – Kelly Wainwright 2022 – Melissa Bauer 2023 – Beth Rugg See also See Qualifications and Nominations page, at the ACM SIGUCCS Web Page. Penny Crane Award Web Page at ACM/SIGUCCS Penny Crane memory book List of computer science awards References Awards of the Association for Computing Machinery Awards established in 2000 Computer science awards Distinguished service awards Education awards
Penny Crane Award for Distinguished Service
[ "Technology" ]
239
[ "Science and technology awards", "Computer science", "Computer science awards" ]
44,420,375
https://en.wikipedia.org/wiki/ACM%20SIGUCCS%20Hall%20of%20Fame%20Award
The Association for Computing Machinery Special Interest Group on University and College Computing Services Hall of Fame Award was established by the Association for Computing Machinery to recognize individuals whose specific contributions have had a positive impact on the organization and therefore on the professional careers of the members and their institutions. Recipients 2000 Alicia Ewing Towster 2000 Frank A Thomas 2000 John E Skelton 2000 Gordon R Sherman 2000 Rita Seplowitz Saltz 2000 Robert W Lutz 2000 Ralph E Lee 2000 William Heinbecker 2000 Jane Shearin Caviness 2000 Jean Bonney 2001 James R Wruck 2001 Barbara Wolfe 2001 Lawrence W Westermeyer 2001 Russel S Vaught 2001 Jerry Niebaum 2001 James L Moss 2001 Polley Ann McClure 2001 Elizabeth R Little 2001 Priscilla Jane Huston 2001 John H "Jack" Esbin 2002 Terris Wolff 2002 Lois Secrist 2002 Jerry Martin 2002 Carl Malstrom 2002 Geraldine MacDonald 2002 Sheri Prupis 2002 Larry Pickett 2002 Diane Jung 2002 Fred Harris 2002 John Bucher 2003 Michael Yohe 2003 Vincent H Swoyer 2003 Beth Ruffo 2003 Dennis Mar 2003 Leila C Lyons 2003 Linda Hutchison 2003 Tex Hull 2003 Patrick J Gossman 2004 Stan Yagi 2004 Alan Herbert 2004 Susan Nycum 2004 Greydon D Freeman 2004 Lida Larsen 2004 M Lloyd Edwards 2004 Thea Drell Hodge 2004 Linda Downing 2005 Mervin E Muller 2005 Glen R Ingram 2005 Jennifer Fajman 2005 Jim Bostick 2005 Kay K Beach 2006 Leland H Williams 2006 Chris Jones 2006 Marion F Taylor 2006 John W Hamblen 2006 Glenda E Moum 2006 Jayne Ashworth 2007 Shiree Moreland 2007 Phil Isensee 2007 Kathy Mayberry 2007 Bonnie Hites 2007 Jeanne Kellogg 2007 Susan Hales 2008 Jerry Smith 2008 Robert Paterson 2008 John Lateulere 2008 Jack McCredie 2009 Glenn Ricart 2009 Lynnell Lacy 2009 Teresa Lockard 2009 Jim Kerlin 2009 Nancy Bauer 2010 Jennifer "Jen" Whiting 2010 Ann Amsler 2011 Richard Nelson 2011 Alex Nagorski 2011 Timothy Foley 2012 No recipients 2013 Jim Yucha 2013 Christine Vucinich 2013 Leila Shahbender 2013 Cindy Sanders 2013 Carol Rhodes 2013 Patti Mitch 2013 Greg Hanek 2013 Gale Fritsche 2014 Elizabeth Wagnon 2014 Robert Haring-Smith 2014 Parrish Nnambi 2014 Karen McRitchie 2015 Jacquelynn Hongosh 2016 Debbie Fisher 2016 Naomi Fujimura 2016 Takashi Yamanoue 2017 Melissa Bauer 2017 Allan Chen 2017 Beth Rugg 2017 Kelly Wainwright 2018 Miranda Carney-Morris 2018 Trevor Murphy 2018 Mo Nishiyama 2018 Gail Rankin 2019 No recipients 2020 Chester Andrews 2020 Mat Felthousen 2020 Dan Herrick 2020 Chris King 2020 Becky Lineberry See also List of computer science awards References Awards of the Association for Computing Machinery Awards established in 2000 Halls of fame in New York (state) Computer science awards
ACM SIGUCCS Hall of Fame Award
[ "Technology" ]
563
[ "Science and technology awards", "Computer science", "Computer science awards" ]
44,422,772
https://en.wikipedia.org/wiki/History%20of%20radio%20receivers
Radio waves were first identified in German physicist Heinrich Hertz's 1887 series of experiments to prove James Clerk Maxwell's electromagnetic theory. Hertz used spark-excited dipole antennas to generate the waves and micrometer spark gaps attached to dipole and loop antennas to detect them. These precursor radio receivers were primitive devices, more accurately described as radio wave "sensors" or "detectors", as they could only receive radio waves within about 100 feet of the transmitter, and were not used for communication but instead as laboratory instruments in scientific experiments and engineering demonstrations. Spark era The first radio transmitters, used during the initial three decades of radio from 1887 to 1917, a period called the spark era, were spark gap transmitters which generated radio waves by discharging a capacitance through an electric spark. Each spark produced a transient pulse of radio waves which decreased rapidly to zero. These damped waves could not be modulated to carry sound, as in modern AM and FM transmission. So spark transmitters could not transmit sound, and instead transmitted information by radiotelegraphy. The transmitter was switched on and off rapidly by the operator using a telegraph key, creating different length pulses of damped radio waves ("dots" and "dashes") to spell out text messages in Morse code. Therefore, the first radio receivers did not have to extract an audio signal from the radio wave like modern receivers, but just detected the presence of the radio signal, and produced a sound during the "dots" and "dashes". The device which did this was called a "detector". Since there were no amplifying devices at this time, the sensitivity of the receiver mostly depended on the detector. Many different detector devices were tried. Radio receivers during the spark era consisted of these parts: An antenna, to intercept the radio waves and convert them to tiny radio frequency electric currents. A tuned circuit, consisting of a capacitor connected to a coil of wire, which acted as a bandpass filter to select the desired signal out of all the signals picked up by the antenna. Either the capacitor or coil was adjustable to tune the receiver to the frequency of different transmitters. The earliest receivers, before 1897, did not have tuned circuits, they responded to all radio signals picked up by their antennas, so they had little frequency-discriminating ability and received any transmitter in their vicinity. Most receivers used a pair of tuned circuits with their coils magnetically coupled, called a resonant transformer (oscillation transformer) or "loose coupler". A detector, which produced a pulse of DC current for each damped wave received. An indicating device such as an earphone, which converted the pulses of current into sound waves. The first receivers used an electric bell instead. Later receivers in commercial wireless systems used a Morse siphon recorder, which consisted of an ink pen mounted on a needle swung by an electromagnet (a galvanometer) which drew a line on a moving paper tape. Each string of damped waves constituting a Morse "dot" or "dash" caused the needle to swing over, creating a displacement of the line, which could be read off the tape. With such an automated receiver a radio operator did not have to continuously monitor the receiver. 
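A small numerical sketch of the damped-wave pulses just described; the carrier frequency, decay time, and spark rate used here are invented purely for illustration:

import math

carrier_hz = 500_000.0   # oscillation frequency of each damped wave (illustrative)
decay_s = 20e-6          # time constant of the exponential decay (illustrative)
spark_rate_hz = 1_000.0  # sparks per second; heard as a ~1 kHz tone in the earphone

def spark_signal(t: float) -> float:
    """One damped wave per spark: an exponentially decaying sinusoid,
    restarted at every spark and essentially zero between pulses."""
    t_in_pulse = t % (1.0 / spark_rate_hz)      # time since the last spark
    envelope = math.exp(-t_in_pulse / decay_s)  # amplitude decays rapidly to ~zero
    return envelope * math.sin(2 * math.pi * carrier_hz * t_in_pulse)

# Sample a few points inside one pulse: the amplitude dies away quickly,
# which is why damped waves could not be modulated to carry sound.
for t in (0e-6, 10e-6, 40e-6, 100e-6):
    print(f"t = {t*1e6:5.0f} us  amplitude envelope = {math.exp(-t/decay_s):.3f}")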
The signal from the spark gap transmitter consisted of damped waves repeated at an audio frequency rate, from 120 to perhaps 4000 per second, so in the earphone the signal sounded like a musical tone or buzz, and the Morse code "dots" and "dashes" sounded like beeps. The first person to use radio waves for communication was Guglielmo Marconi. Marconi invented little himself, but he was first to believe that radio could be a practical communication medium, and singlehandedly developed the first wireless telegraphy systems, transmitters and receivers, beginning in 1894–5, mainly by improving technology invented by others. Oliver Lodge and Alexander Popov were also experimenting with similar radio wave receiving apparatus at the same time in 1894–5, but they are not known to have transmitted Morse code during this period, just strings of random pulses. Therefore, Marconi is usually given credit for building the first radio receivers. Coherer receiver The first radio receivers invented by Marconi, Oliver Lodge and Alexander Popov in 1894–5 used a primitive radio wave detector called a coherer, invented in 1890 by Edouard Branly and improved by Lodge and Marconi. The coherer was a glass tube with metal electrodes at each end, with loose metal powder between the electrodes. It initially had a high resistance. When a radio frequency voltage was applied to the electrodes, its resistance dropped and it conducted electricity. In the receiver the coherer was connected directly between the antenna and ground. In addition to the antenna, the coherer was connected in a DC circuit with a battery and relay. When the incoming radio wave reduced the resistance of the coherer, the current from the battery flowed through it, turning on the relay to ring a bell or make a mark on a paper tape in a siphon recorder. In order to restore the coherer to its previous nonconducting state to receive the next pulse of radio waves, it had to be tapped mechanically to disturb the metal particles. This was done by a "decoherer", a clapper which struck the tube, operated by an electromagnet powered by the relay. The coherer is an obscure antique device, and even today there is some uncertainty about the exact physical mechanism by which the various types worked. However it can be seen that it was essentially a bistable device, a radio-wave-operated switch, and so it did not have the ability to rectify the radio wave to demodulate the later amplitude modulated (AM) radio transmissions that carried sound. In a long series of experiments Marconi found that by using an elevated wire monopole antenna instead of Hertz's dipole antennas he could transmit longer distances, beyond the curve of the Earth, demonstrating that radio was not just a laboratory curiosity but a commercially viable communication method. This culminated in his historic transatlantic wireless transmission on December 12, 1901, from Poldhu, Cornwall to St. John's, Newfoundland, a distance of 3500 km (2200 miles), which was received by a coherer. However the usual range of coherer receivers even with the powerful transmitters of this era was limited to a few hundred miles. The coherer remained the dominant detector used in early radio receivers for about 10 years, until replaced by the crystal detector and electrolytic detector around 1907. In spite of much development work, it was a very crude unsatisfactory device. 
It was not very sensitive, and also responded to impulsive radio noise (RFI), such as nearby lights being switched on or off, as well as to the intended signal. Due to the cumbersome mechanical "tapping back" mechanism it was limited to a data rate of about 12-15 words per minute of Morse code, while a spark-gap transmitter could transmit Morse at up to 100 WPM with a paper tape machine. Other early detectors The coherer's poor performance motivated a great deal of research to find better radio wave detectors, and many were invented. Some strange devices were tried; researchers experimented with using frog legs and even a human brain from a cadaver as detectors. By the first years of the 20th century, experiments in using amplitude modulation (AM) to transmit sound by radio (radiotelephony) were being made. So a second goal of detector research was to find detectors that could demodulate an AM signal, extracting the audio (sound) signal from the radio carrier wave. It was found by trial and error that this could be done by a detector that exhibited "asymmetrical conduction"; a device that conducted current in one direction but not in the other. This rectified the alternating current radio signal, removing one side of the carrier cycles, leaving a pulsing DC current whose amplitude varied with the audio modulation signal. When applied to an earphone this would reproduce the transmitted sound. Below are the detectors that saw wide use before vacuum tubes took over around 1920. All except the magnetic detector could rectify and therefore receive AM signals: Magnetic detector - Developed by Guglielmo Marconi in 1902 from a method invented by Ernest Rutherford and used by the Marconi Co. until it adopted the Audion vacuum tube around 1912, this was a mechanical device consisting of an endless band of iron wires which passed between two pulleys turned by a windup mechanism. The iron wires passed through a coil of fine wire attached to the antenna, in a magnetic field created by two magnets. The hysteresis of the iron induced a pulse of current in a sensor coil each time a radio signal passed through the exciting coil. The magnetic detector was used on shipboard receivers due to its insensitivity to vibration. One was part of the wireless station of the RMS Titanic which was used to summon help during its famous 15 April 1912 sinking. Electrolytic detector ("liquid barretter") - Invented in 1903 by Reginald Fessenden, this consisted of a thin silver-plated platinum wire enclosed in a glass rod, with the tip making contact with the surface of a cup of nitric acid. The electrolytic action caused current to be conducted in only one direction. The detector was used until about 1910. Electrolytic detectors that Fessenden had installed on US Navy ships received the first AM radio broadcast on Christmas Eve, 1906, an evening of Christmas music transmitted by Fessenden using his new alternator transmitter. Thermionic diode (Fleming valve) - The first vacuum tube, invented in 1904 by John Ambrose Fleming, consisted of an evacuated glass bulb containing two electrodes: a cathode consisting of a hot wire filament similar to that in an incandescent light bulb, and a metal plate anode. Fleming, a consultant to Marconi, invented the valve as a more sensitive detector for transatlantic wireless reception. The filament was heated by a separate current through it and emitted electrons into the tube by thermionic emission, an effect which had been discovered by Thomas Edison. 
The radio signal was applied between the cathode and anode. When the anode was positive, a current of electrons flowed from the cathode to the anode, but when the anode was negative the electrons were repelled and no current flowed. The Fleming valve was used to a limited extent but was not popular because it was expensive, had limited filament life, and was not as sensitive as electrolytic or crystal detectors. Crystal detector (cat's whisker detector) - invented around 1904–1906 by Henry H. C. Dunwoody and Greenleaf Whittier Pickard, based on Karl Ferdinand Braun's 1874 discovery of "asymmetrical conduction" in crystals, these were the most successful and widely used detectors before the vacuum tube era and gave their name to the crystal radio receiver (below). One of the first semiconductor electronic devices, a crystal detector consisted of a pea-sized pebble of a crystalline semiconductor mineral such as galena (lead sulfide) whose surface was touched by a fine springy metal wire mounted on an adjustable arm. This functioned as a primitive diode which conducted electric current in only one direction. In addition to their use in crystal radios, carborundum crystal detectors were also used in some early vacuum tube radios because they were more sensitive than the vacuum tube grid-leak detector. During the vacuum tube era, the term "detector" changed from meaning a radio wave detector to meaning a demodulator, a device that could extract the audio modulation signal from a radio signal. That is its meaning today. Tuning "Tuning" means adjusting the frequency of the receiver to the frequency of the desired radio transmission. The first receivers had no tuned circuit; the detector was connected directly between the antenna and ground. Due to the lack of any frequency selective components besides the antenna, the bandwidth of the receiver was equal to the broad bandwidth of the antenna. This was acceptable and even necessary because the first Hertzian spark transmitters also lacked a tuned circuit. Due to the impulsive nature of the spark, the energy of the radio waves was spread over a very wide band of frequencies. To receive enough energy from this wideband signal the receiver had to have a wide bandwidth also. When more than one spark transmitter was radiating in a given area, their frequencies overlapped, so their signals interfered with each other, resulting in garbled reception. Some method was needed to allow the receiver to select which transmitter's signal to receive. Multiple wavelengths produced by a poorly tuned transmitter caused the signal to "dampen", or die down, greatly reducing the power and range of transmission. In 1892, William Crookes gave a lecture on radio in which he suggested using resonance to reduce the bandwidth of transmitters and receivers. Different transmitters could then be "tuned" to transmit on different frequencies so they did not interfere. The receiver would also have a resonant circuit (tuned circuit), and could receive a particular transmission by "tuning" its resonant circuit to the same frequency as the transmitter, analogously to tuning a musical instrument to resonance with another. This is the system used in all modern radio. Tuning was used in Hertz's original experiments, and practical application of tuning showed up in the early to mid 1890s in wireless systems not specifically designed for radio communication. 
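As an aside, the rectifying ("asymmetrical conduction") demodulation described above can be sketched numerically. This is a minimal simulation only; the carrier and audio frequencies are invented, an ideal one-way conductor stands in for the crystal detector or Fleming valve, and a crude moving average stands in for the earphone's smoothing:

import math

f_carrier = 100_000.0  # RF carrier frequency in Hz (illustrative)
f_audio = 1_000.0      # audio modulation frequency in Hz (illustrative)
fs = 1_000_000         # sample rate in Hz

# An AM signal: a carrier whose amplitude follows the audio waveform.
n_samples = 2_000
am = [(1.0 + 0.5 * math.sin(2 * math.pi * f_audio * n / fs))
      * math.sin(2 * math.pi * f_carrier * n / fs)
      for n in range(n_samples)]

# "Asymmetrical conduction": keep only one polarity of the carrier cycles,
# as a diode-like detector would (half-wave rectification).
rectified = [max(0.0, x) for x in am]

# Averaging over one carrier cycle smooths the rectified pulses, leaving a
# pulsing DC signal that rises and falls with the original audio envelope.
cycle = fs // int(f_carrier)  # samples per carrier cycle (10 here)
audio_out = [sum(rectified[i:i + cycle]) / cycle
             for i in range(len(rectified) - cycle)]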
Nikola Tesla's March 1893 lecture demonstrating the wireless transmission of power for lighting (mainly by what he thought was ground conduction) included elements of tuning. The wireless lighting system consisted of a spark-excited grounded resonant transformer with a wire antenna which transmitted power across the room to another resonant transformer tuned to the frequency of the transmitter, which lighted a Geissler tube. Use of tuning in free space "Hertzian waves" (radio) was explained and demonstrated in Oliver Lodge's 1894 lectures on Hertz's work. At the time Lodge was demonstrating the physics and optical qualities of radio waves instead of attempting to build a communication system but he would go on to develop methods (patented in 1897) of tuning radio (what he called "syntony"), including using variable inductance to tune antennas. By 1897 the advantages of tuned systems had become clear, and Marconi and the other wireless researchers had incorporated tuned circuits, consisting of capacitors and inductors connected together, into their transmitters and receivers. The tuned circuit acted like an electrical analog of a tuning fork. It had a high impedance at its resonant frequency, but a low impedance at all other frequencies. Connected between the antenna and the detector it served as a bandpass filter, passing the signal of the desired station to the detector, but routing all other signals to ground. The frequency of the station received f was determined by the capacitance C and inductance L in the tuned circuit: f = 1/(2π√(LC)). Inductive coupling In order to reject radio noise and interference from other transmitters near in frequency to the desired station, the bandpass filter (tuned circuit) in the receiver has to have a narrow bandwidth, allowing only a narrow band of frequencies through. The form of bandpass filter that was used in the first receivers, which has continued to be used in receivers until recently, was the double-tuned inductively-coupled circuit, or resonant transformer (oscillation transformer or RF transformer). The antenna and ground were connected to a coil of wire, which was magnetically coupled to a second coil with a capacitor across it, which was connected to the detector. The RF alternating current from the antenna through the primary coil created a magnetic field which induced a current in the secondary coil which fed the detector. Both primary and secondary were tuned circuits; the primary coil resonated with the capacitance of the antenna, while the secondary coil resonated with the capacitor across it. Both were adjusted to the same resonant frequency. This circuit had two advantages. One was that by using the correct turns ratio, the impedance of the antenna could be matched to the impedance of the receiver, to transfer maximum RF power to the receiver. Impedance matching was important to achieve maximum receiving range in the unamplified receivers of this era. The coils usually had taps which could be selected by a multiposition switch. The second advantage was that due to "loose coupling" it had a much narrower bandwidth than a simple tuned circuit, and the bandwidth could be adjusted. Unlike in an ordinary transformer, the two coils were "loosely coupled"; separated physically so not all the magnetic field from the primary passed through the secondary, reducing the mutual inductance. This gave the coupled tuned circuits much "sharper" tuning, a narrower bandwidth than a single tuned circuit. 
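As a quick sketch of the resonance formula given above, the following computes the tuned frequency for some component values; the values themselves are invented, chosen only to land roughly in the AM broadcast band:

import math

def resonant_frequency(L_henries: float, C_farads: float) -> float:
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an ideal LC tuned circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henries * C_farads))

# Illustrative values: a 240 microhenry coil with a variable capacitor
# swept from 100 to 500 picofarads tunes roughly 460-1030 kHz.
L = 240e-6
for C in (100e-12, 250e-12, 500e-12):
    print(f"C = {C*1e12:.0f} pF -> f = {resonant_frequency(L, C)/1e3:.0f} kHz")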
In the "Navy type" loose coupler, widely used with crystal receivers, the smaller secondary coil was mounted on a rack which could be slid in or out of the primary coil, to vary the mutual inductance between the coils. When the operator encountered an interfering signal at a nearby frequency, the secondary could be slid further out of the primary, reducing the coupling, which narrowed the bandwidth, rejecting the interfering signal. A disadvantage was that all three adjustments in the loose coupler - primary tuning, secondary tuning, and coupling - were interactive; changing one changed the others. So tuning in a new station was a process of successive adjustments. Selectivity became more important as spark transmitters were replaced by continuous wave transmitters which transmitted on a narrow band of frequencies, and broadcasting led to a proliferation of closely spaced radio stations crowding the radio spectrum. Resonant transformers continued to be used as the bandpass filter in vacuum tube radios, and new forms such as the variometer were invented. Another advantage of the double-tuned transformer for AM reception was that when properly adjusted it had a "flat top" frequency response curve as opposed to the "peaked" response of a single tuned circuit. This allowed it to pass the sidebands of AM modulation on either side of the carrier with little distortion, unlike a single tuned circuit which attenuated the higher audio frequencies. Until recently the bandpass filters in the superheterodyne circuit used in all modern receivers were made with resonant transformers, called IF transformers. Patent disputes Marconi's initial radio system had relatively poor tuning, limiting its range and adding to interference. To overcome this drawback he developed a four circuit system with tuned coils in "syntony" at both the transmitters and receivers. His 1900 British #7,777 (four sevens) patent for tuning, filed in April 1900 and granted a year later, opened the door to patent disputes since it infringed on the Syntonic patents of Oliver Lodge, first filed in May 1897, as well as patents filed by Ferdinand Braun. Marconi was able to obtain patents in the UK and France, but the US version of his tuned four circuit patent, filed in November 1900, was initially rejected based on it being anticipated by Lodge's tuning system, and refiled versions were rejected because of the prior patents by Braun and Lodge. A further clarification and re-submission was rejected because it infringed on parts of two prior patents Tesla had obtained for his wireless power transmission system. Marconi's lawyers managed to get a resubmitted patent reconsidered by another examiner, who initially rejected it due to a pre-existing John Stone Stone tuning patent but finally approved it in June 1904 based on it having a unique system of variable inductance tuning that was different from Stone, who tuned by varying the length of the antenna. When Lodge's Syntonic patent was extended in 1911 for another 7 years, the Marconi Company agreed to settle that patent dispute, purchasing Lodge's radio company with its patent in 1912, giving them the priority patent they needed. Other patent disputes would crop up over the years, including a 1943 US Supreme Court ruling on the Marconi Company's ability to sue the US government over patent infringement during World War I. 
The Court rejected the Marconi Company's suit, saying they could not sue for patent infringement when their own patents did not seem to have priority over the patents of Lodge, Stone, and Tesla. Crystal radio receiver Although it was invented in 1904 in the wireless telegraphy era, the crystal radio receiver could also rectify AM transmissions and served as a bridge to the broadcast era. In addition to being the main type used in commercial stations during the wireless telegraphy era, it was the first receiver to be used widely by the public. During the first two decades of the 20th century, as radio stations began to transmit in AM voice (radiotelephony) instead of radiotelegraphy, radio listening became a popular hobby, and the crystal was the simplest, cheapest detector. The millions of people who purchased or built these inexpensive, reliable receivers created the mass listening audience for the first radio broadcasts, which began around 1920. By the late 1920s the crystal receiver was superseded by vacuum tube receivers and became commercially obsolete. However, it continued to be used by young people and the poor until World War II. Today these simple radio receivers are constructed by students as educational science projects. The crystal radio used a cat's whisker detector, invented by Henry H. C. Dunwoody and Greenleaf Whittier Pickard in 1904, to extract the audio from the radio frequency signal. It consisted of a mineral crystal, usually galena, which was lightly touched by a fine springy wire (the "cat whisker") on an adjustable arm. The resulting crude semiconductor junction functioned as a Schottky barrier diode, conducting in only one direction. Only particular sites on the crystal surface worked as detector junctions, and the junction could be disrupted by the slightest vibration. So a usable site was found by trial and error before each use; the operator would drag the cat's whisker across the crystal until the radio began functioning. Frederick Seitz, a later semiconductor researcher, wrote: Such variability, bordering on what seemed the mystical, plagued the early history of crystal detectors and caused many of the vacuum tube experts of a later generation to regard the art of crystal rectification as being close to disreputable. The crystal radio was unamplified and ran off the power of the radio waves received from the radio station, so it had to be listened to with earphones; it could not drive a loudspeaker. It required a long wire antenna, and its sensitivity depended on how large the antenna was. During the wireless era it was used in commercial and military longwave stations with huge antennas to receive long distance radiotelegraphy traffic, even including transatlantic traffic. However, when used to receive broadcast stations a typical home crystal set had a more limited range of about 25 miles. In sophisticated crystal radios the "loose coupler" inductively coupled tuned circuit was used to increase the Q. However, it still had poor selectivity compared to modern receivers. Heterodyne receiver and BFO Around 1905, continuous wave (CW) transmitters began to replace spark transmitters for radiotelegraphy because they had much greater range. The first continuous wave transmitters were the Poulsen arc invented in 1904 and the Alexanderson alternator developed 1906–1910, which were replaced by vacuum tube transmitters beginning around 1920. The continuous wave radiotelegraphy signals produced by these transmitters required a different method of reception. 
The radiotelegraphy signals produced by spark gap transmitters consisted of strings of damped waves repeating at an audio rate, so the "dots" and "dashes" of Morse code were audible as a tone or buzz in the receivers' earphones. However the new continuous wave radiotelegraph signals simply consisted of pulses of unmodulated carrier (sine waves). These were inaudible in the receiver headphones. To receive this new modulation type, the receiver had to produce some kind of tone during the pulses of carrier. The first crude device that did this was the tikker, invented in 1908 by Valdemar Poulsen. This was a vibrating interrupter with a capacitor at the tuner output which served as a rudimentary modulator, interrupting the carrier at an audio rate, thus producing a buzz in the earphone when the carrier was present. A similar device was the "tone wheel" invented by Rudolph Goldschmidt, a wheel spun by a motor with contacts spaced around its circumference, which made contact with a stationary brush. In 1901 Reginald Fessenden had invented a better means of accomplishing this. In his heterodyne receiver an unmodulated sine wave radio signal at a frequency fO offset from the incoming radio wave carrier fC was generated by a local oscillator and applied to a rectifying detector such as a crystal detector or electrolytic detector, along with the radio signal from the antenna. In the detector the two signals mixed, creating two new heterodyne (beat) frequencies at the sum fC + fO and the difference fC − fO between these frequencies. By choosing fO correctly the lower heterodyne fC − fO was in the audio frequency range, so it was audible as a tone in the earphone whenever the carrier was present. Thus the "dots" and "dashes" of Morse code were audible as musical "beeps". A major attraction of this method during this pre-amplification period was that the heterodyne receiver actually amplified the signal somewhat; the detector had "mixer gain". The receiver was ahead of its time, because when it was invented there was no oscillator capable of producing the radio frequency sine wave fO with the required stability. Fessenden first used his large radio frequency alternator, but this was not practical for ordinary receivers. The heterodyne receiver remained a laboratory curiosity until a cheap compact source of continuous waves appeared, the vacuum tube electronic oscillator invented by Edwin Armstrong and Alexander Meissner in 1913. After this it became the standard method of receiving CW radiotelegraphy. The heterodyne oscillator is the ancestor of the beat frequency oscillator (BFO) which is used to receive radiotelegraphy in communications receivers today. The heterodyne oscillator had to be retuned each time the receiver was tuned to a new station, but in modern superheterodyne receivers the BFO signal beats with the fixed intermediate frequency, so the beat frequency oscillator can run at a fixed frequency. Armstrong later used Fessenden's heterodyne principle in his superheterodyne receiver (below). Vacuum tube era The Audion (triode) vacuum tube invented by Lee De Forest in 1906 was the first practical amplifying device and revolutionized radio. Vacuum tube transmitters replaced spark transmitters and made possible four new types of modulation: continuous wave (CW) radiotelegraphy, amplitude modulation (AM) around 1915 which could carry audio (sound), frequency modulation (FM) around 1938 which had much improved audio quality, and single sideband (SSB). 
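A tiny numeric illustration of the heterodyne mixing described above; the carrier and local-oscillator frequencies are invented, and an ideal multiplier stands in for the nonlinear detector:

f_C = 100_000.0  # incoming CW carrier frequency in Hz (illustrative)
f_O = 99_000.0   # local oscillator frequency in Hz (illustrative)

# Multiplying two sinusoids produces components at the sum and the
# difference of their frequencies (product-to-sum identity):
#   sin(2*pi*f_C*t) * sin(2*pi*f_O*t)
#     = 0.5*cos(2*pi*(f_C - f_O)*t) - 0.5*cos(2*pi*(f_C + f_O)*t)
print("sum  component:", f_C + f_O, "Hz")  # 199000.0 Hz, inaudible, filtered out
print("beat component:", f_C - f_O, "Hz")  # 1000.0 Hz, heard as a 1 kHz beep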
The amplifying vacuum tube used energy from a battery or electrical outlet to increase the power of the radio signal, so vacuum tube receivers could be more sensitive and have a greater reception range than the previous unamplified receivers. The increased audio output power also allowed them to drive loudspeakers instead of earphones, permitting more than one person to listen. The first loudspeakers were produced around 1915. These changes caused radio listening to evolve explosively from a solitary hobby to a popular social and family pastime. The development of amplitude modulation (AM) and vacuum-tube transmitters during World War I, and the availability of cheap receiving tubes after the war, set the stage for the start of AM broadcasting, which sprang up spontaneously around 1920. The advent of radio broadcasting increased the market for radio receivers greatly, and transformed them into a consumer product. At the beginning of the 1920s the radio receiver was a forbidding high-tech device, with many cryptic knobs and controls requiring technical skill to operate, housed in an unattractive black metal box, with a tinny-sounding horn loudspeaker. By the 1930s, the broadcast receiver had become a piece of furniture, housed in an attractive wooden case, with standardized controls anyone could use, which occupied a respected place in the home living room. In the early radios the multiple tuned circuits required multiple knobs to be adjusted to tune in a new station. One of the most important ease-of-use innovations was "single knob tuning", achieved by linking the tuning capacitors together mechanically. The dynamic cone loudspeaker invented in 1924 greatly improved audio frequency response over the previous horn speakers, allowing music to be reproduced with good fidelity. Convenience features like large lighted dials, tone controls, pushbutton tuning, tuning indicators and automatic gain control (AGC) were added. The receiver market was divided into the above broadcast receivers and communications receivers, which were used for two-way radio communications such as shortwave radio. A vacuum-tube receiver required several power supplies at different voltages, which in early radios were supplied by separate batteries. By 1930 adequate rectifier tubes were developed, and the expensive batteries were replaced by a transformer power supply that worked off the house current. Vacuum tubes were bulky, expensive, had a limited lifetime, consumed a large amount of power and produced a lot of waste heat, so the number of tubes a receiver could economically have was a limiting factor. Therefore, a goal of tube receiver design was to get the most performance out of a limited number of tubes. The major radio receiver designs, listed below, were invented during the vacuum tube era. A defect in many early vacuum-tube receivers was that the amplifying stages could break into oscillation, acting as unintended oscillators and producing unwanted radio frequency alternating currents. These parasitic oscillations mixed with the carrier of the radio signal in the detector tube, producing audible beat notes (heterodynes); annoying whistles, moans, and howls in the speaker. The oscillations were caused by feedback in the amplifiers; one major feedback path was the capacitance between the plate and grid in early triodes. This was solved by the Neutrodyne circuit, and later the development of the tetrode and pentode around 1930. 
Edwin Armstrong is one of the most important figures in radio receiver history, and during this period invented technology which continues to dominate radio communication. He was the first to give a correct explanation of how De Forest's triode tube worked. He invented the feedback oscillator, regenerative receiver, the superregenerative receiver, the superheterodyne receiver, and modern frequency modulation (FM). The first vacuum-tube receivers The first amplifying vacuum tube, the Audion, a crude triode, was invented in 1906 by Lee De Forest as a more sensitive detector for radio receivers, by adding a third electrode to the thermionic diode detector, the Fleming valve. It was not widely used until its amplifying ability was recognized around 1912. The first tube receivers, invented by De Forest and built by hobbyists until the mid-1920s, used a single Audion which functioned as a grid-leak detector which both rectified and amplified the radio signal. There was uncertainty about the operating principle of the Audion until Edwin Armstrong explained both its amplifying and demodulating functions in a 1914 paper. The grid-leak detector circuit was also used in regenerative, TRF, and early superheterodyne receivers (below) until the 1930s. To give enough output power to drive a loudspeaker, 2 or 3 additional vacuum tube stages were needed for audio amplification. Many early hobbyists could only afford a single tube receiver, and listened to the radio with earphones, so early tube amplifiers and speakers were sold as add-ons. In addition to very low gain of about 5 and a short lifetime of about 30–100 hours, the primitive Audion had erratic characteristics because it was incompletely evacuated. De Forest believed that ionization of residual air was key to Audion operation. This made it a more sensitive detector but also caused its electrical characteristics to vary during use. As the tube heated up, gas released from the metal elements would change the pressure in the tube, changing the plate current and other characteristics, so it required periodic bias adjustments to keep it at the correct operating point. Each Audion stage usually had a rheostat to adjust the filament current, and often a potentiometer or multiposition switch to control the plate voltage. The filament rheostat was also used as a volume control. The many controls made multitube Audion receivers complicated to operate. By 1914, Harold Arnold at Western Electric and Irving Langmuir at GE realized that the residual gas was not necessary; the Audion could operate on electron conduction alone. They evacuated tubes to a lower pressure of 10⁻⁹ atm, producing the first "hard vacuum" triodes. These more stable tubes did not require bias adjustments, so radios had fewer controls and were easier to operate. During World War I civilian radio use was prohibited, but by 1920 large-scale production of vacuum tube radios began. The "soft" incompletely evacuated tubes were used as detectors through the 1920s, then became obsolete. Regenerative (autodyne) receiver The regenerative receiver, invented by Edwin Armstrong in 1913 when he was a 23-year-old college student, was used very widely until the late 1920s, particularly by hobbyists who could only afford a single-tube radio. Today transistor versions of the circuit are still used in a few inexpensive applications like walkie-talkies. 
In the regenerative receiver the gain (amplification) of a vacuum tube or transistor is increased by using regeneration (positive feedback); some of the energy from the tube's output circuit is fed back into the input circuit with a feedback loop. The early vacuum tubes had very low gain (around 5). Regeneration could not only increase the gain of the tube enormously, by a factor of 15,000 or more, it also increased the Q factor of the tuned circuit, decreasing (sharpening) the bandwidth of the receiver by the same factor, improving selectivity greatly. The receiver had a control to adjust the feedback. The tube also acted as a grid-leak detector to rectify the AM signal. Another advantage of the circuit was that the tube could be made to oscillate, and thus a single tube could serve as both a beat frequency oscillator and a detector, functioning as a heterodyne receiver to make CW radiotelegraphy transmissions audible. This mode was called an autodyne receiver. To receive radiotelegraphy, the feedback was increased until the tube oscillated, then the oscillation frequency was tuned to one side of the transmitted signal. The incoming radio carrier signal and local oscillation signal mixed in the tube and produced an audible heterodyne (beat) tone at the difference between the frequencies. A widely used design was the Armstrong circuit, in which a "tickler" coil in the plate circuit was coupled to the tuning coil in the grid circuit, to provide the feedback. The feedback was controlled by a variable resistor, or alternatively by moving the two windings physically closer together to increase loop gain, or apart to reduce it. This was done by an adjustable air core transformer called a variometer (variocoupler). Regenerative detectors were sometimes also used in TRF and superheterodyne receivers. One problem with the regenerative circuit was that when used with large amounts of regeneration the selectivity (Q) of the tuned circuit could be too sharp, attenuating the AM sidebands, thus distorting the audio modulation. This was usually the limiting factor on the amount of feedback that could be employed. A more serious drawback was that it could act as an inadvertent radio transmitter, producing interference (RFI) in nearby receivers. In AM reception, to get the most sensitivity the tube was operated very close to instability and could easily break into oscillation (and in CW reception did oscillate), and the resulting radio signal was radiated by its wire antenna. In nearby receivers, the regenerative receiver's signal would beat with the signal of the station being received in the detector, creating annoying heterodynes (beats), howls and whistles. Early regeneratives which oscillated easily were called "bloopers". One preventive measure was to use a stage of RF amplification before the regenerative detector, to isolate it from the antenna. But by the mid-1920s "regens" were no longer sold by the major radio manufacturers. Superregenerative receiver This was a receiver invented by Edwin Armstrong in 1922 which used regeneration in a more sophisticated way, to give greater gain. It was used in a few shortwave receivers in the 1930s, and is used today in a few cheap high frequency applications such as walkie-talkies and garage door openers. In the regenerative receiver the loop gain of the feedback loop was less than one, so the tube (or other amplifying device) did not oscillate but was close to oscillation, giving large gain. 
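A toy calculation of how positive feedback multiplies gain as the loop gain approaches one (and diverges at exactly one, the oscillation point the next paragraph picks up). The closed-loop formula is the standard one for positive feedback; the specific numbers are invented for illustration:

# Closed-loop gain with positive feedback: G = A / (1 - A*B),
# where A is the open-loop tube gain and B the feedback fraction.
# The boost over the bare tube is G/A = 1 / (1 - A*B), which
# explodes as the loop gain A*B approaches 1.

A = 5.0  # open-loop gain of an early triode, per the text above

for AB in (0.5, 0.9, 0.99, 0.99993):  # loop-gain values (illustrative)
    G = A / (1.0 - AB)
    print(f"loop gain {AB} -> closed-loop gain {G:,.0f} (a {G/A:,.0f}x boost)")

# The last case gives a boost of roughly 14,000x, on the order of the
# "factor of 15,000 or more" mentioned in the text.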
In the superregenerative receiver, the loop gain was made equal to one, so the amplifying device actually began to oscillate, but the oscillations were interrupted periodically. This allowed a single tube to produce gains of over 10⁶. TRF receiver The tuned radio frequency (TRF) receiver, invented in 1916 by Ernst Alexanderson, improved both sensitivity and selectivity by using several stages of amplification before the detector, each with a tuned circuit, all tuned to the frequency of the station. A major problem of early TRF receivers was that they were complicated to tune, because each resonant circuit had to be adjusted to the frequency of the station before the radio would work. In later TRF receivers the tuning capacitors were linked together mechanically ("ganged") on a common shaft so they could be adjusted with one knob, but in early receivers the frequencies of the tuned circuits could not be made to "track" well enough to allow this, and each tuned circuit had its own tuning knob. Therefore, the knobs had to be turned simultaneously. For this reason most TRF sets had no more than three tuned RF stages. A second problem was that the multiple radio frequency stages, all tuned to the same frequency, were prone to oscillate, and the parasitic oscillations mixed with the radio station's carrier in the detector, producing audible heterodynes (beat notes), whistles and moans, in the speaker. This was solved by the invention of the Neutrodyne circuit (below), the later development of the tetrode around 1930, and better shielding between stages. Today the TRF design is used in a few integrated (IC) receiver chips. From the standpoint of modern receivers the disadvantage of the TRF is that the gain and bandwidth of the tuned RF stages are not constant but vary as the receiver is tuned to different frequencies. Since the bandwidth of a filter with a given Q is proportional to the frequency, as the receiver is tuned to higher frequencies its bandwidth increases. Neutrodyne receiver The Neutrodyne receiver, invented in 1922 by Louis Hazeltine, was a TRF receiver with a "neutralizing" circuit added to each radio amplification stage to cancel the feedback to prevent the oscillations which caused the annoying whistles in the TRF. In the neutralizing circuit a capacitor fed a feedback current from the plate circuit to the grid circuit which was 180° out of phase with the feedback which caused the oscillation, canceling it. The Neutrodyne was popular until the advent of cheap tetrode tubes around 1930. Reflex receiver The reflex receiver, invented in 1914 by Wilhelm Schloemilch and Otto von Bronk, and rediscovered and extended to multiple tubes in 1917 by Marius Latour and William H. Priess, was a design used in some inexpensive radios of the 1920s which enjoyed a resurgence in small portable tube radios of the 1930s and again in a few of the first transistor radios in the 1950s. It is another example of an ingenious circuit invented to get the most out of a limited number of active devices. In the reflex receiver the RF signal from the tuned circuit is passed through one or more amplifying tubes or transistors, demodulated in a detector, then the resulting audio signal is passed again through the same amplifier stages for audio amplification. The separate radio and audio signals present simultaneously in the amplifier do not interfere with each other since they are at different frequencies, allowing the amplifying tubes to do "double duty". 
In addition to single tube reflex receivers, some TRF and superheterodyne receivers had several stages "reflexed". Reflex radios were prone to a defect called "play-through", which meant that the volume of audio did not go to zero when the volume control was turned down. Superheterodyne receiver The superheterodyne, invented in 1918 during World War I by Edwin Armstrong when he was in the Signal Corps, is the design used in almost all modern receivers, except in a few specialized applications. It is a more complicated design than the other receivers above, and when it was invented required 6–9 vacuum tubes, putting it beyond the budget of most consumers, so it was initially used mainly in commercial and military communication stations. However, by the 1930s the "superhet" had replaced all the other receiver types above. In the superheterodyne, the "heterodyne" technique invented by Reginald Fessenden is used to shift the frequency of the radio signal down to a lower "intermediate frequency" (IF), before it is processed. Its operation and advantages over the other radio designs in this section are described above in the superheterodyne design section. By the 1940s the superheterodyne AM broadcast receiver was refined into a cheap-to-manufacture design called the "All American Five", because it only used five vacuum tubes: usually a converter (mixer/local oscillator), an IF amplifier, a detector/audio amplifier, audio power amplifier, and a rectifier. This design was used for virtually all commercial radio receivers until the transistor replaced the vacuum tube in the 1970s. Semiconductor era The invention of the transistor in 1947 revolutionized radio technology, making truly portable receivers possible, beginning with transistor radios in the late 1950s. Although portable vacuum tube radios were made, tubes were bulky and inefficient, consuming large amounts of power and requiring several large batteries to produce the filament and plate voltage. Transistors did not require a heated filament, reducing power consumption, and were smaller and much less fragile than vacuum tubes. Portable radios Companies first began manufacturing radios advertised as portables shortly after the start of commercial broadcasting in the early 1920s. The vast majority of tube radios of the era used batteries and could be set up and operated anywhere, but most did not have features designed for portability such as handles and built-in speakers. Some of the earliest portable tube radios were the Winn "Portable Wireless Set No. 149" that appeared in 1920 and the Grebe Model KT-1 that followed a year later. Crystal sets such as the Westinghouse Aeriola Jr. and the RCA Radiola 1 were also advertised as portable radios. Thanks to miniaturized vacuum tubes first developed in 1940, smaller portable radios appeared on the market from manufacturers such as Zenith and General Electric. First introduced in 1942, Zenith's Trans-Oceanic line of portable radios was designed to provide entertainment broadcasts as well as being able to tune into weather, marine and international shortwave stations. By the 1950s, a "golden age" of tube portables included lunchbox-sized tube radios like the Emerson 560, which featured molded plastic cases. So-called "pocket portable" radios like the RCA BP10 had existed since the 1940s, but their actual size was compatible with only the largest of coat pockets. But some, like the Privat-ear and Dyna-mite pocket radios, were small enough to fit a pocket. 
The development of the bipolar junction transistor in the early 1950s resulted in it being licensed to a number of electronics companies, such as Texas Instruments, who produced a limited run of transistorized radios as a sales tool. The Regency TR-1, made by the Regency Division of I.D.E.A. (Industrial Development Engineering Associates) of Indianapolis, Indiana, was launched in 1954. The era of true, shirt-pocket sized portable radios followed, with manufacturers such as Sony, Zenith, RCA, DeWald, and Crosley offering various models. The Sony TR-63 released in 1957 was the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. Digital technology The development of integrated circuit (IC) chips in the 1970s created another revolution, allowing an entire radio receiver to be put on an IC chip. IC chips reversed the economics of radio design used with vacuum-tube receivers. Since the marginal cost of adding additional amplifying devices (transistors) to the chip was essentially zero, the size and cost of the receiver was dependent not on how many active components were used, but on the passive components; inductors and capacitors, which could not be integrated easily on the chip. The development of RF CMOS chips, pioneered by Asad Ali Abidi at UCLA during the 1980s and 1990s, allowed low power wireless devices to be made. The current trend in receivers is to use digital circuitry on the chip to do functions that were formerly done by analog circuits which require passive components. In a digital receiver the IF signal is sampled and digitized, and the bandpass filtering and detection functions are performed by digital signal processing (DSP) on the chip. Another benefit of DSP is that the properties of the receiver - channel frequency, bandwidth, gain, etc. - can be dynamically changed by software to react to changes in the environment; these systems are known as software-defined radios or cognitive radios. Many of the functions performed by analog electronics can be performed by software instead. The benefit is that software is not affected by temperature, physical variables, electronic noise and manufacturing defects. Digital signal processing permits signal processing techniques that would be cumbersome, costly, or otherwise infeasible with analog methods. A digital signal is essentially a stream or sequence of numbers that relay a message through some sort of medium such as a wire. DSP hardware can tailor the bandwidth of the receiver to current reception conditions and to the type of signal. A typical analog-only receiver may have a limited number of fixed bandwidths, or only one, but a DSP receiver may have 40 or more individually selectable filters. DSP is used in cell phone systems to reduce the data rate required to transmit voice. In digital radio broadcasting systems such as Digital Audio Broadcasting (DAB), the analog audio signal is digitized and compressed, typically using a modified discrete cosine transform (MDCT) audio coding format such as AAC+. "PC radios" - radios that are designed to be controlled by a standard PC - are operated by specialized PC software using a serial port connected to the radio. A "PC radio" may not have a front-panel at all, and may be designed exclusively for computer control, which reduces cost. Some PC radios have the great advantage of being field upgradable by the owner. New versions of the DSP firmware can be downloaded from the manufacturer's web site and uploaded into the flash memory of the radio. 
The manufacturer can then in effect add new features to the radio over time, such as new filters, DSP noise reduction, or simple bug fixes. A full-featured radio control program allows for scanning and a host of other functions and, in particular, integration of databases in real time, like a "TV-Guide"-type capability. This is particularly helpful in locating all transmissions on all frequencies of a particular broadcaster, at any given time. Some control software designers have even integrated Google Earth with the shortwave databases, so it is possible to "fly" to a given transmitter site location with a click of a mouse. In many cases the user is able to see the transmitting antennas where the signal is originating from. Since the graphical user interface to the radio has considerable flexibility, new features can be added by the software designer. Features that can be found in advanced control software programs today include a band table, GUI controls corresponding to traditional radio controls, a local time clock and a UTC clock, a signal strength meter, a database for shortwave listening with lookup capability, scanning capability, and a text-to-speech interface. The next level in integration is "software-defined radio", where all filtering, modulation and signal manipulation is done in software. This may be done by a PC sound card or by dedicated DSP hardware. There will be an RF front end to supply an intermediate frequency to the software-defined radio. These systems can provide additional capability over "hardware" receivers. For example, they can record large swaths of the radio spectrum to a hard drive for "playback" at a later date. The same SDR that one minute is demodulating a simple AM broadcast may also be able to decode an HDTV broadcast in the next. An open-source project called GNU Radio is dedicated to evolving a high-performance SDR. All-digital radio transmitters and receivers present the possibility of advancing the capabilities of radio. References Receiver (radio) Receivers
History of radio receivers
[ "Engineering" ]
10,307
[ "Radio electronics", "Receiver (radio)" ]
44,424,239
https://en.wikipedia.org/wiki/All%20in%20the%20Method
All in the Method is a British comedy web series produced, written by and starring Luke Kaile and Rich Keeble. The series is broadcast on the internet and premiered on 17 June 2012. So far, five episodes of season one have been made. The show is distributed on the web via YouTube. The series sees Rich and Luke as flat-sharing brothers who have both chosen the poorly paid profession of acting as their career path. Both of them believe in the 'method' form of acting, in which they stay in character twenty-four hours a day. This naturally results in the pair getting themselves into scenarios that their everyday lives would never have come close to brushing against were it not for their commitment to the acting cause. Both Keeble's and Kaile's experiences as actors and writers helped prepare them for producing All in the Method in many ways, despite their never having produced a web series before. The cast includes many people they have worked with on past projects, both in the theatre and in film, and many of the characters are inspired by those past experiences. All in the Method was screened at the Raindance Film Festival and won 'Best Guest Actor' (Peter Glover) at LA Web Fest 2014. References External links Official website All in the Method on YouTube 2012 web series debuts 2013 web series endings British comedy web series
All in the Method
[ "Technology" ]
273
[ "Computing stubs", "World Wide Web stubs" ]
44,424,894
https://en.wikipedia.org/wiki/Khurpa
A khurpa (alternatively called a khurpi) is a short-handled cutting tool, similar to a trowel, with a flat blade used for digging soil and weeding in small gardens or vegetable farms. It is commonly used on small farms to hoe or earth up weeds along ridges or rows of vegetables. It is traditionally used while in a squatting posture. The word khurpa comes from the Punjabi language. The khurpa is used in Punjab (as well as in other areas of India) for small-scale gardening tasks such as bed preparation, digging, tilling, and weeding. It is a traditional Indian weeding tool. References Gardening tools Mechanical hand tools Squatting position
Khurpa
[ "Physics" ]
148
[ "Mechanics", "Mechanical hand tools" ]
44,424,907
https://en.wikipedia.org/wiki/Process%20qualification
Process qualification is the qualification of manufacturing and production processes to confirm they are able to operate at a certain standard during sustained commercial manufacturing. Data covering critical process parameters must be recorded and analyzed to ensure critical quality attributes can be guaranteed throughout production. This may include testing equipment at maximum operating capacity to show quantity demands can be met. Once all processes have been qualified, the manufacturer should have a complete understanding of the process design and have a framework in place to routinely monitor operations. Only after process qualification has been completed can the manufacturing process begin production for commercial use. Equally important as qualifying processes and equipment is qualifying software and personnel. A well-trained staff and accurate, thorough records help ensure ongoing protection from process faults and quick recovery from otherwise costly process malfunctions. In many countries qualification measures are also required by regulation, especially in the pharmaceutical manufacturing field. Process qualification should cover the following aspects of manufacturing: the facility; utilities; equipment; personnel; end-to-end manufacturing; and control protocols and monitoring software. Process qualification is the second stage of process validation. A vital component of process qualification is the process performance qualification (PPQ) protocol, which is essential in defining and maintaining production standards within an organization. See also Installation qualification Design qualification Performance qualification Process validation References External links Drugregulations.org Formal methods Enterprise modelling Business process management
Process qualification
[ "Engineering" ]
257
[ "Software engineering", "Systems engineering", "Enterprise modelling", "Formal methods" ]
44,425,089
https://en.wikipedia.org/wiki/Decision%20Model%20and%20Notation
In business analysis, the Decision Model and Notation (DMN) is a standard published by the Object Management Group. It is a standard approach for describing and modeling repeatable decisions within organizations, intended to ensure that decision models are interchangeable across organizations. The DMN standard provides the industry with a modeling notation for decisions that supports decision management and business rules. The notation is designed to be readable by business and IT users alike. This enables various groups to effectively collaborate in defining a decision model: the business people who manage and monitor the decisions; the business analysts or functional analysts who document the initial decision requirements and specify the detailed decision models and decision logic; and the technical developers responsible for the automation of systems that make the decisions. The DMN standard can be effectively used standalone, but it is also complementary to the BPMN and CMMN standards. BPMN defines a special kind of activity, the Business Rule Task, which "provides a mechanism for the process to provide input to a business rule engine and to get the output of calculations that the business rule engine might provide"; this can be used to show where in a BPMN process a decision defined using DMN should be used. DMN has been made a standard for business analysis according to BABOK v3. Elements of the standard The standard includes the following main elements: Decision Requirements Diagrams, which show how the elements of decision-making are linked into a dependency network; decision tables, which represent how each decision in such a network can be made; business context for decisions, such as the roles of organizations or the impact on performance metrics; and a Friendly Enough Expression Language (FEEL), which can be used to evaluate expressions in a decision table and other logic formats. Use cases The standard identifies three main use cases for DMN: defining manual decision-making; specifying the requirements for automated decision-making; and representing a complete, executable model of decision-making. Benefits Using the DMN standard will improve business analysis and business process management: other popular requirements management techniques, such as BPMN and UML, do not handle decision-making; the growth of projects using business rule management systems (BRMS) allows faster changes; it facilitates better communication between business, IT and analytic roles in a company; it provides an effective requirements modeling approach for predictive analytics projects and fulfills the need for "business understanding" in methodologies for advanced analytics such as CRISP-DM; and it provides a standard notation for decision tables, the most common style of business rules in a BRMS. Relationship to BPMN DMN has been designed to work with BPMN. Business process models can be simplified by moving process logic into decision services. DMN is a separate domain within the OMG that provides an explicit way to connect to processes in BPMN. Decisions in DMN can be explicitly linked to processes and tasks that use the decisions. This integration of DMN and BPMN has been studied extensively. DMN expects that the logic of a decision will be deployed as a stateless, side-effect-free Decision Service. Such a service can be invoked from a business process, and the data in the process can be mapped to the inputs and outputs of the decision service. DMN BPMN example As mentioned, BPMN is a related OMG standard for process modeling.
DMN complements BPMN, providing a separation of concerns between the decision and the process. The example here describes a BPMN process and DMN DRD (Decision Requirements Diagram) for onboarding a bank customer. Several decisions are modeled, and these decisions direct the process's response. New bank account process In the BPMN process model shown in the figure, a customer makes a request to open a new bank account. The account application provides the account representative with all the information needed to create an account and provide the requested services. This includes the name, address and various forms of identification. In the next steps of the workflow, the 'Know Your Customer' (KYC) services are called. In the KYC services, the name and address are validated, followed by a check against the international criminal database (Interpol) and the database of 'politically exposed persons' (PEP). A PEP is a person entrusted with a prominent political position, or a close relative of such a person. Deposits from persons on the PEP list are potentially corrupt. This is shown as two services on the process model. Anti-money-laundering (AML) regulations require these checks before the customer account is certified. The results of these services, plus the forms of identification, are sent to the Certify New Account decision. This is shown as a 'rule' activity, verify account, on the process diagram. If the new customer passes certification, then the account is classified into onboarding for Business Retail, Retail, Wealth Management or High Value Business. Otherwise the customer application is declined. The Classify New Customer decision classifies the customer. If the verify-account process returns a result of 'Manual', then the PEP or the Interpol check returned a close match. The account representative must visually inspect the name and the application to determine if the match is valid, and accept or decline the application. Certify new account decision An account is certified for opening if the individual's address is verified, if valid identification is provided, and if the applicant is not on a list of criminals or politically exposed persons. These are shown as sub-decisions below the 'certify new account' decision. The account verification service provides a 100% match of the applicant's address. For identification to be valid, the customer must provide a driver's license, passport or government-issued ID. The checks against the PEP and Interpol lists are 'fuzzy' matches and return matching score values. Scores above 85 are considered a 'match', and scores between 65 and 85 require a 'manual' screening process. People who match either of these lists are rejected by the account application process. If there is a partial match with a score between 65 and 85 against the Interpol or PEP list, then the certification is set to manual and an account representative performs a manual verification of the applicant's data. These rules are reflected in the figure below, which presents the decision table for whether to pass the provided name for the list checks. Client category The client's onboarding process is driven by what category they fall in.
The category is decided by: the type of client (business or private); the size of the funds on deposit; and the estimated net worth. This decision is shown below: There are 6 business rules that determine the client's category, and these are shown in the decision table here: Summary example In this example, the outcome of the 'Verify Account' decision directed the responses of the new account process. The same is true for the 'Classify Customer' decision. By adding or changing the business rules in the tables, one can easily change the criteria for these decisions and control the process differently. Modeling is a critical aspect of improving an existing process or business challenge. Modeling is generally done by a team of business analysts, IT personnel, and modeling experts. The expressive modeling capabilities of BPMN allow business analysts to understand the functions of the activities of the process. Now, with the addition of DMN, business analysts can construct an understandable model of complex decisions. Combining BPMN and DMN yields a very powerful combination of models that work synergistically to simplify processes. Relationship to decision mining and process mining Automated discovery techniques that infer decision models from process execution data have been proposed as well. Here, a DMN decision model is derived from a data-enriched event log, along with the process that uses the decisions. In doing so, decision mining complements process mining with traditional data mining approaches. cDMN extension Constraint Decision Model and Notation (cDMN) is a formal notation for expressing knowledge in a tabular, intuitive format. It extends DMN with constraint reasoning and related concepts while aiming to retain the user-friendliness of the original. cDMN is also meant to express other problems besides business modeling, such as complex component design. It extends DMN in four ways: constraint modelling (see Constraint programming); expressive data representation, such as typed predicates and functions (similar to first-order logic); data tables, in which each entry represents a different problem instance; and quantification. Due to these additions, cDMN models can express more complex problems. Furthermore, they can also express some DMN models in more compact, less convoluted ways. Unlike DMN, cDMN is not deterministic, in the sense that a set of input values could have multiple different solutions. Indeed, where a DMN model always defines a single solution, a cDMN model defines a solution space. Usage of cDMN models can also be integrated into Business Process Model and Notation process models, just like DMN. Example As an example, consider the well-known map coloring or graph coloring problem. Here, we wish to color a map in such a way that no bordering countries share the same color. The constraint table shown in the figure (as denoted by its E* hit policy in the top-left corner) expresses this logic. It is read as "For each country c1 and country c2, it holds that if they are different countries which border each other, then the color of c1 is not the color of c2." Here, the first two columns introduce two quantifiers, both of type country, which serve as universal quantifiers. In the third column, the 2-ary predicate borders is used to express when two countries have a shared border. Finally, the last column uses the 1-ary function color of, which maps each country to a color.
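Although DMN decision tables are normally authored in a modeling tool and executed by a rules engine, the logic of a simple table can be sketched in ordinary code. The following Python fragment is an illustrative, hand-written rendering of the name-screening rules from the onboarding example above (scores above 85 count as a match, scores from 65 to 85 trigger manual review); the function name, signature, and outcome labels are assumptions for the demo, not output of any DMN tool.

# Illustrative sketch of the name-screening decision from the onboarding
# example. Thresholds follow the rules stated in the text; everything else
# (names, labels) is hypothetical and not part of the DMN standard.
def screening_decision(interpol_score, pep_score):
    """Return 'DECLINE', 'MANUAL' or 'PASS' for a pair of fuzzy-match scores."""
    worst = max(interpol_score, pep_score)  # the more suspicious score governs
    if worst > 85:
        return "DECLINE"   # treated as a confirmed match against a list
    if worst >= 65:
        return "MANUAL"    # close match: an account representative must review
    return "PASS"          # no meaningful match against either list

# Usage: three applicants with different screening scores.
for scores in [(20, 10), (70, 40), (90, 30)]:
    print(scores, "->", screening_decision(*scores))

Each branch corresponds to one row of the decision table; a real DMN engine would additionally enforce a hit policy and evaluate the conditions as FEEL expressions.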
References External links DMN specifications published by Object Management Group DMN Technology Capability Kit: Test platform for evaluating DMN standard conformance of DMN software products cDMN on readthedocs.io Enterprise modelling Diagrams Decision-making Rule engines Analytics Business analysis Modeling languages
Decision Model and Notation
[ "Engineering" ]
2,063
[ "Systems engineering", "Enterprise modelling" ]
44,425,759
https://en.wikipedia.org/wiki/Rapid%20antigen%20test
A rapid antigen test (RAT), sometimes called a rapid antigen detection test (RADT), antigen rapid test (ART), or loosely just a rapid test, is a rapid diagnostic test suitable for point-of-care testing that directly detects the presence or absence of an antigen. RATs are a type of lateral flow test detecting antigens, rather than antibodies (antibody tests) or nucleic acid (nucleic acid tests). Rapid tests generally give a result in 5 to 30 minutes, require minimal training or infrastructure, and have significant cost advantages. Rapid antigen tests for the detection of SARS-CoV-2, the virus that causes COVID-19, have been commonly used during the COVID-19 pandemic. For many years, an early and major class of RATs, the rapid strep tests for streptococci, was so often the referent when RATs or RADTs were mentioned that the two latter terms were often loosely treated as synonymous with those. Since the COVID-19 pandemic, awareness of RATs is no longer limited to health professionals and COVID-19 has become the expected referent, so more precise usage is required in other circumstances. RATs are based on the principle of antigen-antibody interaction. They detect antigens (generally a protein on the surface of a virus). A linear chromatography substrate (a porous piece of material) bears an indicator line, onto which antibodies directed against the target antigen are fixed. Antibodies are also fixed to a visualisation marker (generally a dye, though sometimes these antibodies are modified to fluoresce), and the sample is added to these. Any virus particles present will bind to these markers. This mix then travels through the substrate by capillary action. When it reaches the indicator line, virus particles are immobilised by the antibodies fixed there, along with the visualisation marker, allowing concentration and thus visual detection of significant levels of virus in a sample. A positive result with an antigen test should generally be confirmed by RT-qPCR or some other test with higher sensitivity and specificity. Uses Common examples of RATs or RADTs include: COVID-19 testing-related rapid tests Rapid strep tests (for streptococcal antigens) Rapid influenza diagnostic tests (RIDTs) (for influenza virus antigens) Malaria antigen detection tests (for Plasmodium antigens) COVID-19 rapid antigen tests Rapid antigen tests for COVID-19 are one of the most useful applications of these tests. Often called lateral flow tests, they have provided governments around the world with several benefits. They are quick to implement with minimal training, offer significant cost advantages, costing a fraction of existing forms of PCR testing, and give users a result within 5–30 minutes. Rapid antigen tests have found their best use as part of mass testing or population-wide screening approaches. They are successful in these approaches because, in addition to the aforementioned benefits, they identify individuals who are the most infectious and could potentially spread the virus to a large number of other people. This differs slightly from other forms of COVID-19 tests, such as PCR, that are generally seen to be useful tests for individuals. As early as February 2021, the US Department of State considered the antigen test suitable for entry to the country. In Canada, although the antigen test appeared to be no route to entry in January 2021, Health Canada in August 2021 made subsidized rapid antigen tests available at no cost "to more small and medium-sized organizations through new pharmacy partners".
Scientific basis and underlying biology RATs are immunochromatographic assays which give results that can be seen with the naked eye (with or without special illumination, such as a UV lamp). They are qualitative in nature, although within a certain range it is possible to make rough order of magnitude estimates of viral load from the results. RATs are generally screening tests, with relatively low sensitivity and specificity, thus results should be evaluated on the basis of confirmatory tests like PCR testing or western blot. One inherent advantage of an antigen test over an antibody test (such as antibody-detecting rapid HIV tests) is that it can take time for the immune system to develop antibodies after infection begins, but the foreign antigen is present right away. Although any diagnostic test may have false negatives, this latency period can open an especially wide avenue for false negatives in antibody tests, although the particulars depend on which disease and which test are involved. A rapid antigen test typically costs around US$5 to manufacture. References Medical tests Medical terminology Molecular biology Biotechnology Molecular biology techniques Chromatography
Rapid antigen test
[ "Chemistry", "Biology" ]
950
[ "Chromatography", "Separation processes", "Biotechnology", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry" ]
44,426,201
https://en.wikipedia.org/wiki/Virtual%20metrology
In semiconductor manufacturing, virtual metrology refers to methods to predict the properties of a wafer based on machine parameters and sensor data in the production equipment, without performing the (costly) physical measurement of the wafer properties. Statistical methods such as classification and regression are used to perform such a task. Depending on the accuracy of this virtual data, it can be used in modelling for other purposes, such as yield prediction and preventive analysis. This virtual data is helpful for modelling techniques that are adversely affected by missing data. Another option to handle missing data is to use imputation techniques on the dataset, but virtual metrology can, in many cases, be a more accurate method. Examples of virtual metrology include: the prediction of the silicon nitride (Si3N4) layer thickness in the chemical vapor deposition (CVD) process, using multivariate regression methods; the prediction of critical dimension in photolithography, using multi-level and regularization approaches; the prediction of layer width in etching. References Semiconductor device fabrication Industrial automation Lithography (microfabrication) Metrology
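As a rough sketch of the regression flavour of virtual metrology, the following Python example fits a multivariate linear model that predicts a deposited-layer thickness from in-situ equipment readings. All numbers, sensor names and coefficients here are synthetic assumptions for the demonstration; a production system would instead train on historical tool data paired with real metrology measurements.

# Synthetic sketch of virtual metrology as multivariate regression:
# predict a CVD layer thickness from logged machine parameters, skipping
# the physical measurement. Sensor names and coefficients are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_runs = 500

# Fictitious machine parameters logged for each process run.
temperature = rng.normal(650.0, 5.0, n_runs)   # deg C
pressure    = rng.normal(2.0, 0.1, n_runs)     # Torr
gas_flow    = rng.normal(100.0, 3.0, n_runs)   # sccm
X = np.column_stack([temperature, pressure, gas_flow])

# Assume thickness responds roughly linearly to the parameters, plus noise
# standing in for everything the sensors do not capture.
thickness = 0.8 * temperature + 40.0 * pressure + 1.5 * gas_flow \
            + rng.normal(0.0, 5.0, n_runs)

# Train on runs that *were* physically measured; predict the rest virtually.
model = LinearRegression().fit(X[:400], thickness[:400])
predicted = model.predict(X[400:])
rmse = np.sqrt(np.mean((predicted - thickness[400:]) ** 2))
print(f"virtual-metrology RMSE on held-out runs: {rmse:.2f} (arbitrary units)")

The held-out error gives a sense of whether the virtual measurements are accurate enough to feed downstream uses such as yield prediction, as discussed above.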
Virtual metrology
[ "Materials_science", "Engineering" ]
223
[ "Microtechnology", "Industrial engineering", "Automation", "Semiconductor device fabrication", "Nanotechnology", "Industrial automation", "Lithography (microfabrication)" ]
64,360,892
https://en.wikipedia.org/wiki/Raseef22
Raseef22 is a liberal Arabic media network founded in 2013 and based in Beirut, Lebanon. It publishes content in Arabic and English from different Arab states and describes itself as an independent media platform. International Media Support mentions Raseef22 along with HuffPost Arabic and Al Jazeera as one of the biggest pan-Arab online platforms. Name The Arabic word raseef means platform or pavement, and the number 22 refers to the number of states in the Arab League. History Kareem Sakka co-founded Raseef22 in the aftermath of the Arab Spring, which he cites as a source of inspiration. In an article in The Washington Post, he wrote that Raseef22 was created as a "digital space for those eager to know what was going on around them." Raseef22 was one of the 500 websites censored in Egypt in late 2017 after it published an article on Egyptian security agencies' attempts to influence the media. After the site was blocked in Egypt, it was targeted in a cyber attack that took it offline in locations around the world. Jamal Khashoggi wrote for Raseef22 regularly. One of his notable articles was "Notes on the Freedom of the Arabs from Oslo, Norway," published June 5, 2018. The site was blocked in Saudi Arabia in December 2018 when the Saudi Ministry of Communications and Information Technology ordered its censorship due to its "unprecedented response to the assassination of Jamal Khashoggi in Istanbul." This decision might have also been related to Raseef22's coverage of Saudi-Israeli relations and its interviews with activists later imprisoned or placed under house arrest. In 2019 the AJL in Paris gave Raseef22 a golden foreign press award for its six-month series of articles on gender and sexuality issues. Readership According to its publisher in 2019, the news agency counted 12 million readers annually from 22 Arab nations. Of the readership, he wrote that it "believes in the talent and promise of the Arab mind and sees the ugliness of tyranny, patriarchy, misogyny and the futility of proxy rulers and wars." Al-Quds Al-Arabi described Raseef22 as "oriented to the youth." References Digital media Arabic-language websites
Raseef22
[ "Technology" ]
469
[ "Multimedia", "Digital media" ]
64,362,373
https://en.wikipedia.org/wiki/89%20Leonis
89 Leonis is a single star in the equatorial constellation of Leo, the lion. It has a yellow-white hue and is faintly visible to the naked eye with an apparent visual magnitude of 5.70. Based upon parallax measurements, it is located at a distance of 88 light years from the Sun. The star has a high proper motion and is moving further away with a radial velocity of +4.8 km/s. It is a candidate member of the TW Hydrae stellar kinematic group. This is an F-type main-sequence star with a stellar classification of F5.5V. It is an estimated 1.13 billion years old and is spinning with a rotation period of 7.73 days. It shows evidence of a short-term activity cycle. The star has 1.3 times the mass of the Sun and 1.4 times the Sun's radius. It is radiating three times the luminosity of the Sun from its photosphere at an effective temperature of 6,461 K. References F-type main-sequence stars TW Hydrae association Leo (constellation) Durchmusterung objects Leonis, 89 100563 056445 4455
89 Leonis
[ "Astronomy" ]
250
[ "Leo (constellation)", "Constellations" ]
64,362,585
https://en.wikipedia.org/wiki/Ann%20and%20H.J.%20Smead%20Department%20of%20Aerospace%20Engineering%20Sciences
The Ann and H.J. Smead Department of Aerospace Engineering Sciences is a department within the College of Engineering & Applied Science at the University of Colorado Boulder, providing aerospace education and research. Housed primarily in the Aerospace Engineering Sciences building on the university's East Campus in Boulder, it awards baccalaureate, masters, and PhD degrees, as well as certificates, graduating approximately 225 students annually. The Ann and H.J. Smead Department of Aerospace Engineering Sciences is ranked 10th in the nation in both undergraduate and graduate aerospace engineering education among public universities by US News & World Report. History Aerospace engineering at the University of Colorado Boulder initially began as an option within the university's mechanical engineering program in 1930. In 1946, it was split off and became the Department of Aeronautical Engineering under the leadership of aerospace education pioneer Karl Dawson Wood, who served as its first chair. It was renamed the Department of Aerospace Engineering Sciences in 1963. Both the State of Colorado and the department grew as aerospace research centers during the space race. In 1948, the Laboratory for Atmospheric and Space Physics was founded on campus as the Upper Air Laboratory, followed a few years later by Ball Aerospace Corporation, which opened a research facility in Boulder that eventually became its headquarters, and Lockheed Martin Space Systems, which established a strategic plant in nearby southwest Denver in 1955. The later addition of numerous federal research labs to the Boulder landscape, including the National Institute of Standards and Technology (NIST), the National Oceanic and Atmospheric Administration, the National Center for Atmospheric Research, and, in Golden, the National Renewable Energy Laboratory, further expanded the area's standing as a research center. Today, Boulder and the surrounding Denver Metro are home to operations for large aerospace corporations and small startups. In 2017, the department was renamed the Ann and H.J. Smead Department of Aerospace Engineering Sciences in honor of former Kaiser Aerospace & Electronics Corp CEO Harold "Joe" Smead and his widow Ann Smead, in recognition of their significant contributions to the department. Later the same year, ground was broken on a 175,000 square-foot, $101 million aerospace building, which opened in 2019. The department now conducts a wide range of research across aeronautical and astronautical science and engineering, as well as in Earth and space sciences. Much of the department's research cuts across these focus areas, including astrodynamics, autonomous systems, bioastronautics, and remote sensing. Facilities Aerospace Mechanics Research Center - Dedicated to the development of next-generation aerospace structures and systems. Known for expertise in multiphysics modeling and optimization of structural systems. Autonomous Vehicle Systems lab - Researching spacecraft dynamics, formation flying, and orbital debris removal utilizing electrostatic force fields. Bioastronautics Laboratories - Low and high bay facilities housing a human centrifuge, Dream Chaser cabin mock-up, thermal vacuum chamber, and other human spacecraft mock-ups. BioServe Space Technologies – Originally founded through a NASA grant in 1987, designs, builds, and operates life science research and hardware for microgravity environments. Facilities include a payload operations center for conducting live uplinks with orbiting astronauts.
Colorado Center for Astrodynamics Research – Conducts astrodynamics, space weather, and remote sensing research. Is the largest center in the department, by number of faculty and students. Experimental Aerodynamics Laboratory (EAL) – Housed in a dedicated research building adjacent to the Department’s Aerospace Engineering Sciences Building and containing a low-speed wind tunnel, the EAL is devoted to improving production, understanding, and control of complex flow fields in aerodynamic applications. Research and Engineering Center for Unmanned Vehicles (RECUV) – Research center for development and execution of scientific and commercial experiments for mitigation of natural disasters and national defense utilizing aerial, ground-based, and submersible unmanned vehicles. Part of the multi-university TORUS project partnership. UAV Fabrication Lab - Dedicated to the design and construction of unmanned aerial vehicles and scientific instruments carried by them. Woods / Composites and Metal Machine Shops - Containing four-axis CNC milling machines and lathes, water jets, welding equipment, metal and plastic 3D printers, composite ovens, and indoor hazardous materials test cell. Notable people George Born - Pioneering aerospace researcher and professor who founded the Colorado Center for Astrodynamics Research. Adolf Busemann - Former professor, designer of the swept-wing aircraft Steve Chappell - aerospace engineer and NASA scientist. Member of the NASA Extreme Environment Mission Operations 14 (NEEMO 14) aquanaut crew Charbel Farhat - Former professor, current chair of Stanford University's Department of Aeronautics and Astronautics. Moriba Jah - astrodynamicist, professor at the University of Texas-Austin, and former spacecraft navigator for NASA's Jet Propulsion Laboratory. Steve Jolly – Director and chief engineer of commercial civil space at Lockheed Martin Space Systems. Mark Sirangelo - Current faculty member and former Executive Vice President of Sierra Nevada Space Systems Michael T. Voorhees - Entrepreneur, engineer, designer, geographer, and aeronaut Karl Dawson Wood – Aerospace education pioneer and department founder. Current Faculty Members of the National Academies Brian Argrow Penina Axelrad Daniel Baker Kristine Larson David Marshall Daniel Scheeres CU Boulder-Affiliated Astronauts Loren Acton, NASA astronaut Patrick Baudry, CNES astronaut Vance D. Brand, NASA astronaut Scott Carpenter, NASA astronaut in second orbital flight of Project Mercury Kalpana Chawla, NASA astronaut, died on Columbia Takao Doi, NASA astronaut Samuel T. Durrance, NASA astronaut Richard Hieb, NASA astronaut, current professor Marsha Ivins, NASA astronaut John M. Lounge, NASA astronaut George Nelson, NASA astronaut Ellison Onizuka, NASA astronaut, died on Challenger in January 1986 Stuart Roosa, NASA astronaut, flew on Apollo 14 Ronald M. Sega, NASA astronaut Steven Swanson, NASA astronaut Jack Swigert, NASA astronaut, flew on Apollo 13 Joe Tanner, NASA astronaut, retired professor James Voss, NASA astronaut, current professor References University of Colorado Boulder Aerospace engineering University departments in the United States
Ann and H.J. Smead Department of Aerospace Engineering Sciences
[ "Engineering" ]
1,226
[ "Aerospace engineering" ]
64,362,831
https://en.wikipedia.org/wiki/Stress%20wave%20communication
Stress wave communication is a technique of sending and receiving messages using the host structure itself as the transmission medium. Conventional modulation methods such as amplitude-shift keying (ASK), frequency-shift keying (FSK), phase-shift keying (PSK), quadrature amplitude modulation (QAM), pulse-position modulation (PPM) and orthogonal frequency-division multiplexing (OFDM) can be leveraged for stress wave communication. The main challenge in using stress waves as the communication carrier is the severe signal distortion caused by multipath channel dispersion. Compared with other communication techniques, it is a very reliable form of communication for special applications, such as within concrete structures, well drilling strings, pipeline structures and so on. References Quantized radio modulation modes Applied probability Fault tolerance
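To make the modulation side concrete, here is a minimal numpy sketch of amplitude-shift keying, one of the conventional schemes listed above as candidates for stress wave carriers. The carrier frequency, sample rate and symbol length are arbitrary placeholders, not values from any real system; a practical stress-wave link would also need equalization to undo the multipath dispersion just mentioned.

# Minimal ASK (on-off keying) sketch: each bit gates a burst of the carrier.
# All parameters are illustrative and not tuned for any real structure.
import numpy as np

fs = 48_000            # sample rate, Hz (placeholder)
fc = 4_000             # carrier frequency, Hz (placeholder)
samples_per_bit = 480

def ask_modulate(bits):
    """Map a bit sequence to an on-off-keyed carrier waveform."""
    t = np.arange(samples_per_bit) / fs
    burst = np.sin(2 * np.pi * fc * t)
    return np.concatenate([bit * burst for bit in bits])

def ask_demodulate(signal):
    """Recover bits by comparing per-symbol energy against a threshold."""
    frames = signal.reshape(-1, samples_per_bit)
    energy = (frames ** 2).sum(axis=1)
    return (energy > energy.max() / 2).astype(int)

bits = np.array([1, 0, 1, 1, 0, 0, 1])
waveform = ask_modulate(bits)
print("recovered:", ask_demodulate(waveform))  # -> [1 0 1 1 0 0 1]

This idealized loop works because the channel here is perfect; over a dispersive structure, the energy of one symbol would smear into its neighbours, which is exactly the distortion the article identifies as the key challenge.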
Stress wave communication
[ "Mathematics", "Engineering" ]
164
[ "Applied mathematics", "Reliability engineering", "Applied probability", "Fault tolerance" ]
64,364,472
https://en.wikipedia.org/wiki/Computational%20microscopy
Computational microscopy is a subfield of computational imaging, which combines algorithmic reconstruction with sensing to capture microscopic images of objects. The algorithms used in computational microscopy often combine information from several images, captured under various illuminations or measurement settings, to form an aggregated 2D or 3D image using iterative techniques or machine learning. Notable forms of computational microscopy include super-resolution fluorescence microscopy, quantitative phase imaging, and Fourier ptychography. Computational microscopy lies at the intersection of computer science and optics. References Imaging Microscopy Multidimensional signal processing
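As a toy illustration of the iterative reconstruction idea, the Python sketch below recovers an image from several masked observations, standing in for captures under different illumination patterns. The element-wise forward model and all dimensions are invented for the demo and are far simpler than real quantitative phase imaging or Fourier ptychography pipelines.

# Toy iterative reconstruction: recover an image from several observations,
# each taken under a different (here: binary-mask) illumination pattern.
# The element-wise forward model is a deliberate oversimplification.
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random((32, 32))                        # unknown specimen image
masks = [rng.random((32, 32)) > 0.5 for _ in range(8)]
observations = [m * x_true for m in masks]           # y_i = M_i * x

# Gradient descent on the least-squares objective sum_i ||M_i*x - y_i||^2.
x = np.zeros_like(x_true)
step = 0.1
for _ in range(300):
    grad = sum(m * (m * x - y) for m, y in zip(masks, observations))
    x -= step * grad

# Pixels never hit by any mask stay unrecoverable; the rest converge.
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")

The example shows the general pattern of the field: several measurements, an explicit forward model, and an iterative solver that aggregates them into one image.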
Computational microscopy
[ "Chemistry" ]
107
[ "Microscopy" ]
64,365,600
https://en.wikipedia.org/wiki/Sulfite%20sulfate
A sulfite sulfate is a chemical compound that contains both sulfite and sulfate anions, [SO3]2− and [SO4]2−. These compounds were discovered in the 1980s as calcium and rare earth element salts. Minerals in this class were later discovered. Minerals may have sulfite as an essential component, or have it substituted for another anion as in alloriite. The related ions [O3SOSO2]2− and [(O2SO)2SO2]2− may be produced in a reaction between sulfur dioxide and sulfate and exist in the solid form as tetramethylammonium salts. They have a significant partial pressure of sulfur dioxide. Related compounds are selenate selenites and tellurate tellurites, with a varying chalcogen. They can be classed as mixed-valence compounds. Production Europium and cerium rare earth sulfite sulfates are produced when heating the metal sulfite trihydrate in air. Ce2(SO3)3·3H2O + ½O2 → Ce2(SO3)2SO4 + 3H2O Ce2(SO3)3·3H2O + O2 → Ce2SO3(SO4)2 + 3H2O Other rare earth sulfite sulfates can be crystallized as hydrates from a water solution. These sulfite sulfates can be made by at least three methods. One is to dissolve a rare earth oxosulfate in water and then bubble in sulfur dioxide. In the second, a rare earth oxide is dissolved in a half equivalent of sulfuric acid. In the third, sulfur dioxide is bubbled through a suspension of rare earth oxide in water until it dissolves, and the solution is then left to stand for a few days with limited air exposure. To make calcium sulfite sulfate, a soluble calcium salt is added to a mixed solution of sodium sulfite and sodium sulfate. Control of pH is important when attempting to produce solid sulfite compounds. In basic conditions sulfite easily oxidises to sulfate, and in acidic conditions it easily turns into sulfur dioxide. Properties In the sulfite sulfates, sulfur has both a +4 and a +6 oxidation state. The crystal structure of sulfite sulfates has been difficult to study, as the crystal symmetry is low, crystals are usually microscopic as they are quite insoluble, and they are mixed with other related phases. They have therefore been studied via powder X-ray diffraction. Reactions When heated in the absence of oxygen, cerium sulfite sulfate hydrate parts with its water by 400 °C. Up to 800 °C it loses some sulfur dioxide. From 800 °C to 850 °C it loses sulfur dioxide and disulfur, resulting in cerium oxy disulfate and dioxy sulfate, which lose some further sulfur dioxide as they are heated to 1000 °C. Over 1000 °C the remaining oxysulfates decompose to sulfur dioxide, oxygen and cerium dioxide. This reaction is studied as a way to convert sulfur dioxide into sulfur and oxygen using only heat. Another thermochemical reaction for cerium sulfite sulfate hydrate involves using iodine to oxidise the sulfite to sulfate, producing hydrogen iodide, which can then be used to make hydrogen gas and iodine. When combined with the previous high temperature process, water can be split into oxygen and hydrogen using heat only. This is termed the GA sulfur-iodine water splitting cycle. Applications Calcium sulfite sulfate hydrate is formed in flue gas scrubbers that attempt to remove sulfur dioxide from coal-burning facilities. Calcium sulfite sulfate hydrate is also formed in the weathering of limestone, concrete and mortar by sulfur dioxide polluted air. These two cases would be classed as incidental anthropogenic production, as the compound is not deliberately produced or used. References Sulfites Sulfates Mixed anion compounds
Sulfite sulfate
[ "Physics", "Chemistry" ]
806
[ "Matter", "Mixed anion compounds", "Sulfates", "Salts", "Ions" ]
64,365,851
https://en.wikipedia.org/wiki/Raman%20spectroelectrochemistry
Raman spectroelectrochemistry (Raman-SEC) is a technique that studies the inelastic scattering, or Raman scattering, of monochromatic light related to chemical compounds involved in an electrode process. This technique provides information about vibrational energy transitions of molecules, using a monochromatic light source, usually a laser belonging to the UV, Vis or NIR region. Raman spectroelectrochemistry provides specific information about structural changes, composition and orientation of the molecules on the electrode surface involved in an electrochemical reaction, the registered Raman spectra being a true fingerprint of the compounds. When a monochromatic light beam samples the electrode/solution interface, most of the photons are scattered elastically, with the same energy as the incident light. However, a small fraction is scattered inelastically, the energy of the laser photons being shifted up or down. When the scattering is elastic, the phenomenon is denoted Rayleigh scattering; when it is inelastic, it is called Raman scattering. Raman spectroscopy combined with electrochemical techniques makes Raman spectroelectrochemistry a powerful technique for the identification, characterization and quantification of molecules. The main advantage of Raman spectroelectrochemistry is that it is not limited by the selected solvent, and both aqueous and organic solutions can be used. However, the main disadvantage is the intrinsically low Raman signal intensity. Different methods as well as new substrates have been developed to improve the sensitivity and selectivity of this multiresponse technique. For researchers, a few experimental considerations related to Raman spectroelectrochemistry include electrode preparation, cell design, laser parameters, the electrochemical sequence and data processing. Methods RRS effect (Resonance Raman Scattering) The Raman resonance effect produces an increase in Raman intensity of up to 10^6 times. In this phenomenon, the interaction of the monochromatic light with the sample produces the transition of the molecules from the fundamental state to an excited electronic state, instead of a virtual state as in normal Raman spectroscopy. This phenomenon of increased intensity can be observed in materials such as carbon nanotubes. SERS (Surface-Enhanced Raman Scattering) Surface-enhanced Raman scattering (SERS) is a technique capable of increasing the Raman signal intensity by up to 10^11 times. This phenomenon is based on the interaction of monochromatic light with materials that exhibit plasmonic properties. The most common metals used in SERS are nanostructured metals with a plasmonic band (gold, silver or copper). Nanostructured electrode surfaces can be generated by depositing metallic nanostructures of these materials. A disadvantage of this phenomenon is the occasional lack of reproducibility of the spectra, due to the difficulty of obtaining identical nanostructured surfaces in each experiment. SOERS (Surface-Oxidation-Enhanced Raman Scattering) Surface-oxidation-enhanced Raman scattering (SOERS) is a process similar to SERS, which allows the Raman signal to be enhanced when a silver electrode is oxidized in a particular electrolyte composition. This process is carried out at sufficiently positive potentials to ensure the oxidation of the electrode surface. There are significant differences with the SERS effect, but it is a phenomenon that also enhances the Raman signal.
SHINERS (Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy) In SHINERS, metallic nanoparticles with plasmonic properties are coated with ultra-thin homogeneous silica or alumina layers, forming isolated nanoparticles. The metallic nucleus (Au or Ag) is responsible for the enhancement of the Raman signals of the nearby molecules, while the coating layers eliminate the influence of the metallic nucleus on the Raman and electrochemical signals by preventing the molecules from being directly adsorbed onto it. Silica and alumina coatings can improve the chemical and thermal stability of the nanoparticles. This fact has great importance in the in-situ study of catalytic reactions. The high sensitivity of SHINERS surfaces makes these nanostructures a promising tool for the study of liquid/solid interfaces, especially in spectroelectrochemistry. TERS (Tip-Enhanced Raman Scattering) Tip-enhanced Raman scattering (TERS) is a technique that provides molecular information at the nanoscale. In these experiments, metal nanostructures are replaced by a sharp metal tip of nanometric size, concentrating the enhancement in a small region and thereby improving the spatial resolution of scanning techniques in Raman spectroscopy. Configuration Different configurations can be used to perform Raman-SEC experiments. Raman scattering provides spectra with very weak Raman bands; therefore, a very well aligned optical configuration is required. The laser has to be focused on the electrode surface, and efficient collection of the scattered photons is mandatory. Many of the instruments used for Raman-SEC are based on the combination of a spectrometer, a potentiostat and a confocal microscope, since this makes it possible to focus and collect the scattered photons in a highly efficient way. Low-resolution Raman spectrometers can also be used, providing suitable results. Using this setup, the sampling area is larger and average information about the electrode surface is obtained. Typical configurations in Raman-SEC: Normal configuration. The laser beam samples the electrode/solution interface normal to the electrode surface. The scattered radiation is collected, and the monochromator allows passing only the light beam with wavelengths different from that of the laser used. Inverted microscope. In this configuration the electrode/solution interface is sampled from behind the electrode, using optically transparent electrodes (OTE). Angular configuration. This configuration is usually selected when electrochemical techniques are combined with TERS. Instrumentation The experimental setup to perform Raman spectroelectrochemistry consists of a light source, a spectrometer, a potentiostat, a spectroelectrochemical cell, a three-electrode system, devices for conducting the radiation beam, and data collection and analysis devices. Nowadays, there are commercial instruments that integrate all these elements in a single instrument, significantly simplifying the performance of spectroelectrochemical experiments. Light source. It provides the monochromatic electromagnetic radiation that interacts with the sample during the electrochemical process. In Raman-SEC, the light source is usually a laser corresponding to the VIS or NIR regions, which commonly emits at 532, 633, 785 or 1064 nm, although there is the possibility of using many other lasers, including UV lasers. Spectrometer. It records the scattered radiation and provides the Raman spectra of the molecules.
In Raman-SEC, spectrometers are usually combined with confocal microscopes (micro-Raman) to reject out-of-focus information, obtaining excellent spectral resolution. However, it is possible to work with low-resolution Raman spectrometers and obtain very good results. Potentiostat/Galvanostat. It is the electronic device that controls the potential of the working electrode with respect to the reference electrode, or the current that flows through the auxiliary electrode. Three-electrode system. It contains a working electrode, a reference electrode and an auxiliary electrode. This system can be simplified by using screen-printed electrodes that include all three electrodes in a single holder. Spectroelectrochemical cell (SEC cell). It is the device that includes the three-electrode system and allows the simultaneous recording of the Raman spectra of the species and the electrochemical signal. It is the link between the optical and electrochemical techniques. Devices for conducting the radiation beam: lenses, mirrors and/or optical fibres. The latter conduct the electromagnetic radiation over long distances with hardly any losses. In addition, they simplify the optical configuration, since they allow working with a small amount of solution; in this way, it is easier to conduct and collect the light near the electrode. Data collection and analysis devices. These consist of a computer that simultaneously collects the signals provided by the spectrometer and the electrochemical instrument. Using appropriate software, the generated signals can be acquired, transformed, analyzed and interpreted. Applications In recent years Raman-SEC has become an important tool in the study of electrochemical processes and in the characterization of many molecules, providing specific in situ information about them. Some applications are: Materials: Raman-SEC is widely used in the study and characterization of new materials, such as graphene, carbon nanotubes and conductive polymers, among others. It is also applied in the study of dyes, organic molecules capable of forming monolayers on the electrode, and proteins. Qualitative and quantitative analysis: Raman-SEC can be applied to highly complex samples, for example the detection of melamine in milk, the identification of bacteria, and the detection of DNA biomarkers and/or uric acid in urine. In addition, very low concentrations can be detected. Energy: Raman-SEC has been used in the study of solar cells, batteries and catalysts for fuel cells. Transfer processes at liquid/liquid interfaces: Raman-SEC is used to monitor ion or electron transfer processes at polarizable interfaces between immiscible electrolyte solutions. References Raman spectroscopy Electrochemistry
Raman spectroelectrochemistry
[ "Chemistry" ]
1,926
[ "Electrochemistry" ]
64,366,263
https://en.wikipedia.org/wiki/Developable%20roller
In geometry, a developable roller is a convex solid whose surface consists of a single continuous, developable face. While rolling on a plane, most developable rollers develop their entire surface so that every point on the surface touches the rolling plane. All developable rollers have ruled surfaces. Four families of developable rollers have been described to date: the prime polysphericons, the convex hulls of the two-disc rollers (TDR convex hulls), the polycons and the Platonicons. Construction Each developable roller family is based on a different construction principle. The prime polysphericons are a subfamily of the polysphericon family. They are based on bodies made by rotating regular polygons around one of their longest diagonals. These bodies are cut in two at their symmetry plane, and the two halves are reunited after being rotated at an offset angle relative to each other. All prime polysphericons have two edges made of one or more circular arcs and four vertices. All of them, except the sphericon, have surfaces that consist of one kind of conic surface and one, or more, conical or cylindrical frustum surfaces. Two-disc rollers are made of two congruent symmetrical circular or elliptical sectors. The sectors are joined to each other such that the planes in which they lie are perpendicular to each other, and their axes of symmetry coincide. The convex hulls of these structures constitute the members of the TDR convex hull family. All members of this family have two edges (the two circular or elliptical arcs). They may have either four vertices, as in the sphericon (which is a member of this family as well), or none, as in the oloid. Like the prime polysphericons, the polycons are based on regular polygons but consist of identical pieces of only one type of cone with no frustum parts. The cone is created by rotating two adjacent edges of a regular polygon (and, in most cases, their extensions as well) around the polygon's axis of symmetry that passes through their common vertex. A polycon based on an n-gon (a polygon with n edges) has n edges and n + 2 vertices. The sphericon, which is a member of this family as well, has circular edges. The hexacon's edges are parabolic. All other polycons' edges are hyperbolic. Like the polycons, the Platonicons are made of only one type of conic surface. Their unique feature is that each one of them circumscribes one of the five Platonic solids. Unlike the other families, this family is not infinite. Fourteen Platonicons have been discovered to date. Rolling motion Unlike axially symmetrical bodies that, if unrestricted, can perform a linear rolling motion (like the sphere or the cylinder) or a circular one (like the cone), developable rollers meander while rolling. Their motion is linear only on average. In the case of the polycons and Platonicons, as well as some of the prime polysphericons, the path of their center of mass consists of circular arcs. In the case of the prime polysphericons whose surfaces contain cylindrical parts, the path is a combination of circular arcs and straight lines. A general expression for the shape of the path of the TDR convex hulls' center of mass has yet to be derived. In order to maintain a smooth rolling motion, the center of mass of a rolling body must maintain a constant height. All prime polysphericons, polycons and Platonicons, and some of the TDR convex hulls, share this property.
In order for a TDR convex hull to maintain constant height, a specific relation must hold between a and b, the semi-minor and semi-major axes of the elliptic arcs, respectively, and c, the distance between their centers. For example, in the case where the skeletal structure of the TDR convex hull consists of two circular sectors with radius r, for the center of mass to be kept at constant height, the distance between the sectors' centers should be equal to r. References External links Sphericon series A list of the first members of the polysphericon family and a discussion about their various kinds. Geometric shapes Euclidean solid geometry
Developable roller
[ "Physics", "Mathematics" ]
916
[ "Geometric shapes", "Euclidean solid geometry", "Mathematical objects", "Space", "Geometric objects", "Spacetime" ]
64,366,502
https://en.wikipedia.org/wiki/Center%20for%20Quantum%20Spintronics
The Center for Quantum Spintronics (QuSpin) is a research center at the Department of Physics at the Norwegian University of Science and Technology (NTNU). In 2017, the Research Council of Norway designated QuSpin as a Center of Excellence (SFF) for the period 2017–2027. Spintronics, or spin electronics, is a field in condensed matter physics that studies the physical effects associated with quantum mechanical spin. Electrons do not only have charge. They also have a spin, an apparent inner rotation, as if the electrons spin around their own axis. Spintronics has already contributed to a revolution in data storage, and was, among other things, the basis for Apple's music player, the iPod. The researchers at QuSpin aim to describe and develop new ways of controlling electrical signals. This may contribute to a major development in energy-efficient information and communication technology. The results of their research have already attracted interest internationally, and have been discussed and published in several scientific journals, e.g. Nature and Science. In electronics, the electrical charge of electrons is used to store and process information. Electric currents generate a lot of heat, which is emitted to the surroundings. This is an increasing challenge, and limits how small and efficient electronic devices can be. QuSpin works to find new ways of controlling and utilizing electrons' intrinsic spin. The goal is to control the spin, and other quantum mechanical variables, using new combinations of nanoscale materials. Their research includes studies of the quantum mechanical transport properties of superconducting, magnetic and topological materials. QuSpin has research activity in both theoretical and experimental physics. By the end of 2018, the center had a team of more than 60 members, of whom 11 were professors and associate professors, three researchers, seven postdocs and 26 Ph.D. students. The center management consists (as of 2023) of four principal investigators: Professor Arne Brataas, Professor Jacob Linder, Professor Asle Sudbø (Center Leader) and Associate Professor Hendrik Bentmann. References External links Research in Norway Norwegian University of Science and Technology Spintronics
Center for Quantum Spintronics
[ "Physics", "Materials_science" ]
436
[ "Spintronics", "Condensed matter physics" ]
64,367,376
https://en.wikipedia.org/wiki/Araceli%20S%C3%A1nchez%20Urquijo
Araceli Sánchez Urquijo (17 February 1920 – 2010) was a Niños de Rusia child evacuee during the Spanish Civil War and the first woman to work as a civil engineer in Spain. Early life Araceli Sánchez Urquijo was born in Sestao in the Basque Country on 17 February 1920. Her parents were Jesusa Urquijo Aldasoro (1895–1984) and Benito Sánchez Garcia (1899–1959). She was the second of five children, with an elder sister Isabel, and younger siblings Oscar, Begoña and Esteban. Evacuation to the Soviet Union Between 1937 and 1938, thousands of children living in Republican-held areas during the Spanish Civil War were evacuated abroad to save them from the dangers and deprivations of war as the Francoist troops encroached on their home areas. 2,895 children, mostly from the Basque Country, Asturias and Cantabria, were evacuated to the Soviet Union. They became known as the Niños de Rusia. Araceli Sánchez Urquijo was amongst the children who were chosen to go, leaving Spain from the port of Santurtzi in 1937. She sailed on the Habana from the port of Santurtzi on 13 June 1937, initially to Bordeaux. She travelled as a monitor because, at 17 years old, she was slightly too old to be evacuated as a minor, defined as between 1 and 16 years of age. The Habana was escorted by a Royal Navy ship, which meant that, despite a tense encounter with a Francoist navy ship, they were finally allowed to continue their voyage, with 4,500 children on board. At Bordeaux, 1,495 children transferred onto the merchant ship Sontay, destined for the port of Leningrad, which they reached after a seven-day voyage. The Sontay had an Asian crew, so communications were difficult because of the language barrier. The children travelled in the holds, in unfit conditions, and arrived in Leningrad dirty, with lice, colds or pneumonia. Araceli Sánchez Urquijo later remembered "rats as big as cats" in the holds. The children were immediately seen by medical professionals. The evacuation was the second to go to the USSR. Laura Irasuegi Otal also travelled on the ship and would go on to become a Soviet-trained Spanish civil engineer. The Niños de Rusia were largely welcomed and well cared for in Russia, living in Las Casas de Niños, large children's houses. They were mostly educated in Spanish and taught to appreciate Spanish culture, but using the Soviet method. Some of the children were encouraged to learn to speak Russian, but not all learned it. The presence of the children was seen by the Soviet Union as a way of publicly supporting the future Spanish socialist republic which they hoped would emerge from the Civil War, by caring for the next generation of its political elite. Araceli Sánchez Urquijo was settled in a Casa de Niños in Leningrad and was impressed by the quantity of toys available, but even more so by the length of the days due to the city's northern latitude. She later recalled her early years in Russia as the happiest of her life. The onset of the Second World War led to the children's houses being closed and the children being moved to more barrack-like accommodation and integrated into Soviet schools. The invasion by the Nazi army put the Spanish children in danger, and they suffered extreme hardship and deprivation alongside the wider Soviet population. There had been calls for the repatriation of the children to Francoist Spain since the late 1930s.
They were not allowed to return to Spain as diplomatic relations between the two countries had foundered with the war, even though the Soviet leader Joseph Stalin now found the children's presence embarrassing. Education After the Second World War, most of the surviving Niños de Rusia settled in or near Moscow. Many were no longer children and were trained for careers. Araceli Sánchez Urquijo was one of 23 Spaniards (including five women) among the first 45 hydropower engineers trained at the University of Moscow. In 1949 she graduated as a civil engineer specialising in hydraulics from the Moscow Power Engineering Institute. She later said that she owed everything to Russia, even though she had had a hard time there, adding: "What I did would have been impossible for a woman in Spain." Career Araceli Sánchez Urquijo began her career as an engineer working in Uzbekistan. For five years she worked in Central Asia building hydraulic power plants and power lines, and was promoted to deputy director of a technological department. With the death of Stalin in 1953, diplomatic relations between Spain and the Soviet Union thawed a little and negotiations were reopened around the repatriation of the Niños de Rusia exiled during the Spanish Civil War. An agreement was reached which allowed the exiles to return, and at the end of 1956 Araceli Sánchez Urquijo travelled back to Spain in the first wave of now adult Niños de Rusia, leaving on a ship from Odessa and arriving in Valencia. The returnees were welcomed by a large crowd, the press and the authorities. Her reunion with her family after nearly twenty years apart was "bittersweet and deeply emotional". Both her father and her sister, who had been a nurse in the war, had been imprisoned for their beliefs and work on behalf of the Basque people. The arrival back in Spain was difficult for most of the Niños de Rusia. They were suspected of being left-wing Soviet sympathisers or spies by the right-wing Francoist regime. Their educational and professional qualifications were not in Spanish and were not considered to be of similar quality to Spanish qualifications, so they struggled to get jobs they were qualified for. Araceli Sánchez Urquijo was subjected to interrogations by the Political-Social Brigade and by American security and intelligence agents. In 1957 Araceli Sánchez Urquijo applied to work at the Isodel Sprecher engineering company in the Calle de Áncora in Madrid, which specialised in the manufacture of electrical equipment and the installation of production plants. As she attempted to enter the front door for her interview, the security guard told her: "Cleaning women can only enter the factory when the workers have left". He is said to have been very surprised when he discovered that the woman in front of him was an engineer and one of the five candidates for the position on offer. The applicants underwent three tests, and Sánchez scored four points above the second-placed candidate. Isodel's founder and director, Clemente Cebrián Martínez (1908–2000), did not hesitate and appointed Sánchez. She later recalled: "The engineers who competed with me did not want to accept the result. They rudely insulted me, denounced me to the General Directorate of Security for being a communist and requested that I be expelled from Spain." 
Despite their very different political beliefs (Cebrián was a capitalist, respected by Franco's right-wing regime), he supported and defended Sánchez in her work, as the cutting-edge engineering she had been involved with in the Soviet Union was invaluable to his business. She was appointed head of the Isolux project department, and was eventually in charge of more than 150 professionals. When engineers from other countries visited the company, Cebrián enjoyed introducing her with "This is the engineer Sánchez: she is a woman and a communist", to which she would reply with a smile that she was not a communist but a Marxist. Sánchez's qualifications were in Russian and she had not learned technical Spanish, so she developed her own Russian-Spanish engineering dictionary, whilst being careful not to reveal any gaps in her understanding to her colleagues. She was prohibited from leaving Spain and was not allowed a passport until 1975, after the death of Franco, which presented problems for working in an international marketplace. Her work at Isodel involved adapting projects for the construction of hydraulic, electric, thermal and nuclear power plants. For several months she worked with Ernesto Botella (father of Ana Botella, the first female mayor of Madrid, and father-in-law of former Spanish prime minister José María Aznar), who was the head of workshops and wanted her in his department because he knew about her technological expertise. Her first two years were a continuous tussle with other engineers; she described the plans they drew up as "a disaster and I returned them with the corresponding notes and corrections". In the mid-1960s, the multinational Kellogg Corporation launched a competition for the Repsol refinery electrical project in Puertollano. Sánchez sent a project proposal with hundreds of plans to their London office. A few days later a telegram was received at Isodel requesting the presence of the engineer Sánchez in London. It was the first time that the company had won an international competition, and Cebrián was delighted. However, Franco threatened to shut the Isodel company down if Sánchez left Spain. Fortunately the Kellogg engineers were sufficiently impressed to travel to Spain instead, and Isodel was awarded the project. Araceli Sánchez Urquijo continued to work for Isodel until her retirement in 1987. By the time she retired she had employed 14 draughtswomen in her department. Personal life She was a founder member of the Club de Amigos de la UNESCO de Madrid, and was the proud possessor of membership card number one. In retirement she became president of the Izquierda Unida Social Organization for the Elderly and maintained a busy intellectual life. She maintained her socialist views throughout her life and in an interview in 1999 summed her beliefs up as "Ser los más honrados, los más solidarios y los que más ayudamos a los trabajadores" (To be the most honest, the most supportive and those who do the most to help the workers). She appeared in a documentary about Los niños de Rusia in 2001. Araceli Sánchez Urquijo died in Cabuérniga in 2010. Commemoration On 27 June 2023, "Bide-ingeniaritzako euskal emakume aitzindariak", an exhibition celebrating Basque women pioneers of civil engineering, was opened at the headquarters of the College of Civil Engineering in Bilbao. 
It told the story of two Basque women pioneers, Araceli Sánchez Urquijo and Laura Irasuegi Otal, both "gerrako umeak" (children of the war), who trained in Moscow as civil engineers, a discipline traditionally dominated by men. References 1920 births 2010 deaths 20th-century Spanish engineers Basque women Spanish women engineers People from Sestao Exiles of the Spanish Civil War in the Soviet Union Spanish expatriates in the Soviet Union Civil engineers Women in engineering Soviet people Soviet women engineers Spanish emigrants Evacuations during the Spanish Civil War Refugees in the Soviet Union Child refugees
Araceli Sánchez Urquijo
[ "Engineering" ]
2,193
[ "Civil engineering", "Civil engineers" ]
64,367,549
https://en.wikipedia.org/wiki/GW190814
GW190814 was a gravitational wave (GW) signal observed by the LIGO and Virgo detectors on 14 August 2019 at 21:10:39 UTC, with a signal-to-noise ratio of 25 in the three-detector network. The signal was associated with the astronomical superevent S190814bv, located 790 million light years away within a sky area of 18.5 square degrees towards Cetus or Sculptor. No optical counterpart was discovered despite an extensive search of the probability region. Discovery In June 2020, astronomers reported details of a compact binary merger, detected as the gravitational wave GW190814, between a black hole and a first-ever "mystery object" lying in the "mass gap" of cosmic collisions: either an extremely heavy neutron star (heavier than was theorised possible) or an unusually light black hole. The mass of the lighter component is estimated to be 2.6 times the mass of the Sun, placing it in the aforementioned mass gap between neutron stars and black holes. Despite an intensive search, no optical counterpart to the gravitational wave was observed. The lack of emitted light could be consistent with either a situation in which a black hole entirely consumed a neutron star or the merger of two black holes. See also Gravitational-wave astronomy List of gravitational wave observations Multi-messenger astronomy Notes References External links (24 June 2020; Science Fellow) (24 June 2020; LIGO Scientific Collaboration) (23 June 2020; Max Planck Institute for Gravitational Physics) (23 June 2020; Gravitational-wave Open Science Center (GWOSC)) Black holes Gravitational waves Neutron stars Theory of relativity August 2019 2019 in science 2019 in outer space
GW190814
[ "Physics", "Astronomy" ]
350
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Waves", "Density", "Theory of relativity", "Stellar phenomena", "Gravitational waves", "Astronomical objects" ]
64,367,582
https://en.wikipedia.org/wiki/Anne%20Chamney
Anne Rosemary Chamney CEng MIMechE (16 April 1931 – 9 December 2008) was a British mechanical engineer specialising in medical equipment. She is best known for her invention of a novel oxygen tent which was much cheaper than existing tents, and much lighter and therefore easier to transport. Early life Anne Rosemary Chamney was born in Amersham on 16 April 1931 to Eleanor Margery Hampshire and Ronald Martin Chamney. She had one older brother, John, born in 1928. According to the 1911 census, her father Ronald was an engineer with the National Telephone Company and held a BSc in engineering. As a young child, Chamney was ambidextrous. She attended an all-girls school from the age of nine until she was 16. She earned an MSc in biomechanics at the University of Surrey and a PhD in physiology focussing on the effect of carbon monoxide during pregnancy in rats, work which influenced later research into the effects of smoking during human pregnancy. Career Chamney studied at the Royal Aeronautical Society and became an apprentice at the De Havilland Aircraft Company in Hatfield from 1953 to 1958. She moved to become a Technical Assistant in the Medical Development Group at the British Oxygen Company between 1959 and 1961. Chamney patented an apparatus for humidifying gases in 1960 whilst working there. Later she became a senior technician at University College Hospital Medical School in London, where she evaluated hospital equipment. Whilst working there, in 1966 she invented a novel oxygen tent which was much cheaper than existing tents; it was also lighter and therefore easier to transport. The oxygen tent was described in The Lancet in 1967 and received international publicity, with coverage in the United States stating that her invention cost only $50, when other oxygen tents cost up to $750. She credited being able to work closely with medical staff and developing clinical knowledge as being vital to the development of relevant and useful medical equipment. By 1985, Chamney was Chief Technician in the Department of Anaesthesia at the Royal Free Hospital in Hampstead. Chamney was awarded the first James Clayton Prize in Medical Engineering from the Institution of Mechanical Engineers, and received an additional award in acknowledgement of her research and development work. Chamney was also a Fellow of the Irish Genealogical Research Society and a member of the Women's Engineering Society. Anne Chamney died on 9 December 2008 and was cremated on 16 December at Hendon Cemetery and Crematorium in Barnet, London. Selected publications Wayne, D.J., and Chamney, A.R. (1969) Oxygen tent performance. Physics in Medicine & Biology, 14(9) Wayne, D.J., and Chamney, A.R. (1969) Oxygen tents: A comparison of two techniques. Anaesthesia, 24(4) Chamney, A.R. (1969) Humidification requirements and techniques: Including a review of the performance of equipment in current use, 24(4) References 1931 births 2008 deaths Alumni of the University of Surrey Mechanical engineers British women engineers Medical devices Women's Engineering Society British inventors Women inventors People from Amersham
Anne Chamney
[ "Biology" ]
663
[ "Medical devices", "Medical technology" ]
64,368,349
https://en.wikipedia.org/wiki/Alexios%20Polychronakos
Alexios Polychronakos (born 1959 in Greece) is a theoretical physicist. He studied electrical engineering at the National Technical University of Athens (diploma in 1982) and did graduate work in theoretical physics at the California Institute of Technology (Ph.D. 1987) under the supervision of John Preskill. Polychronakos is a professor of physics at the City College of New York. He is considered an authority on quantum field theory, quantum statistics, anyons, integrable systems, and quantum fluids, having authored over 110 refereed papers. He is a Fellow of the American Physical Society (2012), cited "For important contributions to the field of statistical mechanics and integrable systems, including the Polychronakos model and the exchange operator formalism, fractional statistics, matrix model description of quantum Hall systems as well as other areas such as noncommutative geometry". References External links Polychronakos' profile at CUNY Inspire profile Google scholar profile 20th-century Greek physicists 21st-century American physicists California Institute of Technology alumni Living people Particle physicists Fellows of the American Physical Society Theoretical physicists Mathematical physicists 1959 births
Alexios Polychronakos
[ "Physics" ]
238
[ "Theoretical physics", "Particle physicists", "Particle physics", "Theoretical physicists" ]
64,368,363
https://en.wikipedia.org/wiki/Matthew%20Fuchter
Matthew John Fuchter is a British chemist who is a Professor of Chemistry at the University of Oxford. His research focuses on the development and application of novel functional molecular systems to a broad range of areas, from materials to medicine. He has been awarded both the Harrison-Meldola Memorial Prize (2014) and the Corday–Morgan Prize (2021) of the Royal Society of Chemistry. In 2020 he was a finalist for the Blavatnik Awards for Young Scientists. Early life and education Fuchter earned a master's degree (MSci) in chemistry at the University of Bristol, where he was awarded the Richard Dixon prize. It was during his undergraduate degree that he first became interested in organic synthesis. As a graduate student he moved to Imperial College London, where he worked with Anthony Barrett on the synthesis and applications of porphyrazines, including as therapeutic agents. During his doctoral studies Barrett and Fuchter collaborated with Brian M. Hoffman at Northwestern University. Research and career After completing his PhD, Fuchter moved to Australia for postdoctoral research at CSIRO and the University of Melbourne, where he worked with Andrew Bruce Holmes. In 2007 Fuchter returned to the United Kingdom, where he began his independent academic career at the School of Pharmacy, University of London (now UCL School of Pharmacy). Less than one year later he was appointed a Lecturer at Imperial College London, where he was promoted to Reader (Associate Professor) in 2015 and Professor in 2019. Fuchter develops photoswitchable molecules, chiral materials and new pharmaceuticals. Fuchter is interested in how considerations of chirality can be applied to the development of novel approaches in chiral optoelectronic materials and devices. In particular, he focusses on the introduction of chiral-optical (so-called chiroptical) properties into optoelectronic materials. Amongst these materials, Fuchter has extensively evaluated the use of chiral small molecule additives (helicenes) to induce chiroptical properties in light emitting polymers for the realisation of chiral (circularly polarised, CP) OLEDs. He has also investigated the application of such materials in circularly polarised photodetectors, which are devices capable of detecting circularly polarised light. As well as using chiral functional materials for light emission and detection, Fuchter has investigated the charge transport properties of enantiopure and racemic chiral functional materials. Fuchter has also developed novel molecular photoswitches – molecules that can be cleanly and reversibly interconverted between two states using light – with a focus on heteroaromatic versions of azobenzene. The arylazopyrazole switches developed by Fuchter outperform the ubiquitous azobenzene switches, demonstrating complete photoswitching in both directions and thermal half-lives of the Z isomer of up to 46 years. Fuchter continues to apply these switches to a range of photoaddressable applications, from photopharmacology to energy storage. Alongside his work on functional material discovery, Fuchter works in medicinal chemistry and develops small molecule ligands that can either inhibit or stimulate the activity of disease-relevant proteins. While he has worked on many drug targets, he has specialised in proteins involved in the transcriptional and epigenetic processes of disease. 
A particular interest has been the development of inhibitors for the histone-lysine methyltransferase enzymes in the Plasmodium parasite that causes human malaria. In 2018 one of the cancer drugs developed by Fuchter, together with Anthony Barrett, Simak Ali and Charles Coombes, entered a phase 1 clinical trial, and as of 2020 it is in phase 2. The drug, which was designed using computational chemistry, inhibits cyclin-dependent kinase 7 (CDK7), a transcriptional regulatory protein that also regulates the cell cycle. Certain cancers rely on CDK7, so inhibition of this enzyme has the potential to have a significant impact on cancer pathogenesis. In 2024 Fuchter joined the University of Oxford as a Professor of Chemistry and the Sydney Bailey Fellow in Chemistry at St Peter's College, Oxford. Academic service Fuchter serves on the editorial board of MedChemComm. He is an elected council member of the Royal Society of Chemistry's organic division. Fuchter is co-Director of the Imperial College London Centre for Drug Discovery Science. Awards and honours 2014 Royal Society of Chemistry Harrison-Meldola Memorial Prize 2014 Elected a Fellow of the Royal Society of Chemistry (FRSC) 2015 Thieme Medical Publishers Chemistry Journal Awardee 2017 Imperial College London President's Award for Excellence in Research 2017 Imperial College London President's Medal for Excellence in Innovation and Entrepreneurship 2018 Tetrahedron Young Investigator Award 2018 Engineering and Physical Sciences Research Council (EPSRC) Established Career Fellowship 2020 Blavatnik Awards for Young Scientists 2021 Royal Society of Chemistry Corday–Morgan Prize 2022 Royal Society of Chemistry Stephanie L. Kwolek Award 2023 Royal Society of Chemistry Biological and Medicinal Chemistry Sector Malcolm Campbell Memorial Prize 2023 Elected Fellow of the European Academy of Sciences and Arts Selected publications References Living people Year of birth missing (living people) British chemists Alumni of the University of Bristol Alumni of Imperial College London Academics of Imperial College London Medicinal chemistry Fellows of the Royal Society of Chemistry
Matthew Fuchter
[ "Chemistry", "Biology" ]
1,102
[ "Biochemistry", "nan", "Medicinal chemistry" ]
64,370,232
https://en.wikipedia.org/wiki/Agglomerin
Agglomerins are bacterial natural products, identified as metabolites of Pantoea agglomerans, which was isolated in 1989 from river water in Kobe, Japan. They belong to the class of tetronate antibiotics, which includes tetronomycin, tetronasin, and abyssomicin C. The members of the agglomerins differ only in the composition of the acyl chain attached to the tetronate ring. They possess antibiotic activity against anaerobic bacteria and weak activity against aerobic bacteria in vitro. The structures were solved in 1990. Agglomerin A is the major component (38%), followed by agglomerin B (30%), agglomerin C (24%), and agglomerin D (8%). Biosynthesis The biosynthetic gene cluster for agglomerins is 12 kb long and encodes seven open reading frames. The glyceryl-S-ACP is derived from D-1,3-bisphosphoglycerate by Agg2 (glyceryl-S-ACP synthase) and Agg3 (an acyl carrier protein). The acyl chain is taken from primary metabolism as a 3-oxoacyl-CoA thioester. The glyceryl-S-ACP and the 3-oxoacyl-CoA thioester are joined by Agg1, a FabH-like ketosynthase, forming new C-C and C-O bonds. The primary alcohol of the resulting intermediate is then acylated by Agg4, using acetyl-CoA, before the abstraction of a proton and concomitant loss of acetate, catalyzed by Agg5, generates the exocyclic double bond. References Antibiotics Lactones Natural products
Agglomerin
[ "Chemistry", "Biology" ]
390
[ "Natural products", "Biotechnology products", "Antibiotics", "Medicinal chemistry", "Biocides" ]
62,056,795
https://en.wikipedia.org/wiki/Ullim
Ullim is the brand name of a family of Android-based tablet computers sold in North Korea. The tablets are marketed and sold by the Pyongyang Informatics Company. The Ullim tablet is one of four tablet devices marketed by separate companies in North Korea. It sells for 120 US dollars for a 7-inch model, and 210 US dollars for a 10.1-inch model. Both units have a 1.5-GHz dual-core CPU (central processing unit). History The first tablets of the Ullim line were introduced in 2014. The tablet is based on a tablet called the Z100, which is produced by a Chinese company called Hoozo. The tablets, however, are modified for greater control over access to the government-approved intranet Kwangmyong. The tablet entered production in 2015. In 2016, it was reported that demand for the tablet exceeded the available supply. In response, the government restricted sales of the device and mandated that nobody could buy more than one tablet. This resulted in used tablets being sold for roughly the original price. Applications and software The tablet runs Android version 4.4.2 "KitKat", with modifications that give the government significantly more control over who can access the intranet. Users require a dongle to access the intranet by Wi-Fi, LAN or dial-up. The tablet has basic apps such as a gallery, though none of the Google apps such as Gmail are available. Applications for education, cooking and games are pre-installed. The tablet PC can access the Kwangmyong intranet through Wi-Fi. It is also loaded with a local set of apps to match Microsoft Office – North Korea's software suite called "Changdok". The tablet has a level of surveillance and control which was not previously seen in North Korean electronics. The "Red Flag" program, which runs as a background process, captures a screenshot every time the user opens an application, records the browser history and ensures that the core operating system is not modified. The installation of applications is limited to an approved whitelist. The devices also come with "Trace Viewer", software that stores usage records and prevents users from deleting them. The tablet is able to access media only if the media carries a digital certificate, either NATISIGN (authorized by the North Korean government) or SELFSIGN (created on the tablet itself). The tablet features watermarking of created files: each created document contains "fingerprints" of the device and its owner. See also Samjiyon tablet computer References Tablet computers introduced in 2014 Information technology in North Korea
Ullim
[ "Technology" ]
546
[ "Mobile computer stubs", "Mobile technology stubs" ]
62,056,938
https://en.wikipedia.org/wiki/NGC%204380
NGC 4380 is an unbarred spiral galaxy located in the constellation of Virgo. It lies about 52.2 million light-years (16 megaparsecs) away and is a member of the Virgo Cluster, a large galaxy cluster. It was discovered on March 10, 1826, by the astronomer John Herschel. Gallery References External links 4380 Unbarred spiral galaxies Virgo (constellation) Virgo Cluster 040507
NGC 4380
[ "Astronomy" ]
91
[ "Virgo (constellation)", "Constellations" ]
62,058,082
https://en.wikipedia.org/wiki/Cycloartane
Cycloartane is a triterpene, also known as 4,4,14-trimethyl-9,19-cyclo-5α,9β-cholestane. Its derivative cycloartenol is the starting point for the synthesis of almost all plant steroids. See also Lanostane Cycloastragenol Cycloartenyl ferulate References Triterpenes
Cycloartane
[ "Chemistry" ]
95
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
62,058,825
https://en.wikipedia.org/wiki/Sergei%20Starchenko
Sergei Stepanovich Starchenko (Сергей Степанович Старченко) is a mathematical logician who was born and grew up in the Soviet Union and now works in the USA. Starchenko graduated from Novosibirsk State University in 1983 with an M.S. and received his Ph.D. (Russian Candidate of Sciences degree) there in 1987. His doctoral dissertation, Number of models of Horn theories, was written under the supervision of Evgenii Andreevich Palyutin. Starchenko was an assistant professor of mathematics at Vanderbilt University and is now a full professor at the University of Notre Dame. In 2013 he received the Karp Prize with Ya'acov Peterzil for collaborative work with two other mathematicians. With Peterzil he applied the theory of o-minimal structures to problems in algebra and real and complex analysis. In 2010 Starchenko was, along with Peterzil, an Invited Speaker at the International Congress of Mathematicians in Hyderabad, giving the talk Tame complex analysis and o-minimality. Starchenko became a Fellow of the American Mathematical Society in the class of 2017. Selected publications with Y. Peterzil: Geometry, Calculus and Zil'ber Conjecture, Bulletin of Symbolic Logic, vol. 2, 1996, pp. 72–83. with Y. Peterzil: A trichotomy theorem for o-minimal structures, Proc. London Math. Soc., vol. 77, 1998, pp. 481–523 with Y. Peterzil and A. Pillay: Definably simple groups in o-minimal structures, Transactions American Mathematical Society, vol. 352, 2000, pp. 4397–4419 with Y. Peterzil: Uniform definability of the Weierstrass ℘-functions and generalized tori of dimension one, Selecta Math. (N.S.), vol. 10, 2004, pp. 525–550. with Y. Peterzil: Definability of restricted theta functions and families of abelian varieties, Duke Math. J., vol. 162, 2013, pp. 731–765. with Peterzil: Mild manifolds and a non-standard Riemann existence theorem, Selecta Math. (N.S.), vol. 14, 2009, pp. 275–298. On the tomography theorem by P. Schapira, in: Model theory with applications to algebra and analysis, vol. 1, London Math. Soc. Lecture Note Ser., 349, Cambridge Univ. Press, Cambridge, 2008, pp. 283–292 with Rahim Moosa: K-analytic versus ccm-analytic sets in nonstandard compact complex manifolds, Fund. Math., vol. 198, 2008, pp. 139–148. References External links Sergei Starchenko, University of Notre Dame, selected publications with online links Living people Novosibirsk State University alumni University of Notre Dame faculty Fellows of the American Mathematical Society Soviet mathematicians 20th-century Russian mathematicians 21st-century Russian mathematicians Model theorists Year of birth missing (living people)
Sergei Starchenko
[ "Mathematics" ]
642
[ "Model theorists", "Model theory" ]
62,060,268
https://en.wikipedia.org/wiki/Dutch%20Furniture%20Awards
The Dutch Furniture Awards is a former annual furniture design competition in the Netherlands, organized from 1985 to 1999. It was an initiative of the Jaarbeurs Utrecht and the Vereniging van Vakbeurs Meubel (VVM). Overview This design prize was awarded annually. In 1985 it started with three prizes for furniture designs: the Award for the best Dutch furniture design, the Style prize, and the Furniture of the year. In the following year a fourth prize was introduced, the Prize for Young Designers. In later years, the Style prize was replaced by a prize for industrial product quality. In addition to a main prize, one or more honorable mentions were awarded in each category every year. With some regularity, the main prize in a category was withheld if the jury felt that the product quality in that particular category had not been sufficient that year. The entries of the Dutch Furniture Awards were exhibited annually, for a long time at the annual International Furniture Fair at the Jaarbeurs in Utrecht. In 1997 the exhibition was held at the Kunsthal Rotterdam, and at the Woonbeurs in the Prins Bernhardhoeve in Zuidlaren. In 1998 the ceremony took place in the Promerskazerne in Naarden. The last presentation, in 1999, took place again in the Jaarbeurs during the Interdecor home exhibition in Utrecht. The jury The jury usually consisted of three people per category, including a well-known designer and a furniture manufacturer, regularly supplemented by past prize winners. Regular judges included Sem Aardewerk, Willem van Ast, Gerard van den Berg, Jan des Bouvrie, Rob Eckhardt, Ton Haas and Jan Pesman. Other jury members included Thijs Asselbergs in 1985 and Karel Boonzaaijer. Award winners 1985-1999 See also Dutch Design Dutch Design Awards Dutch design week Rotterdam Design Award References Design awards Dutch awards Dutch design 1985 establishments in the Netherlands Awards established in 1985
Dutch Furniture Awards
[ "Engineering" ]
401
[ "Design", "Design awards" ]
62,064,684
https://en.wikipedia.org/wiki/Adriana%20Lita
Adriana Eleni Lita is a Romanian materials scientist who is a member of the faint photonics group at the National Institute of Standards and Technology. She works on the fabrication and development of single-photon detectors such as transition-edge sensors and superconducting nanowire single-photon detector devices. Life Lita earned a B.S. in physics from the University of Bucharest. She completed a Ph.D. in materials science and engineering at the University of Michigan in 2000. Her dissertation was titled Correlation between microstructure and surface structure evolution in polycrystalline films. Lita's doctoral advisor was John E. Sanchez, Jr. In 2003, Lita joined the faint photonics group at the National Institute of Standards and Technology (NIST) in Boulder. She works on the fabrication and development of single-photon detectors such as transition-edge sensors (TES) and superconducting nanowire single-photon detector (SNSPD) devices. Her work includes the development of TES devices with record-high quantum efficiency, optimized at various wavelengths from the UV to the near-IR, the integration of TES with optical waveguide platforms for photonic circuits, as well as materials development for SNSPDs. Her research has included Bell test experiments and the practical implementation of quantum key distribution. In 2021, Lita was awarded the Department of Commerce Silver Medal. Selected publications See also Timeline of women in science in the United States References Living people Year of birth missing (living people) Place of birth missing (living people) National Institute of Standards and Technology people 21st-century American women scientists Women materials scientists and engineers American materials scientists Nationality missing University of Michigan alumni University of Bucharest alumni 21st-century Romanian women 21st-century Romanian scientists Romanian women scientists Romanian emigrants to the United States Expatriate academics in the United States
Adriana Lita
[ "Materials_science", "Technology" ]
368
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
62,065,916
https://en.wikipedia.org/wiki/Fungal%20DNA%20barcoding
Fungal DNA barcoding is the process of identifying species of the biological kingdom Fungi through the amplification and sequencing of specific DNA sequences and their comparison with sequences deposited in a DNA barcode database such as the ISHAM reference database or the Barcode of Life Data System (BOLD). To this end, DNA barcoding relies on universal genes that are ideally present in all fungi with the same degree of sequence variation. The interspecific variation, i.e., the variation between species, in the chosen DNA barcode gene should exceed the intraspecific (within-species) variation. A fundamental problem in fungal systematics is the existence of teleomorphic and anamorphic stages in their life cycles. These morphs usually differ drastically in their phenotypic appearance, preventing a straightforward association of the asexual anamorph with the sexual teleomorph. Moreover, fungal species can comprise multiple strains that can vary in their morphology or in traits such as carbon and nitrogen utilisation, which has often led to their description as different species, eventually producing long lists of synonyms. Fungal DNA barcoding can help to identify and associate anamorphic and teleomorphic stages of fungi, and thereby to reduce the confusing multitude of fungus names. For this reason, mycologists were among the first to spearhead the investigation of species discrimination by means of DNA sequences, at least 10 years earlier than the DNA barcoding proposal for animals by Paul D. N. Hebert and colleagues in 2003, who popularised the term "DNA barcoding". The success of identifying fungi by means of DNA barcode sequences stands or falls with the quantitative (completeness) and qualitative (level of identification) aspects of the reference database. Without a database covering a broad taxonomic range of fungi, many identification queries will not result in a satisfyingly close match. Likewise, without a substantial curatorial effort to maintain the records at a high taxonomic level of identification, queries – even when they might have a close or exact match in the reference database – will not be informative if the closest match is only identified to phylum or class level. Another crucial prerequisite for DNA barcoding is the ability to unambiguously trace the provenance of DNA barcode data back to the originally sampled specimen, the so-called voucher specimen. This is common practice in biology in the description of new taxa, where the voucher specimens, on which the taxonomic description is based, become the type specimens. When the identity of a certain taxon (or a genetic sequence in the case of DNA barcoding) is in doubt, the original specimen can be re-examined to review and ideally solve the issue. Voucher specimens should be clearly labelled as such, including a permanent voucher identifier that unambiguously connects the specimen with the DNA barcode data derived from it. Furthermore, these voucher specimens should be deposited in publicly accessible repositories like scientific collections or herbaria to preserve them for future reference and to facilitate research involving the deposited specimens. Barcode DNA markers Internal Transcribed Spacer (ITS) – the primary fungal barcode In fungi, the Internal Transcribed Spacer (ITS) is a region of roughly 600 base pairs in the ribosomal tandem repeat gene cluster of the nuclear genome. 
The region is flanked by the DNA sequences for the ribosomal small subunit (SSU) or 18S subunit at the 5' end, and by the large subunit (LSU) or 28S subunit at the 3' end. The Internal Transcribed Spacer itself consists of two parts, ITS1 and ITS2, which are separated from each other by the 5.8S subunit nested between them. Like the flanking 18S and 28S subunits, the 5.8S subunit contains a highly conserved DNA sequence, as they code for structural parts of the ribosome, which is a key component in intracellular protein synthesis. Due to several advantages of ITS (see below) and a comprehensive amount of sequence data accumulated in the 1990s and early 2000s, Begerow et al. (2010) and Schoch et al. (2012) proposed the ITS region as the primary DNA barcode region for the genetic identification of fungi. UNITE is an open ITS barcoding database for fungi and all other eukaryotes. Primers The conserved flanking regions of 18S and 28S serve as anchor points for the primers used for PCR amplification of the ITS region. Moreover, the conserved nested 5.8S region allows for the construction of "internal" primers, i.e., primers attaching to complementary sequences within the ITS region. White et al. (1990) proposed such internal primers, named ITS2 and ITS3, along with the flanking primers ITS1 and ITS4 in the 18S and the 28S subunit, respectively. Due to their almost universal applicability to ITS sequencing in fungi, these primers are still in wide use today. Optimised primers specifically for ITS sequencing in Dikarya (comprising Basidiomycota and Ascomycota) have been proposed by Toju et al. (2012). For the majority of fungi, the ITS primers proposed by White et al. (1990) have become the standard primers used for PCR amplification. These primers are: Forward primers: ITS1: TCCGTAGGTGAACCTGCGG; ITS3: GCATCGATGAAGAACGCAGC; ITS5: GGAAGTAAAAGTCGTAACAAGG. Reverse primers: ITS2: GCTGCGTTCTTCATCGATGC; ITS4: TCCTCCGCTTATTGATATGC. Advantages and shortcomings A major advantage of using the ITS region as a molecular marker and fungal DNA barcode is that the entire ribosomal gene cluster is arranged in tandem repeats, i.e., in multiple copies. This allows for its PCR amplification and Sanger sequencing even from small material samples (given the DNA is not fragmented due to age or other degenerative influences). Hence, a high PCR success rate is usually observed when amplifying ITS. However, this success rate varies greatly among fungal groups, from 65% in non-Dikarya (including the now paraphyletic Mucoromycotina, the Chytridiomycota and the Blastocladiomycota) to 100% in Saccharomycotina and Basidiomycota (with the exception of very low success in Pucciniomycotina). Furthermore, the choice of primers for ITS amplification can introduce biases towards certain taxonomic fungus groups. For example, the "universal" ITS primers fail to amplify about 10% of the tested fungal specimens. The tandem repeats of the ribosomal gene cluster cause the problem of significant intragenomic sequence heterogeneity observed among ITS copies of several fungal groups. In Sanger sequencing, this will cause ITS sequence reads of different lengths to superpose each other, potentially rendering the resulting chromatogram unreadable. Furthermore, because of the non-coding nature of the ITS region, which can lead to a substantial number of indels, it is impossible to consistently align ITS sequences from highly divergent species for further bigger-scale phylogenetic analyses. 
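To make the primer mechanics described above concrete, the following minimal Python sketch (an illustration added here, not part of the cited studies) performs an in-silico extraction of the ITS amplicon: it locates the binding site of the forward primer ITS5 and the reverse-complemented binding site of the reverse primer ITS4 in a full-length ribosomal repeat sequence and returns the region between them. The exact string matching and the function names are simplifying assumptions; real primer annealing tolerates mismatches, which dedicated in-silico PCR tools model more faithfully.

# In-silico extraction of the ITS amplicon using the White et al. (1990)
# primers ITS5 (forward, annealing in the 18S subunit) and ITS4 (reverse,
# annealing in the 28S subunit). Exact matching is a simplifying assumption.

ITS5 = "GGAAGTAAAAGTCGTAACAAGG"  # forward primer, 5'->3'
ITS4 = "TCCTCCGCTTATTGATATGC"    # reverse primer, 5'->3' as synthesised

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    # The reverse primer binds the template at the reverse complement
    # of its own 5'->3' sequence.
    return seq.translate(COMPLEMENT)[::-1]

def in_silico_amplicon(template, fwd, rev):
    # Return the substring a forward/reverse primer pair would amplify,
    # or None if either primer site is missing or wrongly ordered.
    start = template.find(fwd)
    end = template.find(reverse_complement(rev))
    if start == -1 or end == -1 or end <= start:
        return None
    return template[start:end + len(rev)]

# Hypothetical usage with a ribosomal repeat sequence read from a file:
# amplicon = in_silico_amplicon(rdna_sequence, ITS5, ITS4)
# if amplicon:
#     print(len(amplicon))  # for most fungi, roughly 600-700 bp

With the primer pair swapped for ITS3/ITS4 (or ITS5/ITS2), the same sketch would excise only the ITS2 (or ITS1) subregion, mirroring the primer pairings described above.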
The degree of intragenomic sequence heterogeneity can be investigated in more detail through molecular cloning of the initially PCR-amplified ITS sequences, followed by sequencing of the clones. This procedure of initial PCR amplification, followed by cloning of the amplicons and finally sequencing of the cloned PCR products, is the most common approach to obtaining ITS sequences for DNA metabarcoding of environmental samples, in which a multitude of different fungal species can be present simultaneously. However, this approach of sequencing after cloning was rarely used for the ITS sequences that make up the reference libraries used for DNA barcode-aided identification, thus potentially giving an underestimate of the existing ITS sequence variation in many samples. The weighted arithmetic mean of the intraspecific (within-species) ITS variability among fungi is 2.51%. This variability, however, can range from 0%, for example in Serpula lacrymans (n=93 samples), through 0.19% in Tuber melanosporum (n=179), up to 15.72% in Rhizoctonia solani (n=608), or even 24.75% in Pisolithus tinctorius (n=113). In cases of high intraspecific ITS variability, the application of a threshold of 3% sequence variability – a canonical upper value for intraspecific variation – will therefore lead to a higher estimate of operational taxonomic units (OTUs), i.e., putative species, than there actually are in a sample. In the case of medically relevant fungal species, a stricter threshold of 2.5% ITS variability allows only around 75% of all species to be accurately identified to the species level. On the other hand, morphologically well-defined but evolutionarily young species complexes or sibling species may differ only (if at all) in a few nucleotides of their ITS sequences. Solely relying on ITS barcode data for the identification of such species pairs or complexes may thus obscure the actual diversity and might lead to misidentification if not accompanied by the investigation of morphological and ecological features and/or comparison of additional diagnostic genetic markers. For some taxa, ITS (or its ITS2 part) is not variable enough as a fungal DNA barcode, as has for example been shown in Aspergillus, Cladosporium, Fusarium and Penicillium. Efforts to define a universally applicable threshold value of ITS variability that demarcates intraspecific from interspecific (between-species) variability thus remain futile. Nonetheless, the probability of correct species identification with the ITS region is high in the Dikarya, and especially so in Basidiomycota, where even the ITS1 part is often sufficient to identify the species. However, its discrimination power is partly superseded by that of the DNA-directed RNA polymerase II subunit RPB1 (see also below). Due to the shortcomings of ITS as the primary fungal DNA barcode, the necessity of establishing a second DNA barcode marker was expressed. Several attempts were made to establish other genetic markers that could serve as additional DNA barcodes, similar to the situation in plants, where the plastidial genes rbcL, matK and trnH-psbA, as well as the nuclear ITS, are often used in combination for DNA barcoding. Translational elongation factor 1α (TEF1α) – the secondary fungal barcode The translational elongation factor 1α is part of the eucaryotic elongation factor 1 complex, whose main function is to facilitate the elongation of the amino acid chain of a polypeptide during the translation process of gene expression. 
Stielow et al. (2015) investigated the TEF1α gene, among a number of others, as a potential genetic marker for fungal DNA barcoding. The TEF1α gene coding for the translational elongation factor 1α is generally considered to have a slow mutation rate, and it is therefore better suited for investigating older splits deeper in the phylogenetic history of an organism group. Despite this, the authors conclude that TEF1α is the most promising candidate for an additional DNA barcode marker in fungi, as it also features sequence regions of higher mutation rates. Following this, a quality-controlled reference database was established and merged with the previously existing ISHAM-ITS database for fungal ITS DNA barcodes to form the ISHAM database. TEF1α has been successfully used to identify a new species of Cantharellus from Texas and distinguish it from a morphologically similar species. In the genera Ochroconis and Verruconis (Sympoventuriaceae, Venturiales), however, the marker does not allow distinction of all species. TEF1α has also been used in phylogenetic analyses at the genus level, e.g. in the case of Cantharellus and the entomopathogenic Beauveria, and for the phylogenetics of early-diverging fungal lineages. Primers The TEF1α primers used in the broad-scale screening of DNA barcode gene candidates by Stielow et al. (2015) were the forward primer EF1-983F and the reverse primer EF1-1567R. In addition, a number of new primers were developed, with one primer pair achieving a high average amplification success of 88%: Forward primers: EF1-1018F, EF1-1002F, Al33_alternative_f, EF1_alternative_3f. Reverse primers: EF1-1620R, EF1-1688R, EF1_alternative_3r. Primers used for the investigation of Rhizophydiales, and especially Batrachochytrium dendrobatidis, a pathogen of amphibians, are the forward primer tef1F and the reverse primer tef1R. These primers also successfully amplified the majority of Cantharellus species investigated by Buyck et al. (2014), with the exception of a few species for which more specific primers, the forward primer tef-1Fcanth and the reverse primer tef-1Rcanth, were developed. D1/D2 domain of the LSU ribosomal RNA The D1/D2 domain is part of the nuclear large subunit (28S) ribosomal RNA, and it is therefore located in the same ribosomal tandem repeat gene cluster as the Internal Transcribed Spacer (ITS). But unlike the non-coding ITS sequences, the D1/D2 domain contains coding sequence. At about 600 base pairs, it is roughly the same length as ITS, which makes amplification and sequencing rather straightforward, an advantage that has led to the accumulation of an extensive amount of D1/D2 sequence data, especially for yeasts. Regarding the molecular identification of basidiomycetous yeasts, D1/D2 (or ITS) can be used alone. However, Fell et al. (2000) and Scorzetti et al. (2002) recommend the combined analysis of the D1/D2 and ITS regions, a practice that later became the standard required information for describing new taxa of asco- and basidiomycetous yeasts. When attempting to identify early diverging fungal lineages, the study of Schoch et al. (2012), comparing the identification performance of different genetic markers, showed that the large subunit (as well as the small subunit) of the ribosomal RNA performs better than ITS or RPB1. 
Primers For basidiomycetous yeasts, the forward primer F63 and the reverse primer LR3 have been successfully used for PCR amplification of the D1/D2 domain. The D1/D2 domain of ascomycetous yeasts like Candida can be amplified with the forward primer NL-1 (same as F63) and the reverse primer NL-4 (same as LR3). RNA polymerase II subunit RPB1 The RNA polymerase II subunit RPB1 is the largest subunit of RNA polymerase II. In Saccharomyces cerevisiae, it is encoded by the RPO21 gene. PCR amplification success of RPB1 is very taxon-dependent, ranging from 70 to 80% in Ascomycota to 14% in early diverging fungal lineages. Apart from the early diverging lineages, RPB1 has a high rate of species identification in all fungal groups. In the species-rich Pezizomycotina it even outperforms ITS. In a study comparing the identification performance of four genes, RPB1 was among the most effective when two genes were combined in the analysis: combined analysis with either ITS or the large subunit ribosomal RNA yielded the highest identification success. Other studies also used RPB2, the second-largest subunit of RNA polymerase II, e.g. for studying the phylogenetic relationships among species of the genus Cantharellus or for a phylogenetic study shedding light on the relationships among early-diverging lineages in the fungal kingdom. Primers Primers successfully amplifying RPB1, especially in Ascomycota, are the forward primer RPB1-Af and the reverse primers RPB1-Ac and RPB1-Cr. Intergenic Spacer (IGS) of ribosomal RNA genes The Intergenic Spacer (IGS) is the region of non-coding DNA between individual tandem repeats of the ribosomal gene cluster in the nuclear genome, as opposed to the Internal Transcribed Spacer (ITS), which is situated within these tandem repeats. IGS has been successfully used for the differentiation of strains of Xanthophyllomyces dendrorhous as well as for species distinction in the psychrophilic genus Mrakia (Cystofilobasidiales). Due to these results, IGS has been recommended as a genetic marker for additional differentiation (along with D1/D2 and ITS) of closely related species, and even of strains within one species, in basidiomycete yeasts. The recent discovery of additional non-coding RNA genes in the IGS region of some basidiomycetes cautions against uncritical use of IGS sequences for DNA barcoding and phylogenetic purposes. Other genetic markers The cytochrome c oxidase subunit I (COI) gene outperforms ITS in DNA barcoding of Penicillium (Ascomycota) species, with species-specific barcodes for 66% of the investigated species versus 25% in the case of ITS. Furthermore, a part of the β-tubulin A (BenA) gene exhibits a higher taxonomic resolution in distinguishing Penicillium species as compared to COI and ITS. In the closely related Aspergillus niger complex, however, COI is not variable enough for species discrimination. In Fusarium, COI exhibits paralogues in many cases, and homologous copies are not variable enough to distinguish species. COI also performs poorly in the identification of basidiomycete rusts of the order Pucciniales due to the presence of introns. Even when the obstacle of introns is overcome, ITS and the LSU rRNA (28S) outperform COI as DNA barcode markers. In the subdivision Agaricomycotina, PCR amplification success was poor for COI, even with multiple primer combinations. 
Successfully sequenced COI samples also included introns and possible paralogous copies, as reported for Fusarium. Agaricus bisporus was found to contain up to 19 introns, making the COI gene of this species the longest recorded, with 29,902 nucleotides. Apart from the substantial troubles of sequencing COI, COI and ITS generally perform equally well in distinguishing basidiomycete mushrooms. Topoisomerase I (TOP1) was investigated as an additional DNA barcode candidate by Lewis et al. (2011) based on proteome data, with the developed universal primer pair being subsequently tested on actual samples by Stielow et al. (2015). The forward primer TOP1_501-F (consisting of the universal M13 forward primer tail, an ACGAT spacer, and the actual primer sequence) and the reverse primer TOP1_501-R (consisting of the universal M13 reverse primer tail followed by the actual TOP1 reverse primer) amplify a fragment of approximately 800 base pairs. TOP1 was found to be a promising DNA barcode candidate marker for ascomycetes, where it can distinguish species in Fusarium and Penicillium – genera in which the primary ITS barcode performs poorly. However, poor amplification success with the TOP1 universal primers is observed in early-diverging fungal lineages and in basidiomycetes except Pucciniomycotina (where ITS PCR success is poor). Like TOP1, phosphoglycerate kinase (PGK) was among the genetic markers investigated by Lewis et al. (2011) and Stielow et al. (2015) as potential additional fungal DNA barcodes. A number of universal primers were developed, with the PGK533 primer pair, amplifying a circa 1,000 base pair fragment, being the most successful in most fungi except basidiomycetes. Like TOP1, PGK is superior to ITS in species differentiation in ascomycete genera like Penicillium and Fusarium, and both PGK and TOP1 perform as well as TEF1α in distinguishing closely related species in these genera. Applications Food safety A citizen science project investigated the agreement between the labelling of dried, commercially sold mushrooms and the DNA barcoding results from these mushrooms. All samples were found to be correctly labelled. However, an obstacle was the unreliability of ITS reference databases in terms of the level of identification, as the two databases (GenBank and UNITE) used for ITS sequence comparison gave different identification results for some of the samples. Correct labelling of mushrooms intended for consumption was also investigated by Raja et al. (2016), who used the ITS region for DNA barcoding from dried mushrooms, mycelium powders, and dietary supplement capsules. In only 30% of the 33 samples did the product label correctly state the binomial fungus name. In another 30%, the genus name was correct, but the species epithet did not match, and in 15% of the cases not even the genus name given on the product label matched the result of the obtained ITS barcode. For the remaining 25% of the samples, no ITS sequence could be obtained. Xiang et al. (2013) showed that, using ITS sequences, the commercially highly valuable caterpillar fungus Ophiocordyceps sinensis and its counterfeit versions (O. nutans, O. robertsii, Cordyceps cicadae, C. gunnii, C. militaris, and the plant Ligularia hodgsonii) can be reliably identified to the species level. Pathogenic fungi 
A study by Vi Hoang et al. (2019) focused on the identification accuracy for pathogenic fungi using both the primary (ITS) and the secondary (TEF1α) barcode marker. Their results show that in Diutina (a segregate of Candida) and Pichia, species identification is straightforward with either ITS or TEF1α, as well as with a combination of both. In the Lodderomyces assemblage, which contains three of the five most common pathogenic Candida species (C. albicans, C. dubliniensis, and C. parapsilosis), ITS failed to distinguish Candida orthopsilosis and C. parapsilosis, which are part of the Candida parapsilosis complex of closely related species. TEF1α, on the other hand, allowed identification of all investigated species of the Lodderomyces clade. Similar results were obtained for Scedosporium species, which are associated with a wide range of diseases, from localised to invasive: ITS could not distinguish between S. apiospermum and S. boydii, whereas with TEF1α all investigated species of this genus could be accurately identified. This study therefore underlines the usefulness of applying more than one DNA barcoding marker for fungal species identification. Conservation of cultural heritage Fungal DNA barcoding has been successfully applied to the investigation of foxing phenomena, a major concern in the conservation of paper documents. Sequeira et al. (2019) sequenced ITS from foxing stains and found Chaetomium globosum, Ch. murorum, Ch. nigricolor, Chaetomium sp., Eurotium rubrum, Myxotrichum deflexum, Penicillium chrysogenum, P. citrinum, P. commune, Penicillium sp. and Stachybotrys chartarum to inhabit the investigated paper stains. Another study investigated fungi that act as biodeteriorating agents in the Old Cathedral of Coimbra, part of the University of Coimbra, a UNESCO World Heritage Site. Sequencing the ITS barcode of ten samples with classical Sanger as well as with Illumina next-generation sequencing techniques, the authors identified 49 fungal species. Aspergillus versicolor, Cladosporium cladosporioides, C. sphaerospermum, C. tenuissimum, Epicoccum nigrum, Parengyodontium album, Penicillium brevicompactum, P. crustosum, P. glabrum, Talaromyces amestolkiae and T. stollii were the most common species isolated from the samples. Another study concerning objects of cultural heritage investigated the fungal diversity on a canvas painting by Paula Rego using the ITS2 subregion of the ITS marker. Altogether, 387 OTUs (putative species) in 117 genera of 13 different classes of fungi were observed. See also DNA barcoding Microbial DNA barcoding Pollen DNA barcoding DNA barcoding in diet assessment Consortium for the Barcode of Life References Further reading External links Aftol primer listing (as used in James et al. 2006's six-gene phylogeny) Fungal morphology and anatomy DNA barcoding Fungi Molecular genetics Bioinformatics
Fungal DNA barcoding
[ "Chemistry", "Engineering", "Biology" ]
5,428
[ "Genetics techniques", "Biological engineering", "Fungi", "DNA barcoding", "Bioinformatics", "Molecular genetics", "Molecular biology", "Phylogenetics" ]
62,067,395
https://en.wikipedia.org/wiki/Rotterdam%20Design%20Award
The Rotterdam Design Award (Rotterdamse Designprijs) was an annual and later biennial design award in the Netherlands from 1993 to 2013. For the first five editions the work of the nominees was exhibited in the Kunsthal, and afterwards in Museum Boijmans Van Beuningen. The winners were selected by an international jury during this exhibition and announced at its end. The winners received an amount of € 20,000, which could be spent freely. History The prize was organized annually from 1993 to 1997, after which it became a biennial prize. No edition took place in 2005. In the first years the prize was awarded to individual products of designers, architects and other participants in the field of design. From 2007 the designer's conceptual vision and performance in the field became the major criterion. The award was inaugurated by the Rotterdamse Kunststichting, where Christine de Baan was director of the Rotterdam Design Prize from 1993 to 2000. In 2007 Thimo te Duits & Gerard Forde managed the award and wrote the catalog. Towards the end, in 2011, the Rotterdam Design Prize was organized by Stichting Designprijs Rotterdam, Museum Boijmans Van Beuningen and Premsela, the Dutch Institute for Design and Fashion. Award winners 1993–2013 See also Dutch Design Awards Dutch Furniture Awards Selected publications Thimo te Duits & Gerard Forde, Designprijs Rotterdam 2007, Rotterdam: Stichting Designprijs, 2007. References Design awards Dutch awards Dutch design 1993 establishments in the Netherlands Awards established in 1993
Rotterdam Design Award
[ "Engineering" ]
317
[ "Design", "Design awards" ]
62,067,518
https://en.wikipedia.org/wiki/Stephen%20Eales
Stephen Eales is a professor of astrophysics at Cardiff University, where he is currently head of the Astronomy Group. In 2015, he was awarded the Herschel Medal from the Royal Astronomical Society for outstanding contributions to observational astrophysics. He also writes articles and books about astronomy. Research His main research area is the relatively new field of submillimetre astronomy, in particular the use of submillimetre observations to investigate the origin and evolution of galaxies. He has led a number of large submillimetre observing programmes. In particular, with Loretta Dunne he led the Herschel ATLAS, the largest survey of the extragalactic sky carried out with the Herschel Space Observatory. Bibliography Origins – how the planets, stars, galaxies and the universe began (Springer, 2007). Planets and Planetary Systems (textbook) (John Wiley and Sons, 2009). References Footnotes Sources Winners of RAS medals in 2015 Smoking Supernovae and Dusty Galaxies, Sky and Telescope 2004 The Final Frontier, Astronomy Now, 1997, Vol. 11, No. 6, p. 41 Cool dust and baby stars, Physics World, Volume 26, 1 Pilbratt, G. et al. 2010, Herschel Space Observatory – an ESA facility for far-infrared and submillimetre astronomy, Astronomy and Astrophysics, 518, L1 British astrophysicists Year of birth missing (living people) Living people Academics of Cardiff University
Stephen Eales
[ "Astronomy" ]
284
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
62,067,745
https://en.wikipedia.org/wiki/Visiting%20the%20sick
Visiting the sick, either in hospital or at home, is a recommended philanthropic deed in different cultures and religions, including Christianity, Judaism and Islam, and is considered an aspect of benevolence and a work of mercy. In Judaism, for instance, the act is called bikur cholim and is considered a mitzvah (commandment). In Christianity it may be done by relatives or friends or formally by a chaplain or minister. Format Visiting the sick is mainly performed on a voluntary basis. The purpose of the visit is to share feelings with the sick person and to spend some warm, quality time with them, providing them with inspiration and positive feelings that can help them fight their illness and recover. See also Anointing of the sick References Philanthropy Commandments
Visiting the sick
[ "Biology" ]
160
[ "Philanthropy", "Behavior", "Altruism" ]
47,509,871
https://en.wikipedia.org/wiki/Penicillium%20sajarovii
Penicillium sajarovii is a species of fungus in the genus Penicillium. References Further reading sajarovii Fungi described in 1981 Fungus species
Penicillium sajarovii
[ "Biology" ]
36
[ "Fungi", "Fungus species" ]
47,510,014
https://en.wikipedia.org/wiki/KVM%20Splitter
A KVM (Keyboard Video Mouse) Splitter, also known as a Reverse KVM switch, is a hardware device that allows users to control a single computer from one or more sets of keyboards, video monitors, and mice. With a KVM splitter, users access the connected computer consecutively rather than simultaneously. It differs from a KVM switch, which allows multiple computers to be controlled, usually, by a single keyboard, monitor and mouse. Types There are two main functional designs of KVM splitters: Emulation In an emulation-based design, the KVM splitter feeds the emulated controls from the active keyboard and mouse to the connected computer. A PS/2 KVM splitter must be emulation based, because PS/2 computers only have two PS/2 connectors – one designated for a keyboard and one for a mouse. In contrast, a USB computer tends to have multiple USB ports, which are not designated for a specific device. Keyboards, mice, external storage devices, printers, etc. can be connected to the USB ports on a computer. USB KVM splitters can also be emulation based, but they tend to be built using a hub-based design. Hub Based When a KVM splitter is hub based, it functions similarly to having multiple keyboards and mice connected to a single computer. Instead of sending one emulated signal to the computer based on an active keyboard and mouse, the hub-based splitter continuously feeds the active controls and emulation is not needed. Signals Common input and output video signals for KVM splitters include HDMI, DVI, and VGA. Keyboard and mouse input and output signals are PS/2 or USB. Some KVM splitters also offer additional USB input ports to connect peripheral devices, such as external hard disk drives and printers, which can be shared across all users. Monitor As one computer is feeding video for multiple monitors, there are a few ways KVM splitters can handle the potentially different DDC and EDID transmissions from the monitors. Pass-through: the KVM splitter transfers the EDID data directly from a display connected to the primary port to a source. If all monitors are identical, this will not be an issue. However, if different monitors are used, the EDID from the primary monitor may not be compatible with the others. Built-in: the KVM splitter supplies generic EDID data from an EEPROM. This provides less compatibility, as there is no communication between the monitors and the computer, and monitors that do not support the generic EDID may not work. EDID learning: the KVM splitter reads the EDID from all connected monitors and creates a new EDID table with the resolutions common to all of the displays. This ensures the best compatibility when many different monitors are used. Usage A KVM splitter can be used by multiple users to control a single computer from multiple locations. Any user can access the computer and its programs from their workstation. There are a few activation modes KVM splitters operate in, which limit simultaneous user activity. User specific: only one selectable user is active; other users can view what is happening. Instantaneous mode: different users can immediately access the computer after the last keystroke or mouse movement by the active user. Delay mode: a selectable delay, e.g. 5 seconds, occurs before another user can access the computer after the last keystroke or mouse movement by the active user. The method of switching from one user to another depends on the KVM splitter and supported modes of operation. 
Rotary switches, active electronic switches, push buttons or automatic activation via keystrokes/mouse are some of the switching methods commonly used. KVM Splitters can be combined with KVM switches and KVM extenders (which provide a way to relocate keyboard, monitor, and mouse to a far distance from the computer) to set up more complex configurations. For example, multiple users can access the same computer without the need for physical proximity to the computer. A common industrial application is to allow access to a computer located in a cleanroom from outside, to restrict the level of contamination. By providing access to multiple users through a KVM splitter, pollutants can be minimized by reducing the need for the users to enter the cleanroom. See also Rackmount KVM Audio and video interfaces and connectors References External links Computer peripherals Input/output Computer connectors
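To make the "EDID learning" mode described above concrete, the following is a minimal Python sketch. It assumes the EDID blocks of the attached monitors have already been parsed into lists of supported modes (parsing real 128-byte EDID structures is omitted, and the mode lists shown are hypothetical); the splitter then advertises only the modes common to all displays.

```python
def common_resolutions(per_monitor_modes):
    """EDID-learning sketch: keep only the modes every monitor supports.

    per_monitor_modes: one list of (width, height, refresh_hz) tuples per
    connected monitor (hypothetical, pre-parsed EDID data).
    """
    if not per_monitor_modes:
        return []
    common = set(per_monitor_modes[0])
    for modes in per_monitor_modes[1:]:
        common &= set(modes)          # intersect with each monitor's modes
    # Advertise the intersection, highest resolution first.
    return sorted(common, key=lambda m: (m[0] * m[1], m[2]), reverse=True)

# Hypothetical example: two different monitors connected to the splitter.
monitor_a = [(1920, 1080, 60), (1280, 720, 60), (1024, 768, 60)]
monitor_b = [(2560, 1440, 60), (1920, 1080, 60), (1280, 720, 60)]
print(common_resolutions([monitor_a, monitor_b]))
# -> [(1920, 1080, 60), (1280, 720, 60)]
```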
KVM Splitter
[ "Technology" ]
905
[ "Computer peripherals", "Components" ]
47,510,219
https://en.wikipedia.org/wiki/Fluidmesh%20Networks
Fluidmesh Networks was a hardware and software manufacturer of wireless point-to-point networks, wireless point-to-multipoint networks, and wireless mesh networks. Fluidmesh products are used in video-surveillance, enterprise, industrial, railway, maritime, and military projects. Corporate history Fluidmesh was founded in 2005 by four Italian engineers: Umberto Malesci, Cosimo Malesci, Andrea Orioli, and Torquato Bertani. Fluidmesh was a spin-off company from MIT where Umberto Malesci and Cosimo Malesci were graduate students in the Department of Engineering. In 2005, Umberto Malesci was a graduate student working at the MIT Computer Science and Artificial Intelligence Laboratory with Prof. Samuel Madden (MIT) when he developed Fluidmesh's initial software based on the Roofnet open-source project leveraging Click Modular Router. The company was initially incubated at the Politecnico di Milano, in Milan, Italy. Over the years, Fluidmesh Networks expanded into the United States, Europe, the Middle East and Latin America, and gained exposure in the Italian national press and on television as a successful example of innovation-driven entrepreneurship in the high-tech space. Within ten years, Fluidmesh had sold and installed approximately 24,000 miles of wireless links. In 2010, Fluidmesh partnered with CCTV camera manufacturer, Pelco. In April 2011, Fluidmesh Networks announced it had been acquired by Generation 3 Capital and Waveland Investments, two private equity firms based in Chicago. In 2016, Fluidmesh Networks and Cisco announced a partnership to combine Cisco Connected Rail Solutions and Fluidmesh train-to-ground wireless technology into a single solution. On April 6, 2020, Cisco announced its intent to acquire Fluidmesh. The acquisition was completed July 7, 2020. Products and services Trackside WiFi and Mobile Connectivity for Trains and Railroads Internet of Things (IoT) for Vessels and Maritime Applications Wireless Backhaul for fixed wireless networks References External links Computer companies established in 2005 Networking companies of the United States Networking hardware companies Wireless networking hardware 2005 establishments in New York City Cisco Systems acquisitions 2020 mergers and acquisitions American companies established in 2005 Defunct computer companies of the United States Defunct computer hardware companies Defunct computer companies based in New York (state)
Fluidmesh Networks
[ "Technology" ]
468
[ "Wireless networking hardware", "Wireless networking" ]
47,510,943
https://en.wikipedia.org/wiki/Fluctuation%20X-ray%20scattering
Fluctuation X-ray scattering (FXS) is an X-ray scattering technique similar to small-angle X-ray scattering (SAXS), but performed using X-ray exposures shorter than the sample's rotational diffusion times. This technique, ideally performed with an ultra-bright X-ray light source, such as a free electron laser, results in data containing significantly more information than traditional scattering methods. FXS can be used for the determination of (large) macromolecular structures, but has also found applications in the characterization of metallic nanostructures, magnetic domains and colloids. The most general setup of FXS is a situation in which fast diffraction snapshots of samples are taken which over a long time period undergo a full 3D rotation. A particularly interesting subclass of FXS is the 2D case, where the sample can be viewed as a 2-dimensional system with particles exhibiting random in-plane rotations. In this case, an analytical solution exists relating the FXS data to the structure. In the absence of symmetry constraints, no analytical data-to-structure relation for the 3D case is available, although various iterative procedures have been developed. Overview An FXS experiment consists of collecting a large number of X-ray snapshots of samples in different random configurations. By computing angular intensity correlations for each image and averaging these over all snapshots, the average 2-point correlation function can be subjected to a finite Legendre transform, resulting in a collection of so-called $B_l(q,q')$ curves, where $l$ is the Legendre polynomial order and $q$ / $q'$ the momentum transfer or inverse resolution of the data. Mathematical background Given a particle with density distribution $\rho(\mathbf{r})$, the associated three-dimensional complex structure factor $A(\mathbf{k})$ is obtained via a Fourier transform $A(\mathbf{k}) = \int \rho(\mathbf{r}) \exp[i\mathbf{k}\cdot\mathbf{r}]\,d\mathbf{r}$. The intensity function corresponding to the complex structure factor is equal to $I(\mathbf{k}) = A(\mathbf{k})A^{*}(\mathbf{k})$, where $*$ denotes complex conjugation. Expressing $I(\mathbf{k})$ as a spherical harmonics series, one obtains $I(\mathbf{k}) = \sum_{l}\sum_{m=-l}^{l} I_{lm}(k)\, Y_{lm}(\theta_k,\varphi_k)$. The average angular intensity correlation as obtained from many diffraction images is then $C_2(q,q',\Delta\phi) = \langle \tfrac{1}{2\pi}\int_0^{2\pi} I(q,\phi)\, I(q',\phi+\Delta\phi)\, d\phi \rangle_{\mathrm{images}}$. It can be shown that $C_2(q,q',\Delta\phi) = \sum_l B_l(q,q')\, P_l[\cos\theta_q \cos\theta_{q'} + \sin\theta_q \sin\theta_{q'} \cos\Delta\phi]$, where $B_l(q,q') = \sum_{m=-l}^{l} I_{lm}(q)\, I^{*}_{lm}(q')$ and $\theta_q = \tfrac{\pi}{2} - \arcsin(\lambda q/4\pi)$, with $\lambda$ equal to the X-ray wavelength used, and $P_l$ is a Legendre polynomial. The set of $B_l(q,q')$ curves can be obtained via a finite Legendre transform from the observed autocorrelation and are thus directly related to the structure via the above expressions. Additional relations can be obtained by computing the real-space autocorrelation $\gamma(\mathbf{r})$ of the density, $\gamma(\mathbf{r}) = \int \rho(\mathbf{u})\, \rho(\mathbf{u}+\mathbf{r})\, d\mathbf{u}$. A subsequent expansion of $\gamma(\mathbf{r})$ in a spherical harmonics series results in radial expansion coefficients that are related to the intensity function via a Hankel transform, $\gamma_{lm}(r) \propto \int_0^{\infty} I_{lm}(q)\, j_l(qr)\, q^2\, dq$, where $j_l$ is a spherical Bessel function. A concise overview of these relations has been published elsewhere. Basic relations A generalized Guinier law describing the low-resolution behavior of the data can be derived from the above expressions; in analogy with standard Guinier analysis, values of the zero-angle limits of the $B_l$ curves and an associated radius of gyration can be obtained from a least-squares analysis of the low-resolution data. The falloff of the data at higher resolution is governed by Porod laws. It can be shown that the Porod laws derived for SAXS/WAXS data hold here as well, ultimately resulting in a power-law decay of the correlation data for particles with well-defined interfaces; since the $B_l(q,q)$ curves are quadratic in the intensity, the classical $q^{-4}$ Porod falloff of the intensity translates into a $q^{-8}$ falloff of the correlation data. Structure determination from FXS data Currently, there are three routes to determine molecular structure from its corresponding FXS data. 
Algebraic phasing By assuming a specific symmetric configuration of the final model, relations between expansion coefficients describing the scattering pattern of the underlying species can be exploited to determine a diffraction pattern consistent with the measured correlation data. This approach has been shown to be feasible for icosahedral and helical models. Reverse Monte Carlo By representing the to-be-determined structure as an assembly of independent scattering voxels, structure determination from FXS data is transformed into a global optimisation problem and can be solved using simulated annealing. Multi-tiered iterative phasing The multi-tiered iterative phasing algorithm (M-TIP) overcomes convergence issues associated with the reverse Monte Carlo procedure and eliminates the need to use or derive the specific symmetry constraints required by the algebraic method. The M-TIP algorithm utilizes non-trivial projections that modify a set of trial structure factors such that the corresponding correlation data match the observed values. The real-space image, as obtained by a Fourier transform of the structure factors, is subsequently modified to enforce symmetry, positivity and compactness. The M-TIP procedure can start from a random point and has good convergence properties. References X-ray scattering
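As an illustration of the first data-reduction step described above (averaging angular intensity correlations over many snapshots), here is a minimal Python sketch. It assumes each diffraction image has already been interpolated onto a polar (q, φ) grid, a preprocessing step not shown, and computes the same-q correlation C2(q, q, Δφ) ring by ring; the function and variable names are illustrative, not part of any standard FXS package.

```python
import numpy as np

def angular_correlation(polar_images):
    """Average two-point angular correlation C2(q, q, Δφ): a sketch.

    polar_images: array of shape (n_images, n_q, n_phi) holding diffraction
    intensities already interpolated onto a polar (q, φ) grid.
    The circular autocorrelation over φ is computed per q-ring with the
    Wiener-Khinchin theorem: FFT, squared modulus, inverse FFT.
    """
    n_phi = polar_images.shape[-1]
    ft = np.fft.fft(polar_images, axis=-1)   # transform along the φ axis
    power = ft * np.conj(ft)                 # power spectrum of each ring
    c2 = np.fft.ifft(power, axis=-1).real    # circular autocorrelation in Δφ
    return c2.mean(axis=0) / n_phi           # average over all snapshots
```

The cross-ring correlations C2(q, q', Δφ) needed for the full Bl(q,q') matrices follow the same pattern, multiplying the transform of one ring by the conjugate transform of another before the inverse FFT.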
Fluctuation X-ray scattering
[ "Chemistry" ]
878
[ "X-ray scattering", "Scattering" ]
47,511,346
https://en.wikipedia.org/wiki/Glossary%20of%20virology
This glossary of virology is a list of definitions of terms and concepts used in virology, the study of viruses, particularly in the description of viruses and their actions. Related fields include microbiology, molecular biology, and genetics. A B C D E G H I K L M N O P Q R S T U V Z See also Glossary of biology Glossary of genetics Glossary of scientific naming Introduction to viruses List of viruses References Virology Virology Wikipedia glossaries using description lists
Glossary of virology
[ "Biology" ]
106
[ "Glossaries of biology" ]
47,511,501
https://en.wikipedia.org/wiki/Swedish%20Commission%20on%20Security%20and%20Integrity%20Protection
The Swedish Commission on Security and Integrity Protection () is a Swedish administrative authority under the Ministry of Justice, responsible for supervising law enforcement agencies' use of secret surveillance techniques, assumed identities and other associated activities. The commission also supervises the processing of personal data by the Swedish Police Authority. At the request of an individual, it is also obliged to check whether that person has been the subject of secret surveillance or of personal data processing, and whether this was done within the bounds of applicable legislation. See also Swedish Economic Crime Authority Swedish Police Authority Swedish Security Service References External links Government agencies of Sweden Information privacy Privacy
Swedish Commission on Security and Integrity Protection
[ "Engineering" ]
122
[ "Cybersecurity engineering", "Information privacy" ]
47,512,011
https://en.wikipedia.org/wiki/Alpha-aminoadipic%20and%20alpha-ketoadipic%20aciduria
Alpha-aminoadipic and alpha-ketoadipic aciduria is an autosomal recessive metabolic disorder characterized by an increased urinary excretion of alpha-ketoadipic acid and alpha-aminoadipic acid. It is caused by mutations in DHTKD1, which encodes the E1 subunit of the 2-oxoadipate dehydrogenase complex, an enzyme complex closely related to the oxoglutarate dehydrogenase complex (alpha-ketoglutarate dehydrogenase complex). References Autosomal recessive disorders Metabolic disorders
Alpha-aminoadipic and alpha-ketoadipic aciduria
[ "Chemistry", "Biology" ]
106
[ "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Metabolic disorders", "Metabolism" ]
47,512,065
https://en.wikipedia.org/wiki/Open%20Location%20Code
The Open Location Code (OLC) is a geocode based on a system of regular grids for identifying an area anywhere on the Earth. It was developed at Google's Zürich engineering office and released in late October 2014. Location codes created by the OLC system are referred to as "plus codes". Open Location Code is a way of encoding location into a form that is easier to use than showing coordinates in the usual form of latitude and longitude. Plus codes are designed to be used like street addresses and may be especially useful in places where there is no formal system to identify buildings, such as street names, house numbers, and post codes. Plus codes are derived from latitude and longitude coordinates, so they already exist everywhere. They are similar in length to a telephone number (e.g., 849VCWC8+R9) but can often be shortened to only four or six digits when combined with a locality (e.g., CWC8+R9, Mountain View, California). Locations close to each other have similar codes. They can be encoded or decoded offline. The character set avoids similar-looking characters to reduce confusion and errors and avoids vowels to make it unlikely that a code spells existing words. Plus codes are not case-sensitive and can therefore be easily exchanged over the phone. Since August 2015, Google Maps has supported plus codes in its search engine. The shortened plus code is displayed for a location, may be copied, clicked, or transcribed, and can be entered into the address box (followed by the town or city name when a shortened code is used outside the local area) to display the location on the map. The algorithm is licensed under the Apache License 2.0 and is available on GitHub. Applications Plus codes are increasingly being used for addressing purposes in places that aren't well served by the traditional street address system. This includes the many unnamed streets in Cape Verde, multiple slums in India, and even some Native American reservations in the United States. In Laxmi Nagar, Pune, the nonprofit Shelter Associates used codes to bring delivery services to specific homes and businesses in the slum for the first time in 2020-21. Plus codes are also being used by the International Rescue Committee in Somalia for immunization and family planning programs. Specification The Open Location Code system is based on latitudes and longitudes in WGS84 coordinates. Each code describes an area bounded by two parallels and two meridians out of a fixed grid, identified by the southwest corner and its size. The largest grid has blocks of 20 by 20 degrees (9 rows and 18 columns), and these are divided into 20 by 20 subblocks up to four times. From that level onwards, division is into 5 by 4 subblocks. Block sizes are at their maximum near the equator; block width decreases with distance from the equator. The full grid uses offsets from the South Pole (−90°) and the antimeridian (−180°) expressed in base-20 representation. To avoid misreadings and the spelling of objectionable words, the encoding excludes vowels and symbols that may be easily confused with each other: the twenty digit values 0–19 are mapped, in order, to the characters 2, 3, 4, 5, 6, 7, 8, 9, C, F, G, H, J, M, P, Q, R, V, W and X. The code begins with up to five pairs of digits, each consisting of one digit representing latitude and one representing longitude. The biggest blocks have just two digits. After 8 digits, a plus sign "+" is inserted in the code as a delimiter to aid with visual parsing. 
After 10 digits, each further subdivision is coded in a single digit, using a 4-by-5 grid of subblocks (four columns of longitude by five rows of latitude), as illustrated in the example below. Areas larger than an 8-digit block can be specified by replacing an even number of trailing digits before the + sign with the digit 0, with nothing after the + sign. Example Consider, for example, zooming in on the Merlion fountain () in Singapore, which has Open Location Code "6PH57VP3+PR6". It lies in the block around the equator bounded by 10° South and 10° North, and between 100° and 120° East. It has offsets 80° from the South Pole, and 280° from the antimeridian; or, 4 (=80/20) and 14 (=280/20) as the first base-20 digits, coded as "6" and "P". Thus, the code is "6P". This may be padded as 6P000000+. Now, refine this block to a subblock between 1° and 2° N and 103° and 104° E. This adds 11° and 3° to the SW corner. So the base-20 coordinate codes added are "H" and "5". The result is padded to 6PH50000+. After four further refinements, one lands on Merlion Park as 6PH57VP3+PR. The next step requires dividing the square used so far into a 4-by-5 grid to refine the position, and finding the cell to which the coordinates point. This is the cell named "6". BASE20 Formula Alternatively, use the formula BASE(Degrees from South or West * power(20, 4), 20) in any spreadsheet or calculator to compute the OLC code. For the coordinates from the previous example: 1.286785N = 91.286785 from the South Pole, in base 20 = 4B.5EE(5) in alphanumeric notation, which is 6H.7PP in plus-code digits. 103.854503E = 283.854503 from the antimeridian, in base 20 = E3.H1G(0) in alphanumeric notation, which is P5.V3R in plus-code digits. Combining latitude and longitude alternately gives 6P H5 7V P3 PR. The last leftover in base 20, (5)/20 in latitude and (0)/20 in longitude, gives 6 in the 4-by-5 grid. Therefore, the resulting Open Location Code is 6PH57VP3+PR6. Common usage and shortening It is common to omit the first 4 characters from the code and add an approximate location, such as a city, state, or country. The above example then becomes "7VP3+PR6 Singapore". This is supported by the Google Maps app and the plus.codes website, and also by non-Google apps. These short forms of plus codes can be used in lieu of a house number in a neighborhood. Shortened codes cannot be unambiguously encoded or decoded without context. The specification does not rely on any specific database of contextual reference location place names and their exact locations, but there are a variety of geocoding databases which map names to latitude and longitude. Disambiguation requires narrowing the possibilities to within about 40 km of the referenced location. The coordinates of the user's current location can also be used for context, if applicable. References External links Geographic coordinate systems Geocodes 2014 introductions
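The digit-pair encoding described above is compact enough to sketch in a few lines of Python. The following is a minimal, unofficial illustration of the 10-digit encoding (the official open-source libraries additionally handle clipping at the poles, padding with 0, code shortening, and the 11th grid digit); running it on the Merlion coordinates from the example reproduces 6PH57VP3+PR.

```python
OLC_ALPHABET = "23456789CFGHJMPQRVWX"

def encode_plus_code(lat, lon):
    """Encode WGS84 coordinates as a 10-digit plus code (unofficial sketch).

    The finest 10-digit block spans 1/8000 of a degree, so the offsets from
    the South Pole and the antimeridian are first converted to integer
    counts of 1/8000-degree units; the five base-20 digit pairs then come
    out exactly, without floating-point drift.
    """
    lat_units = int((lat + 90.0) * 8000)             # offset from South Pole
    lon_units = int(((lon + 180.0) % 360.0) * 8000)  # offset from antimeridian
    code = []
    for i in range(4, -1, -1):                       # most significant pair first
        code.append(OLC_ALPHABET[(lat_units // 20**i) % 20])
        code.append(OLC_ALPHABET[(lon_units // 20**i) % 20])
    return "".join(code[:8]) + "+" + "".join(code[8:])

# The Merlion fountain from the example above:
print(encode_plus_code(1.286785, 103.854503))        # -> 6PH57VP3+PR
```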
Open Location Code
[ "Mathematics" ]
1,462
[ "Geographic coordinate systems", "Coordinate systems" ]
47,512,251
https://en.wikipedia.org/wiki/Eremaea%20%C3%97%20phoenicea
Eremaea × phoenicea is a plant in the myrtle family, Myrtaceae, and is endemic to the south-west of Western Australia. It is thought to be a stabilised hybrid between two subspecies of Eremaea. It is an erect to spreading shrub with pointed, elliptic leaves and small groups of flowers, a shade of pink to red, on the ends of the branches. Description Eremaea × phoenicea is an erect, sometimes spreading shrub growing to a height of about . The leaves are long, wide, narrow elliptic to egg-shaped with the narrower end towards the base and the other end tapering to a point. They have a covering of fine hairs and one, rarely three, veins on the lower surface. The flowers are rose-coloured to red and occur in groups of one to four on the end of branches formed the previous year. The outer surface of the flower cup (the hypanthium) is hairy and there are 5 petals long. The stamens, which give the flower its colour, are arranged in 5 bundles, each containing 19 to 26 stamens. Flowering occurs from October to November and is followed by fruits which are woody capsules. The capsules are more or less urn-shaped, long, with a smooth surface. Taxonomy and naming Eremaea × phoenicea was first formally described in 1993 by Hnatiuk in the journal Nuytsia from a specimen found near Eneabba. Hnatiuk considers Eremaea × phoenicea to be a stabilised hybrid between Eremaea beaufortioides and Eremaea violacea subsp. rhaphiophylla. That view is supported by isozyme studies. The name phoenicea is derived from the Ancient Greek word φοῖνιξ (phoînix) meaning "purple" or "crimson", alluding to the flower colour of this species. Distribution and habitat Eremaea × phoenicea occurs in the Irwin district in the Geraldton Sandplains biogeographic region, where it grows in sand in kwongan and heath. Conservation Eremaea × phoenicea is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife. References phoenicea Myrtales of Australia Plants described in 1993 Endemic flora of Western Australia Hybrid plants
Eremaea × phoenicea
[ "Biology" ]
482
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
47,512,346
https://en.wikipedia.org/wiki/Helvella%20semiobruta
Helvella semiobruta is a species of fungus in the family Helvellaceae. Originally found in France, it was described as new to science in 1976. It has also been collected in Greece and Cyprus, where it grows in maquis shrubland. References Further reading External links semiobruta Fungi described in 1976 Fungi of Europe Fungus species
Helvella semiobruta
[ "Biology" ]
81
[ "Fungi", "Fungus species" ]
47,512,383
https://en.wikipedia.org/wiki/Helvella%20zhongtiaoensis
Helvella zhongtiaoensis is a species of fungus in the family Helvellaceae. It is found in China, where it grows in the forest under Pinus tabulaeformis. The fungus was described as new to science in 1990 by Jin-Zhong Cao and Bo Liu. References External links zhongtiaoensis Fungi described in 1990 Fungi of China Fungus species
Helvella zhongtiaoensis
[ "Biology" ]
83
[ "Fungi", "Fungus species" ]
47,512,516
https://en.wikipedia.org/wiki/Exidiopsis%20macroacantha
Exidiopsis macroacantha is a species of fungus in the family Auriculariaceae. Originally found in São Paulo State, Brazil, where it was growing on rotting wood, it was described as new to science in 1969 by U.S. mycologist Kenneth Wells. It has also been recorded in Costa Rica. The specific epithet macroacantha, derived from the Greek words macro ("long") and acantha ("spine"), refers to the characteristically long and thick-walled cystidia. References External links Auriculariales Fungi described in 1969 Fungi of Central America Fungi of Brazil Fungus species
Exidiopsis macroacantha
[ "Biology" ]
132
[ "Fungi", "Fungus species" ]
47,512,564
https://en.wikipedia.org/wiki/Badri%20Roysam
Badrinath "Badri" Roysam (born 1961) is an Indian-American professor and researcher. He is the current chairman of the Department of Electrical and Computer Engineering at the University of Houston Cullen College of Engineering. Dr. Roysam is notable as the creator of the FARSIGHT project, which is a collaborative effort to create an open source software toolkit to analyze multidimensional images. Roysam's work as a researcher focuses on cancer immunotherapy and neuroscience. In addition to interdisciplinary collaborations such as FARSIGHT, Dr. Roysam is also a proponent of the Electrical Power Analytics Consortium, which aims to improve the state of the power grid in the hurricane-prone Houston area. Education Roysam received his Bachelor of Technology degree from the Indian Institute of Technology (IIT) in 1984. He then went on to earn his master's degree from Washington University in St. Louis in 1987, and his Doctor of Science degree from the same institution in 1989. Career Roysam began his career at Rensselaer Polytechnic Institute in Troy, New York in 1989, where he was a professor of electrical and computer engineering. Here, he became director of the Rensselaer branch of the Center for Subsurface Imaging and Sensing Systems (CenSISS) from 2001 until his departure in 2010. CenSISS is a multi-institution, multidisciplinary NSF-funded center. In 2006, it was endowed by the Bernard Marshall Gordon foundation and renamed the Bernard Marshall Gordon Center for Subsurface Imaging and Sensing Systems (Gordon-CenSISS). Roysam initiated the development of FARSIGHT while at RPI, with the intention of developing an interdisciplinary resource for imaging tools. The development of the FARSIGHT project attracted interest, and subsequently funding, from federal institutions such as DARPA and the NIH. The success of FARSIGHT also led to industry collaborations, such as with Kitware, a New York-based company founded by an RPI alumnus. In 2010, Roysam left RPI for the University of Houston, becoming the Hugh Roy and Lillie Cranz Cullen endowed professor, as well as taking on an administrative role as the chairman of the Electrical and Computer Engineering Department. Here he continued his work with FARSIGHT, and was part of the collaborative effort to develop a tool for the analysis of high-dimensional data (STrenD). In recent years, Roysam has been recognized for his research in cancer immunotherapy using bioinformatics. Working in partnership with M.D. Anderson Cancer Center and the Chemical and Biomolecular Engineering Department of the University of Houston, Roysam and his colleagues have developed software that can single out cancer cells and profile their interactions on a cell-to-cell level. This allows close study of immune system cells and how they can be used to neutralize cancer cells. References External links The Farsight Toolkit Wiki, which provides a catalog of information relating to the FARSIGHT project, its source code, and its users and contributors 1961 births Living people American people of Indian descent Indian Institutes of Technology alumni Rensselaer Polytechnic Institute faculty Electrical engineers University of Houston faculty Washington University in St. Louis alumni
Badri Roysam
[ "Engineering" ]
647
[ "Electrical engineering", "Electrical engineers" ]
47,512,587
https://en.wikipedia.org/wiki/Effects%20of%20climate%20change%20on%20mental%20health
The effects of climate change on mental health and wellbeing are being documented as the consequences of climate change become more tangible and impactful. This is especially the case for vulnerable populations and those with pre-existing serious mental illness. There are three broad pathways by which these effects can take place: directly, indirectly or via awareness. The direct pathway includes stress-related conditions caused by exposure to extreme weather events. These include post-traumatic stress disorder (PTSD). Scientific studies have linked mental health to several climate-related exposures. These include heat, humidity, rainfall, drought, wildfires and floods. The indirect pathway can be disruption to economic and social activities. An example is when an area of farmland is less able to produce food. The third pathway is mere awareness of the climate change threat, even by individuals who are not otherwise affected by it. This especially manifests in the form of anxiety over the quality of life for future generations. An additional aspect to consider is the detrimental impact climate change can have on green or blue natural spaces, which have been shown to have a beneficial impact on mental health. Impacts of anthropogenic climate change, such as freshwater pollution or deforestation, degrade these landscapes and reduce public access to them. Even when the green and blue spaces are intact, their accessibility is not equal across society, which is an issue of environmental justice and economic inequality. Mental health outcomes have been measured by several different indicators. These include psychiatric hospital admissions, mortality, self-harm and suicide rates. People with pre-existing mental illness, Indigenous peoples, migrants and refugees, and children and adolescents are especially vulnerable. The emotional responses to the threat of climate change can include eco-anxiety, ecological grief and eco-anger. Such emotions can be rational responses to the degradation of the natural world and may lead to adaptive action. Assessing the exact mental health effects of climate change is difficult; increases in heat extremes pose risks to mental health which can manifest themselves in increased mental health-related hospital admissions and suicidality. Pathways Mental health is a state of well-being in which an individual can recognize their abilities, handle the daily stresses of life, work productively and contribute to their community. There are three main causal pathways by which climate change impacts mental health: directly, indirectly or via awareness (or "psychosocial"). In some cases, people may be affected via more than one pathway at once. Various studies use different nomenclature to designate the three causal pathways; for example, some designate the "awareness" pathway using the term "indirect impact," while grouping "indirect effects" via financial and social disruption under "psychosocial". Impacts from direct pathway The direct pathway includes stress-related conditions caused by exposure to extreme weather events, such as heat waves, droughts, floods and wildfires. Such events can be traumatic, involving dislocation from climate-change-induced natural disasters such as flooding or fire, the loss of friends and family, or other traumatic experiences. Exposure to such events can increase mental illnesses such as post-traumatic stress disorder, acute stress disorder, depression, and generalized anxiety disorder. 
These effects often occur simultaneously, as well as individually. A large amount of literature exists concerning the association between disasters and mental health (without explicitly linking an increase in frequency and severity to climate change). Most commonly the result is short-term stress, from which people can often make a rapid recovery. But sometimes chronic conditions set in, such as post-traumatic stress, somatoform disorder or long-term anxiety, especially among those who have been exposed to multiple events. A swift response by authorities to restore a sense of order and security can substantially reduce the risk of any long-term psychological impact for most people. However, individuals who already have mental ill health, especially psychosis, may need intensive care, which can be challenging to deliver if local mental health services have been disrupted by the extreme weather. Physical health can be severely impacted by climate change (see also effects of climate change on human health). The deterioration of a person's physical health can also lead to a deterioration in their mental health. The less extreme direct manifestations of climate change can also have direct psychological effects. The single most well-studied linkage between weather and human behavior is that between temperature and aggression. Various reviews conclude that high temperatures cause people to become bad-tempered, leading to increased physical violence. Increased temperatures and heatwaves Several studies have shown that there is a correlation between elevated temperatures and psychiatric hospital admissions for a range of mental and neurological disorders (dementia, mood disorders, anxiety disorders, schizophrenia, bipolar disorder, somatoform disorders, and disorders of psychological development). Mortality has also been found to be influenced by high ambient temperatures for people living with mental illness and neurological conditions. Another European study supports this finding with increased mortality risk for people with psychiatric disorders during heat waves from 2000 to 2008 in Rome and Stockholm, particularly for older people (75+) and women. Projections of mortality under different climate change scenarios in China also estimate increasing trends in heat-related excess mortality for mental disorders but a decreasing trend in cold-related excess mortality. Several studies from Asia found that fluctuating temperatures influenced mental health and well-being, impacting productivity and livelihoods. For example, long-term exposure to high and low temperatures in Taiwan resulted in a 7% increase in major depressive disorder incidence per 1 °C increment in regions with an average annual temperature above the median 23 °C. Suicide rates Temperature has also been associated with self-harm and suicide rates. Using data from the US and Mexico, suicide rates were found to increase by 0.7% and 3.1%, respectively, for a 1 °C increase in monthly average temperature. Increasing temperatures are associated with increases in aggressive behavior and rising crime rates, leading to increases in homicides and assaults, as well as increased suicide rates in young men and older adults. Higher ambient temperatures are also associated with emergency department visits for mental health, suicides, and self-reporting of poor mental health. It is projected that in the coming decades, suicide rates in the United States and Mexico will increase due to increasing ambient temperatures. 
Assuming that there is no reduction in the current rate of greenhouse gas emissions, it is projected that by 2050, there will be an additional 9,000 to 40,000 suicides in the United States and Mexico, which is a rate comparable to the one estimated after the impact of economic recessions, suicide prevention programs, and gun restriction laws. The study also showed an increase in depressive language and suicidal ideation in social media posts correlated with an increase in temperatures. In India, higher temperatures during growing seasons for crops have also been associated with increased suicides, at a rate of an additional 67 deaths per year per additional 1 °C. Wildfires Studies from North America have shown that experiences of evacuation and isolation due to wildfires, as well as feelings of fear, stress, and uncertainty, contributed to acute and long-term negative impacts on mental and emotional well-being. Prolonged smoke events were linked to respiratory problems, extended time indoors, and disruptions to livelihood and land-based activities, which negatively affected mental well-being. Similar findings were reported in an Australian study, with increased rates of stress, depression, anxiety and posttraumatic stress disorder being correlated with bushfire exposure severity. Floods An Australian study in rural communities concluded that the threats of drought and flood are intertwined and contribute to decreased well-being through stress, anxiety, loss, and fear. A cohort study from the UK looking at the long-term impact of flooding found psychological morbidity persisted for at least three years after the flooding event. Increased carbon dioxide concentrations Drivers of climate change may also have physiological effects on the brain, in addition to their psychological impacts. By the end of the 21st century people could be exposed to indoor carbon dioxide levels of up to 1400 ppm, triple the amount commonly experienced outdoors today. This may cut humans' basic decision-making ability indoors by ~25% and complex strategic thinking by ~50% due to carbon dioxide toxicity. Impacts from indirect pathway Climate change can also affect wellbeing and mental health through indirect consequences, such as "loss of land, flight and migration, exposure to violence, change of social, ecological, economic or cultural environment". Indirect effects on mental health can also occur via impacts on physical health. Physical health and mental health have a reciprocal relationship, so any climate-change-related effect on physical health can potentially also affect mental health indirectly. In several parts of the world, climate change significantly impacts people's financial income, for example, by reducing agricultural output. This can cause significant stress, which in turn can lead to depression, suicidal ideation, and other negative psychological conditions. Consequences can be especially severe if financial stress is coupled with significant disruption to social life, such as relocation to camps. Effective government interventions, similar to those used to relieve the stress from a financial crisis, can alleviate the negative conditions caused by such disruption. Having to migrate due to an extreme weather event or conflict exacerbated by climate change can lead to increased rates of physical illnesses and psychological distress. 
Impacts from increasing awareness pathway The third pathway is mere awareness of the climate change threat, even among individuals who have not personally experienced any direct negative impacts. This can cause psychological distress, anxiety (eco-anxiety), and grief (eco-grief). The increasing "awareness of the existential dimension of climate change" can influence people's wellbeing or challenge their mental health, especially for children and adolescents. Awareness of climate change among young people has grown in Europe, as evidenced by the “Fridays for Future” movement that started in summer 2018. This can lead to higher emotional distress amongst young people, as well as feelings of fear, sadness, and anger, and apocalyptic and pessimistic feelings – which can lead to grief, anxiety and hopelessness – all factors which can impact people's mental health. This effect has been compared to the nuclear anxiety which occurred during the Cold War. Conditions such as eco-anxiety are very rarely severe enough to require clinical treatment. While unpleasant and thus classified as negative, such conditions have been described as valid rational responses to the reality of climate change. Types of mental health outcomes There are a multitude of mental illnesses that affect everyone differently. The types of mental health outcomes that are related to the effects of climate change (for example during heat waves) can be grouped as follows: Clinical disorders Trauma-related disorders Post-traumatic stress disorder (PTSD) Acute stress disorder Depression Anxiety Self-harm and suicide Sub-clinical conditions Psychological distress Environmental and climate-specific constructs Climate anxiety (eco-anxiety) Solastalgia Psychiatric-related hospitalisations and deaths (admissions for mental and neurological disorders, including dementia, mood disorders, anxiety disorders, schizophrenia, bipolar disorder, somatoform disorders, and disorders of psychological development) Exacerbation of pre-existing mental illness Potential neurodevelopmental impacts Vulnerable populations and life stages Climate change does not impact everyone equally; those of lower economic and social status are at greater risk and experience more devastating impacts. People with pre-existing mental illness Higher temperatures can affect people taking certain psychotropic medications (including hypnotics, anxiolytics, and antipsychotics), who have an increased risk of heatstroke and death as a result of high temperatures. Indigenous peoples Inuit communities Qualitative studies reporting the unique mental health impacts of climate change on Inuit communities in Canada have described a loss of place-based solace, land-based activities such as hunting, and cultural identity due to changing weather and local landscapes. Climate change has devastating effects on Indigenous peoples' psychological wellbeing as it impacts them directly and indirectly. As their lifestyles are often closely linked to the land, climate change directly impacts their physical health and financial stability in quantifiable ways. There is also a concerning correlation between severe mental health issues among Indigenous peoples worldwide and environmental changes. The connection and value Indigenous cultures ascribe to land means that damage to or separation from it, directly impacts mental health. For many, their country is interwoven with psychological aspects such as their identity, community and rituals. 
Inadequate government responses which neglect Indigenous knowledge further worsen the negative psychological effects linked to climate change. This produces the risk of cultural homogenization due to global adaptation efforts to climate change and the disruption of cultural traditions due to forced relocation. Countries with lower socio-economic status and minority groups in high socio-economic areas are disproportionately affected by the climate crisis. This has created climate migrants due to worsening environmental conditions and catastrophic climate events. Changes in sea levels and ice formation have great impacts on Indigenous communities. The changes can lead to shifts in emotions such as anger, fear, anxiety and a sense of loss, as well as to changes in behavior such as withdrawal, aggression, and increased substance use. A sense of loss due to the changes in traditional weather prediction and navigation techniques has been observed, especially among younger generations, where it results in feelings of cultural dislocation and dissociation as well as changes in identity. Climate change is likely to continue affecting Indigenous communities and their mental health over the coming decades. Another study indicated that the cumulative effect of repeated exposure to climate change events and related stressors would be likely to lead to some form of mental illness. The effect of climate change on Inuit youth has also been studied, with concern for Elders, reduced connection to the land, and challenges to cultural activities, among other things, affecting the mental health of young people. Indigenous peoples of Australia Studies conducted with Aboriginal and Torres Strait Islander peoples from Australia also highlight the environmental impacts of climate change on emotional wellbeing, including increased community distress from a deteriorating connection to country. Heat also appeared to be associated with suicide incidence in Australia's Indigenous populations; however, other socio-demographic factors may play a more critical role than meteorological factors. Children Climate change is a serious threat to children's and adolescents' mental health. Children's mental health, their rights, and climate change need to be seen as interlinked topics, not separate points. Children and young adults are the most vulnerable to climate change impacts. Many of the climate change impacts which affect children's physical health also lead to psychological and mental health consequences. Children who live in geographic locations that are most susceptible to the impacts of climate change, and/or with weaker infrastructure and fewer supports and services, suffer the worst impacts. The impacts of climate change on children include a high risk of mental health consequences such as PTSD, depression, anxiety, phobias, sleep disorders, attachment disorders, and substance abuse. These conditions can lead to problems with emotion regulation, cognition, learning, behavior, language development, and academic performance. Adolescents Lack of political advocacy and change, combined with an increase in media attention, has brought about ecological grief, which has had particular impacts on adolescent mental health. Climate change affects adolescents differently and in a multitude of ways. Many of these ways intersect as each adolescent processes their trauma and distress. Adolescents with pre-existing mental illnesses experience an elevated risk of ecological grief and distress. 
While these feelings are not directly harmful to the adolescent's physical health, they are unpleasant and a rising issue. Ecological grief, distress, anxiety, and anger are the most common emotions sparked among adolescents. Psychologists, specifically climate psychologists, find it difficult to identify the source of these emotions and to develop methods to aid those in need and to protect those not yet as affected. Being forced to move, or displacement, is becoming more common as the climate crisis intensifies. Forced displacement may be caused by natural disasters, reduced food availability or food security, famine, water scarcity, or other environmental impacts. Displacement alone can evoke feelings of grief and loss, as adolescents are forced to move from a place of comfort to somewhere unknown. Reduced food availability, famine and water scarcity can indirectly impact an adolescent's health by provoking fear and anxiety, as well as grief and loss. For adolescents, relationships are important. Displacement can put strains on an adolescent's social relationships, as well as prevent them from further developing their social skills and relationships. Community conflict can also indirectly impact an adolescent's mental health. A community may hold conflicting views on how to approach climate change, its mitigation, and climate change awareness. Being surrounded by negative emotions and situations can weigh heavily on a developing adolescent. They may not want to experience this conflict with others personally and may pull back from social interaction. They may possess different ideas, but struggle to get someone to listen due to their age. Feelings of hopelessness, helplessness, and fear become prevalent. Environmental and climate-specific constructs Climate anxiety (eco-anxiety) Eco-grief Solastalgia Co-benefits While most research on the psychological impact of climate change finds negative effects, there may be some positive impacts via direct or indirect pathways. Climate activism Direct experience of the negative impacts of climate change may also lead to personal changes that can be seen as positive. Direct experiences of environmental events such as flooding have resulted in greater psychological salience and concern for climate change, which in turn predicts intentions, behaviors, and policy support regarding climate change. At a personal level, emotions like worry and anxiety are a normal, if uncomfortable, part of life. They can be seen as part of a defense system that identifies threats and deals with them. From this perspective, anxiety can be useful in motivating people to seek information and take action on a problem. Anxiety and worry are more likely to be associated with engagement when people feel that they can do things. Feelings of agency can be strengthened by including people in participatory decision-making. Problem-focused and meaning-focused coping skills can also be promoted. Problem-focused coping involves information gathering and trying to find out what you personally can do. Meaning-focused coping involves behaviors such as identifying positive information, focusing on constructive sources of hope, and trusting that other people are also doing their part. A sense of agency, coping skills, and social support are all important in building general resilience. Education may benefit from a focus on emotional awareness and the development of sustainable emotion-regulation strategies. 
For some individuals, the increased engagement caused by the shared struggle against climate change reduces social isolation and loneliness. At a community level, learning about the science of climate change and taking collective action in response to the threat can increase altruism and social cohesion, strengthen social bonds, and improve resilience. Such positive social impact is generally associated only with communities that had somewhat high social cohesion in the first place, prompting community leaders to act to improve social resiliency before climate-related disruption becomes too severe. Mitigation and adaptation efforts There are potential mental health benefits of mitigation actions taken by individuals, such as active transport, increased physical activity, and healthier diets. Healthy living has been associated with improved mental health and overall well-being, supported by research on factors such as exercise, nutrition, sleep, stress management, and social connections. History Early investigation of the mental health impacts of climate change began in the 20th century, and the topic became more prominent in the 21st century. See also Barriers to pro-environmental behaviour Brain health and pollution Climate psychology Effects of climate change on human health Politics of climate change Psychological impact of climate change References Effects of climate change Environmental psychology Climate change and society Mental health Environment and health
Effects of climate change on mental health
[ "Environmental_science" ]
3,968
[ "Environmental social science", "Environmental psychology" ]
47,512,765
https://en.wikipedia.org/wiki/Pteridiospora%20spinosispora
Pteridiospora spinosispora is a species of fungus in the class Dothideomycetes. Taxonomy The fungus was discovered in 1963, isolated from the mycorrhizae of sweetgum (Liquidambar styraciflua). The type locality was near the Mississippi River in northern Mississippi; it was later reported growing with the roots of green ash (Fraxinus pennsylvanica). The species was first mentioned in a 1966 report, where it was described as an "unidentified sphaeriaceous ascomycete". Filer formally described the fungus in 1969. Description The fruitbodies of the fungus are small, dull black, and spherical, measuring 114–251 by 114–251 μm, with thick walls (up to 24 μm); they occur singly or in dense groups. Underlying the fruitbodies is a small, thin-walled mat of mycelium. The club-shaped asci (spore-bearing cells) measure 85 by 25 μm. The ascospores are black and spiny, measuring 21–25 by 12–20 μm (with the spines 2–5 μm); they contain a single septum. The ornamented spores clearly distinguish P. spinosispora from other members of Pteridiospora. References External links Enigmatic Dothideomycetes taxa Fungi described in 1969 Fungi of the United States Fungi without expected TNC conservation status Fungus species
Pteridiospora spinosispora
[ "Biology" ]
303
[ "Fungi", "Fungus species" ]
47,513,006
https://en.wikipedia.org/wiki/Platypeltella%20angustispora
Platypeltella angustispora is a species of fungus in the family Microthyriaceae. It was described as new to science in 1969 by Marie Farr and Flora Pollack, both scientists working at the United States Department of Agriculture. The fungus was found growing on several collections of diseased plant material (Chamaedorea species) sent from Mexico, and intercepted by quarantine inspectors. It was initially thought to be an unknown Pyrenomycetes species of the family Asterinaceae (according to the taxonomical concepts of the time). After studying the material, Farr and Pollack described it as a new species of Platypeltella, clearly distinguishable from the type species (P. smilacis) by the size and shape of its ascospores. References External links Fungi described in 1969 Fungi of Mexico Microthyriales Taxa named by Marie Leonore Farr Fungi without expected TNC conservation status Fungus species
Platypeltella angustispora
[ "Biology" ]
199
[ "Fungi", "Fungus species" ]
47,515,694
https://en.wikipedia.org/wiki/Chromosome%20territories
In cell biology, chromosome territories are regions of the nucleus preferentially occupied by particular chromosomes. Interphase chromosomes are long DNA strands that are extensively folded, and are often described as appearing like a bowl of spaghetti. The chromosome territory concept holds that despite this apparent disorder, chromosomes largely occupy defined regions of the nucleus. Most eukaryotes are thought to have chromosome territories, although the budding yeast S. cerevisiae is an exception to this. Characteristics Chromosome territories are spheroid with diameters on the order of one to few micrometers. Nuclear compartments devoid of DNA called interchromatin compartments have been reported to tunnel into chromosome territories to facilitate molecular diffusion into the otherwise tightly packed chromosome-occupied regions. History and experimental support The concept of chromosome territories was proposed by Carl Rabl in 1885 based on studies of Salamandra maculata. Chromosome territories have gained recognition using fluorescence labeling techniques (fluorescence in situ hybridization). Studies of genomic proximity using techniques like chromosome conformation capture have supported the chromosome territory concept by showing that DNA-DNA contacts predominantly happen within particular chromosomes. See also References Molecular biology Nuclear organization
Chromosome territories
[ "Chemistry", "Biology" ]
233
[ "Biochemistry", "Nuclear organization", "Cellular processes", "Molecular biology" ]
47,515,759
https://en.wikipedia.org/wiki/Alvernaviridae
Alvernaviridae is a family of non-enveloped positive-strand RNA viruses. Dinoflagellates serve as natural hosts. There is one genus in this family, Dinornavirus, which contains one species: Heterocapsa circularisquama RNA virus 01. Effects associated with this family include host population control, possibly through lysis of the host cell. Structure Viruses in Alvernaviridae are non-enveloped, with icosahedral and spherical geometries, and T=3 symmetry. The diameter is around 34 nm. Genome Genomes are linear and non-segmented, around 4.4 kb in length. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive-strand RNA virus replication model in the cytoplasm. Positive-strand RNA virus transcription is the method of transcription. The virus is assembled in the cytoplasm. Dinoflagellates serve as the natural host. References External links Viralzone: Alvernaviridae ICTV Virus families Riboviria
Alvernaviridae
[ "Biology" ]
228
[ "Viruses", "Riboviria" ]
47,515,762
https://en.wikipedia.org/wiki/Dictionary%20of%20Irish%20Architects
The Dictionary of Irish Architects is an online database which contains biographical and bibliographical information on architects, builders and craftsmen born or working in Ireland during the period 1720 to 1940, and information on the buildings on which they worked. Although it is principally devoted to architects, it includes engineers who designed buildings and structures, some builders, some artists and craftsmen, and some amateurs and writers on architectural subjects. The dictionary was initially devised and created by Ana Martha Rowan. Architects from Britain and elsewhere who never resided in Ireland but designed buildings there are not given full biographical treatment, and only their Irish works are listed. Irish-born architects who emigrated are similarly treated; their careers after their departure from Ireland are not described in detail, and only their Irish works are listed in full. The Dictionary of Irish Architects was created and compiled in the Irish Architectural Archive (IAA) over a period of thirty years. It was made publicly available online in January 2009. According to the IAA it remains a "work in progress" with new data added and updated since its initial release. As of 2018, it reportedly contained 6,700 entries. References External links Online databases Architecture in Ireland Architects Online encyclopedias Irish architectural history
Dictionary of Irish Architects
[ "Technology", "Engineering" ]
243
[ "Architecture stubs", "Computing stubs", "Architecture", "Computer network stubs" ]
47,515,820
https://en.wikipedia.org/wiki/Carmotetraviridae
Carmotetraviridae is a family of positive-strand RNA viruses. There is only one genus in this family, Alphacarmotetravirus, which has one species: Providence virus. Lepidopteran insects serve as natural hosts. Structure Viruses in Carmotetraviridae are non-enveloped, with icosahedral geometries, and T = 4 symmetry. The virion diameter is around 40 nm. Genome Genomes are linear, around 6.1 kb in length. The genome codes for two proteins, and has three open reading frames. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive stranded RNA virus replication model. Positive stranded RNA virus transcription is the method of transcription. Translation takes place by suppression of termination. Lepidopteran insects serve as the natural host. Transmission routes are oral and tissue tropism is the midgut. References External links Viralzone: Carmotetraviridae ICTV Virus families Riboviria
Carmotetraviridae
[ "Biology" ]
216
[ "Viruses", "Riboviria" ]
47,515,850
https://en.wikipedia.org/wiki/Megabirnaviridae
Megabirnaviridae is a family of double-stranded RNA viruses with one genus Megabirnavirus which infects fungi. The family name reflects members' bipartite dsRNA genome (as in Birnaviridae) together with the prefix mega-, indicating a genome size (16 kbp) greater than those of the families Birnaviridae (6 kbp) and Picobirnaviridae (4 kbp). There is only one species in this family: Rosellinia necatrix megabirnavirus 1. Effects associated with this family include reduced host virulence. Structure Viruses in the family Megabirnaviridae are non-enveloped, with icosahedral geometries, and T=1 symmetry. The diameter is around 50 nm. Genome The genome is composed of two double-stranded RNA segments of 7.2–8.9 kbp each and of a total length of 16.1 kbp. The genome codes for four proteins. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the double-stranded RNA virus replication model. Double-stranded RNA virus transcription is the method of transcription. The virus exits the host cell by cell-to-cell movement. Fungi serve as the natural host. Transmission routes are parental and sexual. Taxonomy The family Megabirnaviridae has one genus which has one species: Megabirnavirus Rosellinia necatrix megabirnavirus 1 References External links ICTV Report: Megabirnaviridae Viralzone: Megabirnaviridae Virus families Riboviria
Megabirnaviridae
[ "Biology" ]
321
[ "Viruses", "Riboviria" ]
47,515,964
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Note%205
The Samsung Galaxy Note 5 (stylized as SΛMSUNG Galaxy Note5) is an Android-based smartphone developed, produced and marketed by Samsung Electronics. Unveiled on 13 August 2015, it is the successor to the Galaxy Note 4 and part of the Samsung Galaxy Note series. The Galaxy Note 5 carries over hardware and software features from the Galaxy S6, including a changed design with a glass backing, improved camera, and fingerprint scanner. The preloaded camera software also includes built-in livestreaming functionality as well as features meant for use with the device's bundled, spring-loaded stylus. The device was released together with the Samsung Galaxy S6 Edge+. The device received positive reviews from critics, who praised the upgraded build quality over prior models, along with improvements to its performance, camera, and other changes. Similarly to the S6, Samsung was criticized for making the Galaxy Note 5's battery non-removable, and removing the ability to expand its storage via microSD. It was argued that these changes potentially alienated power users—especially because the Galaxy Note series had historically been oriented towards this segment of the overall market. The Galaxy Note 5 was briefly succeeded by the Samsung Galaxy Note 7, released in August 2016. However, that device was ultimately recalled and pulled from the market after repeated incidents where batteries overheated and caught on fire. The discontinued Note 7 was later re-launched as Galaxy Note Fan Edition in July 2017, while a fully-fledged successor, the Samsung Galaxy Note 8, was released in September 2017. Specifications Hardware Design The Galaxy Note 5 adopts a similar design and construction to the Galaxy S6, featuring a unibody metal frame and glass backing, although unlike the standard S6, the back of the device is curved. It is offered in dark blue, white, gold, and silver color finishes. The storage slot for the S Pen stylus uses a spring-loaded mechanism to eject the pen for added convenience. Due to the design, inserting the pen in reverse could cause it to get stuck. This issue is known as "pengate". Functionality Like on the Galaxy S6, Mobile High-Definition Link (MHL), a feature introduced with the Galaxy S II and Galaxy Note in 2011, has been removed from the series with the Galaxy Note 5. The Galaxy Note 5 has a non-removable 3,000 mAh lithium-ion battery and supports the Qi open interface standard. The Note 5 features a 5.7-inch 1440p Super AMOLED display. It is powered by a 64-bit Exynos 7 Octa 7420 system-on-chip, consisting of four 2.1 GHz Cortex-A57 cores and four 1.5 GHz Cortex-A53 cores, and has 4 GB of LPDDR4 RAM. The Galaxy Note 5 is available with either 32 GB or 64 GB of storage (a special "Winter Edition" exclusive to South Korea offers 128 GB storage), and utilizes a 3,000 mAh battery with wireless and fast charging (Qualcomm Quick Charge 2.0) support. Samsung claims that wired and wireless fast charging can fully charge the phone in 90 and 120 minutes respectively. Similarly to the S6, the Note 5 does not offer expandable storage or the ability to remove the battery, unlike its predecessor. As with the S6, the fingerprint scanner in the home button now uses a touch-based scanning mechanism rather than swipe-based, and the device also supports Samsung Pay. Camera The 16-megapixel rear-facing camera is identical to the Galaxy S6, with an f/1.9 aperture, optical image stabilization, object tracking autofocus, and real-time HDR. 
Video recording is supported at 2160p with 30 frames per second, 1080p with 60fps and 720p with 120fps. Software The Galaxy Note 5 shipped with Android 5.1.1 Lollipop. The new "Screen off memo" feature allows the phone to be awoken directly to a note screen when the stylus is removed. The Camera app on the Note 5 also allows public and private livestreaming directly to YouTube, and supports export of RAW images. In February 2016, Samsung began to release Android 6.0.1 Marshmallow for the Galaxy Note 5. The Galaxy Note 5 also gradually received the Nougat (Android 7.0) update with TouchWiz Grace UX during the first and second quarters of 2017. European release The Galaxy Note 5 was not released in Europe, in favor of solely marketing the S6 Edge+ in the region. Samsung European Vice President of Brand and Marketing Rory O'Neill explained that the decision was based upon market research showing that consumers in the region primarily viewed large-screen phones as being oriented towards entertainment, and not productivity. Reception Reviews The Verge complimented the higher-quality build of the Galaxy Note 5, describing it as being a "more humane device" due to its lighter build with thinner bezels in comparison to the Galaxy Note 4, along with its display, performance and additional S Pen features. However, the Galaxy Note 5 was panned for not offering a removable battery, expandable storage, or a 128 GB model, considering these oversights to be inappropriate for a device in a series that was "unapologetically meant for power users." The device was also described as being the result of Samsung "[holding themselves] back", having dropped the "old, unfettered excessiveness of the old Note" in favour of developing a "consumer-friendly" device with only minor upgrades over the S6. TechRadar shared a similar degree of positivity towards the Galaxy Note 5, noting that "the sacrifices Samsung felt it needed to make to get to that premium Note 5 design have turned off some longtime users. Thankfully, there's a lot more to like about this phone upgrade than dislike." Issues Following its release, it was discovered that inserting the pen into the Note 5's storage slot backwards could result in permanent damage to the spring mechanism, making the stylus become stuck, or damaging the sensor that detects when the S Pen is removed; all of these scenarios render the stylus unusable. This issue was dubbed "pengate". Samsung was aware of this issue and stated that it had provided a warning against backward pen insertion in the Galaxy Note 5's manual, but placed more prominent warning labels on the device itself on later shipments. In January 2016, it was reported that the design of the mechanism had been revised to allow the safe ejection of a pen accidentally inserted backwards, without causing damage to the sensor. This was achieved by using a stronger, differently designed clip that accommodated the pen being stored both ways. It is not possible to confirm which model a device is without opening it. Sales In its first three days on sale, over 75,000 units of the Note 5 (together with the S6 Edge+) were sold in South Korea, exceeding the rate of sales of the previous year's models. A study by AnTuTu detailed that this smartphone was one of the most popular Android devices in the first half of 2016. Other The device has been featured in the music video of Focus, a song by Ariana Grande, with her writing the phrase "Focus on me" using the stylus. 
References External links Samsung smartphones Samsung mobile phones Samsung Galaxy 5 Mobile phones introduced in 2015 Discontinued flagship smartphones Discontinued Samsung Galaxy smartphones Mobile phones with stylus Mobile phones with 4K video recording
Samsung Galaxy Note 5
[ "Technology" ]
1,550
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
47,516,047
https://en.wikipedia.org/wiki/Permutotetraviridae
Permutotetraviridae is a family of viruses. Lepidopteran insects serve as natural hosts. The family contains one genus that has two species. Infection outcomes associated with this family vary from inapparent to lethal. Taxonomy Permutotetraviridae has one genus which contains two species: Genus: Alphapermutotetravirus Euprosterna elaeasa virus Thosea asigna virus Structure Viruses in Permutotetraviridae are non-enveloped, with icosahedral geometries, and T=4 symmetry. The diameter is around 40 nm. Genomes are linear, around 5.6 kb in length. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive stranded RNA virus replication model. Positive stranded RNA virus transcription is the method of transcription. Lepidopteran insects serve as the natural host. Transmission routes are oral. References External links Viralzone: Permutotetraviridae ICTV Virus families Riboviria
Permutotetraviridae
[ "Biology" ]
219
[ "Viruses", "Riboviria" ]
47,516,088
https://en.wikipedia.org/wiki/Quadriviridae
Quadriviridae is a family of double-stranded RNA viruses with a single genus Quadrivirus. The fungus Rosellinia necatrix serves as a natural host. The name of the group derives from the quadripartite genome of its members (the Latin-derived prefix quadri- means "four"). There is only one species in this family: Rosellinia necatrix quadrivirus 1. Structure Mycoviruses in the family Quadriviridae have a non-enveloped isometric capsid which consists of 60 copies of heterodimers of the structural proteins P2 and P4. The diameter of the capsid is around 48 nm. Genome Family member genomes are composed of double-stranded RNA. They are divided into four segments which each code for a protein. The lengths of the different segments are between 3.5 and 5.0 kbp. The total genome is around 16.8 kbp. The capsid also contains, alongside the genome, the RNA-dependent RNA polymerase. Life cycle Quadriviruses are transmitted internally. They are propagated during cell division and hyphal anastomosis. Viral replication occurs in the cytoplasm. It follows the double-stranded RNA virus replication model. Double-stranded RNA virus transcription is the method of transcription. The fungus Rosellinia necatrix serves as a natural host. Taxonomy The family Quadriviridae has one genus Quadrivirus which contains the species: Rosellinia necatrix quadrivirus 1 References External links ICTV Report: Quadriviridae Viralzone: Quadriviridae Virus families Riboviria
Quadriviridae
[ "Biology" ]
332
[ "Viruses", "Riboviria" ]
47,516,955
https://en.wikipedia.org/wiki/Filters%20in%20topology
Filters in topology, a subfield of mathematics, can be used to study topological spaces and define all basic topological notions such as convergence, continuity, compactness, and more. Filters, which are special families of subsets of some given set, also provide a common framework for defining various types of limits of functions such as limits from the left/right, to infinity, to a point or a set, and many others. Special types of filters called ultrafilters have many useful technical properties and they may often be used in place of arbitrary filters. Filters have generalizations called prefilters (also known as filter bases) and filter subbases, all of which appear naturally and repeatedly throughout topology. Examples include neighborhood filters/bases/subbases and uniformities. Every filter is a prefilter and both are filter subbases. Every prefilter and filter subbase is contained in a unique smallest filter, which they are said to generate. This establishes a relationship between filters and prefilters that may often be exploited to allow one to use whichever of these two notions is more technically convenient. There is a certain preorder on families of sets (subordination), denoted by $\leq$, that helps to determine exactly when and how one notion (filter, prefilter, etc.) can or cannot be used in place of another. This preorder's importance is amplified by the fact that it also defines the notion of filter convergence, where by definition, a filter (or prefilter) $\mathcal{B}$ converges to a point if and only if $\mathcal{N} \leq \mathcal{B}$, where $\mathcal{N}$ is that point's neighborhood filter. Consequently, subordination also plays an important role in many concepts that are related to convergence, such as cluster points and limits of functions. In addition, the relation $\mathcal{S} \geq \mathcal{B}$, which denotes $\mathcal{B} \leq \mathcal{S}$ and is expressed by saying that $\mathcal{S}$ is subordinate to $\mathcal{B}$, also establishes a relationship in which $\mathcal{S}$ is to $\mathcal{B}$ as a subsequence is to a sequence (that is, the relation $\geq$, which is called subordination, is for filters the analog of "is a subsequence of"). Filters were introduced by Henri Cartan in 1937 and subsequently used by Bourbaki in their book Topologie Générale as an alternative to the similar notion of a net developed in 1922 by E. H. Moore and H. L. Smith. Filters can also be used to characterize the notions of sequence and net convergence. But unlike sequence and net convergence, filter convergence is defined in terms of subsets of the topological space and so it provides a notion of convergence that is completely intrinsic to the topological space; indeed, the category of topological spaces can be equivalently defined entirely in terms of filters. Every net induces a canonical filter and dually, every filter induces a canonical net, where this induced net (resp. induced filter) converges to a point if and only if the same is true of the original filter (resp. net). This characterization also holds for many other definitions such as cluster points. These relationships make it possible to switch between filters and nets, and they often also allow one to choose whichever of these two notions (filter or net) is more convenient for the problem at hand. However, assuming that "subnet" is defined using either of its most popular definitions (which are those given by Willard and by Kelley), then in general, this relationship does not extend to subordinate filters and subnets because, as detailed below, there exist subordinate filters whose filter/subordinate-filter relationship cannot be described in terms of the corresponding net/subnet relationship; this issue can however be resolved by using a less commonly encountered definition of "subnet", which is that of an AA-subnet. 
Thus filters/prefilters and this single preorder $\leq$ provide a framework that seamlessly ties together fundamental topological concepts such as topological spaces (via neighborhood filters), neighborhood bases, convergence, various limits of functions, continuity, compactness, sequences (via sequential filters), the filter equivalent of "subsequence" (subordination), uniform spaces, and more; concepts that otherwise seem relatively disparate and whose relationships are less clear. Motivation Archetypical example of a filter The archetypical example of a filter is the neighborhood filter $\mathcal{N}(x)$ at a point $x$ in a topological space $X$, which is the family of sets consisting of all neighborhoods of $x$. By definition, a neighborhood of some given point $x$ is any subset $B \subseteq X$ whose topological interior contains this point; that is, such that $x \in \operatorname{Int} B$. Importantly, neighborhoods are not required to be open sets; those that are open are called open neighborhoods. Listed below are those fundamental properties of neighborhood filters that ultimately became the definition of a "filter." A filter on $X$ is a set $\mathcal{B}$ of subsets of $X$ that satisfies all of the following conditions: Not empty: $X \in \mathcal{B}$ – just as $X \in \mathcal{N}(x)$, since $X$ is always a neighborhood of $x$ (and of anything else that it contains); Does not contain the empty set: $\varnothing \notin \mathcal{B}$ – just as no neighborhood of $x$ is empty; Closed under finite intersections: If $B, C \in \mathcal{B}$ then $B \cap C \in \mathcal{B}$ – just as the intersection of any two neighborhoods of $x$ is again a neighborhood of $x$; Upward closed: If $B \in \mathcal{B}$ and $B \subseteq S \subseteq X$ then $S \in \mathcal{B}$ – just as any subset of $X$ that includes a neighborhood of $x$ will necessarily be a neighborhood of $x$ (this follows from $\operatorname{Int} B \subseteq \operatorname{Int} S$ and the definition of "a neighborhood of $x$"). Generalizing sequence convergence by using sets − determining sequence convergence without the sequence A sequence in $X$ is by definition a map $\mathbb{N} \to X$ from the natural numbers into the space $X$. The original notion of convergence in a topological space was that of a sequence converging to some given point in a space, such as a metric space. With metrizable spaces (or more generally first-countable spaces or Fréchet–Urysohn spaces), sequences usually suffice to characterize, or "describe", most topological properties, such as the closures of subsets or continuity of functions. But there are many spaces where sequences cannot be used to describe even basic topological properties like closure or continuity. This failure of sequences was the motivation for defining notions such as nets and filters, which never fail to characterize topological properties. Nets directly generalize the notion of a sequence since nets are, by definition, maps from an arbitrary directed set into the space $X$. A sequence is just a net whose domain is $\mathbb{N}$ with the natural ordering. Nets have their own notion of convergence, which is a direct generalization of sequence convergence. Filters generalize sequence convergence in a different way by considering only the values of a sequence. To see how this is done, consider a sequence $x_{\bullet} = (x_i)_{i=1}^{\infty}$, which is by definition just a function whose value at $i$ is denoted by $x_i$ rather than by the usual parentheses notation $x(i)$ that is commonly used for arbitrary functions. Knowing only the image (sometimes called "the range") of the sequence is not enough to characterize its convergence; multiple sets are needed. It turns out that the needed sets are the following, which are called the tails of the sequence $x_{\bullet}$: the sets $x_{\geq n} = \{x_n, x_{n+1}, x_{n+2}, \ldots\}$ for $n \in \mathbb{N}$. These sets completely determine this sequence's convergence (or non-convergence) because given any point, this sequence converges to it if and only if for every neighborhood $U$ (of this point), there is some integer $n$ such that $U$ contains all of the points $x_n, x_{n+1}, \ldots$. This can be reworded as: every neighborhood $U$ must contain some set of the form $\{x_n, x_{n+1}, \ldots\}$ as a subset. Or more briefly: every neighborhood must contain some tail as a subset. 
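The tail characterization just given admits a compact formal statement. The following display is a reconstruction sketch; the symbol $\operatorname{Tails}(x_{\bullet})$ for the family of tails is supplied here and is not from the original text:
\[
\operatorname{Tails}(x_{\bullet}) := \{ x_{\geq n} : n \in \mathbb{N} \}, \qquad
x_{\bullet} \to x \;\iff\; \forall N \in \mathcal{N}(x),\ \exists n \in \mathbb{N} : x_{\geq n} \subseteq N \;\iff\; \mathcal{N}(x) \leq \operatorname{Tails}(x_{\bullet}),
\]
where $\leq$ is the subordination preorder discussed later: $\mathcal{N}(x) \leq \mathcal{B}$ means that every $N \in \mathcal{N}(x)$ contains some $B \in \mathcal{B}$ as a subset.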
It is this characterization that can be used with the above family of tails to determine convergence (or non-convergence) of the sequence Specifically, with the family of in hand, the is no longer needed to determine convergence of this sequence (no matter what topology is placed on ). By generalizing this observation, the notion of "convergence" can be extended from sequences/functions to families of sets. The above set of tails of a sequence is in general not a filter but it does "" a filter via taking its (which consists of all supersets of all tails). The same is true of other important families of sets such as any neighborhood basis at a given point, which in general is also not a filter but does generate a filter via its upward closure (in particular, it generates the neighborhood filter at that point). The properties that these families share led to the notion of a , also called a , which by definition is any family having the minimal properties necessary and sufficient for it to generate a filter via taking its upward closure. Nets versus filters − advantages and disadvantages Filters and nets each have their own advantages and drawbacks and there's no reason to use one notion exclusively over the other. Depending on what is being proved, a proof may be made significantly easier by using one of these notions instead of the other. Both filters and nets can be used to completely characterize any given topology. Nets are direct generalizations of sequences and can often be used similarly to sequences, so the learning curve for nets is typically much less steep than that for filters. However, filters, and especially ultrafilters, have many more uses outside of topology, such as in set theory, mathematical logic, model theory (ultraproducts, for example), abstract algebra, combinatorics, dynamics, order theory, generalized convergence spaces, Cauchy spaces, and in the definition and use of hyperreal numbers. Like sequences, nets are and so they have the . For example, like sequences, nets can be "plugged into" other functions, where "plugging in" is just function composition. Theorems related to functions and function composition may then be applied to nets. One example is the universal property of inverse limits, which is defined in terms of composition of functions rather than sets and it is more readily applied to functions like nets than to sets like filters (a prominent example of an inverse limit is the Cartesian product). Filters may be awkward to use in certain situations, such as when switching between a filter on a space and a filter on a dense subspace In contrast to nets, filters (and prefilters) are families of and so they have the . For example, if is surjective then the under of an arbitrary filter or prefilter is both easily defined and guaranteed to be a prefilter on 's domain, whereas it is less clear how to pullback (unambiguously/without choice) an arbitrary sequence (or net) so as to obtain a sequence or net in the domain (unless is also injective and consequently a bijection, which is a stringent requirement). Similarly, the intersection of any collection of filters is once again a filter whereas it is not clear what this could mean for sequences or nets. Because filters are composed of subsets of the very topological space that is under consideration, topological set operations (such as closure or interior) may be applied to the sets that constitute the filter. Taking the closure of all the sets in a filter is sometimes useful in functional analysis for instance. 
Theorems and results about images or preimages of sets under a function may also be applied to the sets that constitute a filter; an example of such a result might be one of continuity's characterizations in terms of preimages of open/closed sets or in terms of the interior/closure operators. Special types of filters called ultrafilters have many useful properties that can significantly help in proving results. One downside of nets is their dependence on the directed sets that constitute their domains, which in general may be entirely unrelated to the space In fact, the class of nets in a given set is too large to even be a set (it is a proper class); this is because nets in can have domains of any cardinality. In contrast, the collection of all filters (and of all prefilters) on is a set whose cardinality is no larger than that of Similar to a topology on a filter on is "intrinsic to " in the sense that both structures consist of subsets of and neither definition requires any set that cannot be constructed from (such as or other directed sets, which sequences and nets require). Preliminaries, notation, and basic notions In this article, upper case Roman letters like and denote sets (but not families unless indicated otherwise) and will denote the power set of A subset of a power set is called a family of sets (or simply, a family) where it is over if it is a subset of Families of sets will be denoted by upper case calligraphy letters such as , , and . Whenever these assumptions are needed, then it should be assumed that is non-empty and that etc. are families of sets over The terms "prefilter" and "filter base" are synonyms and will be used interchangeably. Warning about competing definitions and notation There are unfortunately several terms in the theory of filters that are defined differently by different authors. These include some of the most important terms such as "filter." While different definitions of the same term usually have significant overlap, due to the very technical nature of filters (and point–set topology), these differences in definitions nevertheless often have important consequences. When reading mathematical literature, it is recommended that readers check how the terminology related to filters is defined by the author. For this reason, this article will clearly state all definitions as they are used. Unfortunately, not all notation related to filters is well established and some notation varies greatly across the literature (for example, the notation for the set of all prefilters on a set) so in such cases this article uses whatever notation is most self-describing or easily remembered. The theory of filters and prefilters is well developed and has a plethora of definitions and notations, many of which are now unceremoniously listed to prevent this article from becoming prolix and to allow for the easy look up of notation and definitions. Their important properties are described later. Sets operations The upward closure or isotonization in of a family of sets is and similarly the downward closure of is Throughout, is a map. Topology notation Denote the set of all topologies on a set Suppose is any subset, and is any point. If then Nets and their tails A directed set is a set together with a preorder, which will be denoted by $\leq$ (unless explicitly indicated otherwise), that makes into an (upward) directed set; this means that for all there exists some such that For any indices the notation is defined to mean while is defined to mean that holds but it is not true that (if is antisymmetric then this is equivalent to ). 
A net in $X$ is a map from a non-empty directed set into $X$. The notation will be used to denote a net with domain Warning about using strict comparison If is a net and then it is possible for the set which is called the tail of the net after , to be empty (for example, this happens if is an upper bound of the directed set ). In this case, the family would contain the empty set, which would prevent it from being a prefilter (defined later). This is the (important) reason for defining as rather than or even and it is for this reason that in general, when dealing with the prefilter of tails of a net, the strict inequality may not be used interchangeably with the inequality Filters and prefilters The following is a list of properties that a family of sets may possess and they form the defining properties of filters, prefilters, and filter subbases. Whenever it is necessary, it should be assumed that Many of the properties of defined above and below, such as "proper" and "directed downward," do not depend on so mentioning the set is optional when using such terms. Definitions involving being "upward closed in " such as that of "filter on " do depend on so the set should be mentioned if it is not clear from context. There are no prefilters on (nor are there any nets valued in ), which is why this article, like most authors, will automatically assume without comment that whenever this assumption is needed. Basic examples Named examples The singleton set is called the trivial or indiscrete filter on It is the unique minimal filter on because it is a subset of every filter on ; however, it need not be a subset of every prefilter on The dual ideal is also called the degenerate filter on (despite not actually being a filter). It is the only dual ideal on that is not a filter on If is a topological space and then the neighborhood filter at is a filter on By definition, a family is called a neighborhood basis (resp. a neighborhood subbase) at if and only if is a prefilter (resp. is a filter subbase) and the filter on that generates is equal to the neighborhood filter The subfamily of open neighborhoods is a filter base for Both prefilters also form bases for topologies on with the topology generated being coarser than This example immediately generalizes from neighborhoods of points to neighborhoods of non-empty subsets is an elementary prefilter if for some sequence of points is an elementary filter or a sequential filter on if is a filter on generated by some elementary prefilter. The filter of tails generated by a sequence that is not eventually constant is necessarily not an ultrafilter. Every principal filter on a countable set is sequential as is every cofinite filter on a countably infinite set. The intersection of finitely many sequential filters is again sequential. The set of all cofinite subsets of (meaning those sets whose complement in is finite) is proper if and only if is infinite (or equivalently, is infinite), in which case is a filter on known as the Fréchet filter or the cofinite filter on If is finite then is equal to the dual ideal which is not a filter. If is infinite then the family of complements of singleton sets is a filter subbase that generates the Fréchet filter on As with any family of sets over that contains the kernel of the Fréchet filter on is the empty set: The intersection of all elements in any non-empty family of filters is itself a filter on called the infimum or greatest lower bound of which is why it may be denoted by Said differently, Because every filter on has as a subset, this intersection is never empty. 
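For concreteness, the tails of a net and the reason for preferring $\geq$ over $>$ can be sketched as follows (the notation is supplied here, since the original symbols were lost in extraction):
\[
x_{\geq i} := \{ x_j : j \in I,\ j \geq i \}, \qquad \operatorname{Tails}(x_{\bullet}) := \{ x_{\geq i} : i \in I \}.
\]
If $i$ is a greatest element of the directed set $I$ then $x_{>i} = \varnothing$, so the family $\{ x_{>i} : i \in I \}$ could contain the empty set and thereby fail to be a prefilter, whereas $\operatorname{Tails}(x_{\bullet})$ never contains the empty set.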
By definition, the infimum is the finest/largest (relative to ) filter contained as a subset of each member of If are filters then their infimum in is the filter If are prefilters then is a prefilter that is coarser than both (that is, ); indeed, it is one of the finest such prefilters, meaning that if is a prefilter such that then necessarily More generally, if are non−empty families and if then and is a greatest element of Let and let The or of denoted by is the smallest (relative to ) dual ideal on containing every element of as a subset; that is, it is the smallest (relative to ) dual ideal on containing as a subset. This dual ideal is where is the -system generated by As with any non-empty family of sets, is contained in a filter on if and only if it is a filter subbase, or equivalently, if and only if is a filter on in which case this family is the smallest (relative to ) filter on containing every element of as a subset and necessarily Let and let The or of denoted by if it exists, is by definition the smallest (relative to ) filter on containing every element of as a subset. If it exists then necessarily (as defined above) and will also be equal to the intersection of all filters on containing This supremum of exists if and only if the dual ideal is a filter on The least upper bound of a family of filters may fail to be a filter. Indeed, if contains at least two distinct elements then there exist filters for which there does not exist a filter that contains both If is not a filter subbase then the supremum of does not exist and the same is true of its supremum in but their supremum in the set of all dual ideals on will exist (it being the degenerate filter ). If are prefilters (resp. filters on ) then is a prefilter (resp. a filter) if and only if it is non-degenerate (or said differently, if and only if mesh), in which case it is one of the coarsest prefilters (resp. the coarsest filter) on that is finer (with respect to ) than both this means that if is any prefilter (resp. any filter) such that then necessarily in which case it is denoted by Other examples Let and let which makes a prefilter and a filter subbase that is not closed under finite intersections. Because is a prefilter, the smallest prefilter containing is The -system generated by is In particular, the smallest prefilter containing the filter subbase is equal to the set of all finite intersections of sets in The filter on generated by is All three of the -system it generates, and are examples of fixed, principal, ultra prefilters that are principal at the point is also an ultrafilter on Let be a topological space, and define where is necessarily finer than If is non-empty (resp. non-degenerate, a filter subbase, a prefilter, closed under finite unions) then the same is true of If is a filter on then is a prefilter but not necessarily a filter on although is a filter on equivalent to The set of all dense open subsets of a (non-empty) topological space is a proper -system and so also a prefilter. 
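As a sketch of the infimum/supremum statements above, with notation supplied here for two filters $\mathcal{F}$ and $\mathcal{G}$ on $X$ (these formulas are a reconstruction, not from the original text):
\[
\mathcal{F} \wedge \mathcal{G} = \{ F \cup G : F \in \mathcal{F},\ G \in \mathcal{G} \}, \qquad
\mathcal{F} \vee \mathcal{G} = \{ F \cap G : F \in \mathcal{F},\ G \in \mathcal{G} \},
\]
where the infimum $\mathcal{F} \wedge \mathcal{G}$ is always a filter, while the supremum $\mathcal{F} \vee \mathcal{G}$ is a filter if and only if it does not contain $\varnothing$, that is, if and only if $\mathcal{F}$ and $\mathcal{G}$ mesh.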
If the space is a Baire space, then the set of all countable intersections of dense open subsets is a -system and a prefilter that is finer than If (with ) then the set of all such that has finite Lebesgue measure is a proper -system and a free prefilter that is also a proper subset of The prefilters and are equivalent and so generate the same filter on Since is a Baire space, every countable intersection of sets in is dense in (and also comeagre and non-meager) so the set of all countable intersections of elements of is a prefilter and -system; it is also finer than, and not equivalent to, Ultrafilters There are many other characterizations of "ultrafilter" and "ultra prefilter," which are listed in the article on ultrafilters. Important properties of ultrafilters are also described in that article. The ultrafilter lemma The following important theorem is due to Alfred Tarski (1930). A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it. Assuming the axioms of Zermelo–Fraenkel (ZF), the ultrafilter lemma follows from the Axiom of choice (in particular from Zorn's lemma) but is strictly weaker than it. The ultrafilter lemma implies the Axiom of choice for finite sets. If dealing with Hausdorff spaces, then most basic results (as encountered in introductory courses) in Topology (such as Tychonoff's theorem for compact Hausdorff spaces and the Alexander subbase theorem) and in functional analysis (such as the Hahn–Banach theorem) can be proven using only the ultrafilter lemma; the full strength of the axiom of choice might not be needed. Kernels The kernel is useful in classifying properties of prefilters and other families of sets. If then and this set is also equal to the kernel of the -system that is generated by In particular, if is a filter subbase then the kernels of all of the following sets are equal: (1) (2) the -system generated by and (3) the filter generated by If is a map then Equivalent families have equal kernels. Two principal families are equivalent if and only if their kernels are equal. Classifying families by their kernels If is a principal filter on then and and is also the smallest prefilter that generates Family of examples: For any non-empty the family is free but it is a filter subbase if and only if no finite union of the form covers in which case the filter that it generates will also be free. In particular, is a filter subbase if is countable (for example, the primes), a meager set in a set of finite measure, or a bounded subset of If is a singleton set then is a subbase for the Fréchet filter on Characterizing fixed ultra prefilters If a family of sets is fixed (that is, ) then is ultra if and only if some element of is a singleton set, in which case will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter is ultra if and only if is a singleton set. Every filter on that is principal at a single point is an ultrafilter, and if in addition is finite, then there are no ultrafilters on other than these. The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point. Finer/coarser, subordination, and meshing The preorder that is defined below is of fundamental importance for the use of prefilters (and filters) in topology. 
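A sketch of the kernel and of one standard ultrafilter characterization referred to above (notation supplied here):
\[
\ker \mathcal{B} := \bigcap_{B \in \mathcal{B}} B; \qquad
\text{a filter } \mathcal{F} \text{ on } X \text{ is an ultrafilter} \iff \text{for every } S \subseteq X,\ \text{either } S \in \mathcal{F} \text{ or } X \setminus S \in \mathcal{F}.
\]
A family is fixed when $\ker \mathcal{B} \neq \varnothing$ and free when $\ker \mathcal{B} = \varnothing$, which is the dichotomy used in the classification above.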
For instance, this preorder is used to define the prefilter equivalent of "subsequence", where "" can be interpreted as " is a subsequence of " (so "subordinate to" is the prefilter equivalent of "subsequence of"). It is also used to define prefilter convergence in a topological space. The definition of meshes with which is closely related to the preorder is used in topology to define cluster points. Two families of sets and are said to mesh, indicated by writing if If do not mesh then they are dissociated. If then are said to if mesh, or equivalently, if the trace of which is the family does not contain the empty set, where the trace is also called the restriction of Example: If is a subsequence of then is subordinate to in symbols: and also Stated in plain English, the prefilter of tails of a subsequence is always subordinate to that of the original sequence. To see this, let be arbitrary (or equivalently, let be arbitrary) and it remains to show that this set contains some For the set to contain it is sufficient to have Since are strictly increasing integers, there exists such that and so holds, as desired. Consequently, The left hand side will be a subset of the right hand side if (for instance) every point of is unique (that is, when is injective) and is the even-indexed subsequence because under these conditions, every tail (for every ) of the subsequence will belong to the right hand side filter but not to the left hand side filter. For another example, if is any family then always holds and furthermore, A non-empty family that is coarser than a filter subbase must itself be a filter subbase. Every filter subbase is coarser than both the -system that it generates and the filter that it generates. If are families such that the family is ultra, and then is necessarily ultra. It follows that any family that is equivalent to an ultra family will necessarily be ultra. In particular, if is a prefilter then either both and the filter it generates are ultra or neither one is ultra. The relation is reflexive and transitive, which makes it into a preorder on The relation is antisymmetric but if has more than one point then it is not symmetric. Equivalent families of sets The preorder induces its canonical equivalence relation on where for all is equivalent to if any of the following equivalent conditions hold: The upward closures of are equal. Two upward closed (in ) subsets of are equivalent if and only if they are equal. If then necessarily and is equivalent to Every equivalence class other than contains a unique representative (that is, element of the equivalence class) that is upward closed in Properties preserved between equivalent families Let be arbitrary and let be any family of sets. If are equivalent (which implies that ) then for each of the statements/properties listed below, either it is true of both or else it is false of both : Not empty Proper (that is, is not an element) Moreover, any two degenerate families are necessarily equivalent. Filter subbase Prefilter In which case generate the same filter on (that is, their upward closures in are equal). Free Principal Ultra Is equal to the trivial filter In words, this means that the only subset of that is equivalent to the trivial filter is the trivial filter. In general, this conclusion of equality does not extend to non−trivial filters (one exception is when both families are filters). Meshes with Is finer than Is coarser than Is equivalent to Missing from the above list is the word "filter" because this property is not preserved by equivalence. 
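The subordination and mesh relations used throughout can be sketched formally (the symbols $\leq$ and $\#$ are supplied here):
\[
\mathcal{C} \leq \mathcal{F} \;\iff\; \text{for every } C \in \mathcal{C} \text{ there exists } F \in \mathcal{F} \text{ such that } F \subseteq C,
\]
\[
\mathcal{B} \,\#\, \mathcal{C} \;\iff\; B \cap C \neq \varnothing \ \text{ for all } B \in \mathcal{B} \text{ and } C \in \mathcal{C}.
\]
With this notation, the subsequence example reads $\operatorname{Tails}(x_{\bullet}) \leq \operatorname{Tails}\big(x_{n_1}, x_{n_2}, \ldots\big)$: the tails of the subsequence form the finer (subordinate) family.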
However, if are filters on then they are equivalent if and only if they are equal; this characterization does not extend to prefilters. Equivalence of prefilters and filter subbases If is a prefilter on then the following families are always equivalent to each other: ; the -system generated by ; the filter on generated by ; and moreover, these three families all generate the same filter on (that is, the upward closures in of these families are equal). In particular, every prefilter is equivalent to the filter that it generates. By transitivity, two prefilters are equivalent if and only if they generate the same filter. Every prefilter is equivalent to exactly one filter on which is the filter that it generates (that is, the prefilter's upward closure). Said differently, every equivalence class of prefilters contains exactly one representative that is a filter. In this way, filters can be considered as just being distinguished elements of these equivalence classes of prefilters. A filter subbase that is not also a prefilter cannot be equivalent to the prefilter (or filter) that it generates. In contrast, every prefilter is equivalent to the filter that it generates. This is why prefilters can, by and large, be used interchangeably with the filters that they generate while filter subbases cannot. Set theoretic properties and constructions relevant to topology Trace and meshing If is a prefilter (resp. filter) on then the trace of which is the family is a prefilter (resp. a filter) if and only if mesh (that is, ), in which case the trace of is said to be . The trace is always finer than the original family; that is, If is ultra and if mesh then the trace is ultra. If is an ultrafilter on then the trace of is a filter on if and only if For example, suppose that is a filter on is such that Then mesh and generates a filter on that is strictly finer than When prefilters mesh Given non-empty families the family satisfies and If is proper (resp. a prefilter, a filter subbase) then this is also true of both In order to make any meaningful deductions about from needs to be proper (that is, ), which is the motivation for the definition of "mesh". In this case, is a prefilter (resp. filter subbase) if and only if this is true of both Said differently, if are prefilters then they mesh if and only if is a prefilter. Generalizing gives a well known characterization of "mesh" entirely in terms of subordination (that is, ): Two prefilters (resp. filter subbases) mesh if and only if there exists a prefilter (resp. filter subbase) such that and If the least upper bound of two filters exists in then this least upper bound is equal to Images and preimages under functions Throughout, will be maps between non-empty sets. Images of prefilters Let Many of the properties that may have are preserved under images of maps; notable exceptions include being upward closed, being closed under finite intersections, and being a filter, which are not necessarily preserved. Explicitly, if one of the following properties is true of then it will necessarily also be true of (although possibly not on the codomain unless is surjective): ultra, ultrafilter, filter, prefilter, filter subbase, dual ideal, upward closed, proper/non-degenerate, ideal, closed under finite unions, downward closed, directed upward. 
Moreover, if is a prefilter then so are both The image under a map of an ultra set is again ultra and if is an ultra prefilter then so is If is a filter then is a filter on the range but it is a filter on the codomain if and only if is surjective. Otherwise it is just a prefilter on and its upward closure must be taken in to obtain a filter. The upward closure of is where if is upward closed in (that is, a filter) then this simplifies to: If then taking to be the inclusion map shows that any prefilter (resp. ultra prefilter, filter subbase) on is also a prefilter (resp. ultra prefilter, filter subbase) on Preimages of prefilters Let Under the assumption that is surjective: is a prefilter (resp. filter subbase, -system, closed under finite unions, proper) if and only if this is true of However, if is an ultrafilter on then even if is surjective (which would make a prefilter), it is nevertheless still possible for the prefilter to be neither ultra nor a filter on If is not surjective then denote the trace of by in which particular case the trace satisfies: and consequently also: This last equality and the fact that the trace is a family of sets over means that to draw conclusions about the trace can be used in place of and the can be used in place of For example: is a prefilter (resp. filter subbase, -system, proper) if and only if this is true of In this way, the case where is not (necessarily) surjective can be reduced down to the case of a surjective function (which is a case that was described at the start of this subsection). Even if is an ultrafilter on if is not surjective then it is nevertheless possible that which would make degenerate as well. The next characterization shows that degeneracy is the only obstacle. If is a prefilter then the following are equivalent: is a prefilter; is a prefilter; ; meshes with and moreover, if is a prefilter then so is If and if denotes the inclusion map then the trace of is equal to This observation allows the results in this subsection to be applied to investigating the trace on a set. Subordination is preserved by images and preimages The relation is preserved under both images and preimages of families of sets. This means that for families Moreover, the following relations always hold for families of sets : where equality will hold if is surjective. Furthermore, If then and where equality will hold if is injective. Products of prefilters Suppose is a family of one or more non-empty sets, whose product will be denoted by and for every index let denote the canonical projection. Let be non−empty families, also indexed by such that for each The product of the families is defined identically to how the basic open subsets of the product topology are defined (had all of these been topologies). That is, both the notations denote the family of all cylinder subsets such that for all but finitely many and where for any one of these finitely many exceptions (that is, for any such that necessarily ). When every is a filter subbase then the family is a filter subbase for the filter on generated by If is a filter subbase then the filter on that it generates is called the product filter. If every is a prefilter on then will be a prefilter on and moreover, this prefilter is equal to the coarsest prefilter such that for every However, may fail to be a filter on even if every is a filter on Convergence, limits, and cluster points Throughout, is a topological space. Prefilters vs. filters
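A sketch of the image of a family under a map $f : X \to Y$, with the upward closure written explicitly (the notation is supplied here and is not from the original text):
\[
f(\mathcal{B}) := \{ f(B) : B \in \mathcal{B} \}, \qquad
f(\mathcal{B})^{\uparrow Y} := \{ S \subseteq Y : f(B) \subseteq S \text{ for some } B \in \mathcal{B} \},
\]
so if $\mathcal{B}$ is a filter on $X$ then $f(\mathcal{B})$ is a prefilter on $Y$ and $f(\mathcal{B})^{\uparrow Y}$ is the filter on $Y$ that it generates.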
With respect to maps and subsets, the property of being a prefilter is in general more well behaved and better preserved than the property of being a filter. For instance, the image of a prefilter under some map is again a prefilter; but the image of a filter under a non-surjective map is not a filter on the codomain, although it will be a prefilter. The situation is the same with preimages under non-injective maps (even if the map is surjective). If is a proper subset then any filter on will not be a filter on although it will be a prefilter. One advantage that filters have is that they are distinguished representatives of their equivalence class (relative to ), meaning that any equivalence class of prefilters contains a unique filter. This property may be useful when dealing with equivalence classes of prefilters (for instance, they are useful in the construction of completions of uniform spaces via Cauchy filters). The many properties that characterize ultrafilters are also often useful. They are used to, for example, construct the Stone–Čech compactification. The use of ultrafilters generally requires that the ultrafilter lemma be assumed. But in the many fields where the axiom of choice (or the Hahn–Banach theorem) is assumed, the ultrafilter lemma necessarily holds and does not require an additional assumption. A note on intuition Suppose that is a non-principal filter on an infinite set. Such a filter has one "upward" property (that of being closed upward) and one "downward" property (that of being directed downward). Starting with any there always exists some that is a subset of ; this may be continued ad infinitum to get a sequence of sets in with each being a subset of The same is not true going "upward", for if then there is no set in that contains as a proper subset. Thus when it comes to limiting behavior (which is a topic central to the field of topology), going "upward" leads to a dead end, while going "downward" is typically fruitful. So to gain understanding and intuition about how filters (and prefilters) relate to concepts in topology, the "downward" property is usually the one to concentrate on. This is also why so many topological properties can be described by using only prefilters, rather than requiring filters (which only differ from prefilters in that they are also upward closed). The "upward" property of filters is less important for topological intuition but it is sometimes useful to have for technical reasons. For example, with respect to every filter subbase is contained in a unique smallest filter but there may not exist a unique smallest prefilter containing it. Limits and convergence A family is said to converge to a point of if Explicitly, means that every neighborhood contains some as a subset (that is, ); thus the following then holds: In words, a family converges to a point or subset if and only if it is finer than the neighborhood filter at A family converging to a point may be indicated by writing and saying that is a limit of ; if this limit is a point (and not a subset), then is also called a limit point. As usual, is defined to mean that and is the only limit point of that is, if also (If the notation "" did not also require that the limit point be unique then the equals sign would no longer be guaranteed to be transitive). The set of all limit points of is denoted by In the above definitions, it suffices to check that is finer than some (or equivalently, finer than every) neighborhood base in of the point (for example, such as or when ). 
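The convergence definition just given can be sketched in symbols (supplied here, since the originals were lost in extraction):
\[
\mathcal{B} \to x \;\iff\; \mathcal{N}(x) \leq \mathcal{B} \;\iff\; \text{every } N \in \mathcal{N}(x) \text{ contains some } B \in \mathcal{B} \text{ as a subset},
\]
and $\lim \mathcal{B} = x$ is written when, in addition, $x$ is the only point to which $\mathcal{B}$ converges.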
Examples If is Euclidean space and denotes the Euclidean norm (which is the distance from the origin, defined as usual), then all of the following families converge to the origin: the prefilter of all open balls centered at the origin, where the prefilter of all closed balls centered at the origin, where This prefilter is equivalent to the one above. the prefilter where is a union of spheres centered at the origin having progressively smaller radii. This family consists of the sets as ranges over the positive integers. any of the families above but with the radius ranging over (or over any other positive decreasing sequence) instead of over all positive reals. Drawing or imagining any one of these sequences of sets when has dimension suggests that intuitively, these sets "should" converge to the origin (and indeed they do). This is the intuition that the above definition of a "convergent prefilter" makes rigorous. Although was assumed to be the Euclidean norm, the example above remains valid for any other norm on The one and only limit point in of the free prefilter is since every open ball around the origin contains some open interval of this form. The fixed prefilter does not converge in to any point (and so ), although it does converge to the set since However, not every fixed prefilter converges to its kernel. For instance, the fixed prefilter also has kernel but does not converge (in ) to it. The free prefilter of intervals does not converge (in ) to any point. The same is also true of the prefilter because it is equivalent to and equivalent families have the same limits. In fact, if is any prefilter in any topological space then for every More generally, because the only neighborhood of is itself (that is, ), every non-empty family (including every filter subbase) converges to For any point its neighborhood filter always converges to More generally, any neighborhood basis at converges to A point is always a limit point of the principal ultra prefilter and of the ultrafilter that it generates. The empty family does not converge to any point. Basic properties If converges to a point then the same is true of any family finer than This has many important consequences. One consequence is that the limit points of a family are the same as the limit points of its upward closure: In particular, the limit points of a prefilter are the same as the limit points of the filter that it generates. Another consequence is that if a family converges to a point then the same is true of the family's trace/restriction to any given subset of If is a prefilter and then converges to a point of if and only if this is true of the trace If a filter subbase converges to a point then so do the filter and the -system that it generates, although the converse is not guaranteed. For example, the filter subbase does not converge to in although the (principal ultra) filter that it generates does. Given the following are equivalent for a prefilter converges to converges to There exists a family equivalent to that converges to Because subordination is transitive, if and moreover, for every both and the maximal/ultrafilter converge to Thus every topological space induces a canonical convergence defined by At the other extreme, the neighborhood filter is the smallest (that is, coarsest) filter on that converges to that is, any filter converging to must contain as a subset. Said differently, the family of filters that converge to consists exactly of those filters on that contain as a subset. 
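The Euclidean example above can be made concrete; the prefilter of open balls (notation supplied here) is
\[
\mathcal{B} := \{ B_r : r > 0 \}, \qquad B_r := \{ y \in \mathbb{R}^n : \|y\| < r \}, \qquad \mathcal{B} \to 0,
\]
since every neighborhood of the origin contains some ball $B_r$. Restricting the radii to $r = 1/m$ for $m \in \mathbb{N}$ yields an equivalent (countable) prefilter with the same limit, illustrating why equivalent families have the same limit points.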
Consequently, the finer the topology on the fewer prefilters exist that have any limit points in Cluster points A family is said to cluster at a point of if it meshes with the neighborhood filter of that is, if Explicitly, this means that and every neighborhood of In particular, a point is a cluster point or an accumulation point of a family if meshes with the neighborhood filter at The set of all cluster points of is denoted by where the subscript may be dropped if not needed. In the above definitions, it suffices to check that meshes with some (or equivalently, meshes with every) neighborhood base in of When is a prefilter then the definition of " mesh" can be characterized entirely in terms of the subordination preorder Two equivalent families of sets have the exact same limit points and also the same cluster points. No matter the topology, for every both and the principal ultrafilter cluster at If clusters at a point then the same is true of any family coarser than Consequently, the cluster points of a family are the same as the cluster points of its upward closure: In particular, the cluster points of a prefilter are the same as the cluster points of the filter that it generates. Given the following are equivalent for a prefilter : clusters at The family generated by clusters at There exists a family equivalent to that clusters at for every neighborhood of If is a filter on then for every neighborhood There exists a prefilter subordinate to (that is, ) that converges to This is the filter equivalent of " is a cluster point of a sequence if and only if there exists a subsequence converging to ". In particular, if is a cluster point of a prefilter then is a prefilter subordinate to that converges to The set of all cluster points of a prefilter satisfies Consequently, the set of all cluster points of a prefilter is a closed subset of This also justifies the notation for the set of cluster points. In particular, if is non-empty (so that is a prefilter) then since both sides are equal to Properties and relationships Just like sequences and nets, it is possible for a prefilter on a topological space of infinite cardinality to not have cluster points or limit points. If is a limit point of then is necessarily a limit point of any family finer than (that is, if then ). In contrast, if is a cluster point of then is necessarily a cluster point of any family coarser than (that is, if mesh and then mesh). Equivalent families and subordination Any two equivalent families can be used interchangeably in the definitions of "limit of" and "cluster at" because their equivalency guarantees that if and only if and also that if and only if In essence, the preorder is incapable of distinguishing between equivalent families. Given two prefilters, whether or not they mesh can be characterized entirely in terms of subordination. Thus the two most fundamental concepts relating (pre)filters to Topology (that is, limit and cluster points) can both be defined in terms of the subordination relation. This is why the preorder is of such great importance in applying (pre)filters to Topology. Limit and cluster point relationships and sufficient conditions Every limit point of a non-degenerate family is also a cluster point; in symbols: This is because if is a limit point of then mesh, which makes a cluster point of But in general, a cluster point need not be a limit point. 
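A sketch of the cluster point definition in symbols (supplied here):
\[
x \text{ is a cluster point of } \mathcal{B} \;\iff\; \mathcal{B} \,\#\, \mathcal{N}(x) \;\iff\; B \cap N \neq \varnothing \ \text{ for all } B \in \mathcal{B} \text{ and } N \in \mathcal{N}(x),
\]
and, under the assumption that $\mathcal{B}$ is a prefilter, the set of cluster points can be written as $\bigcap_{B \in \mathcal{B}} \operatorname{cl} B$, an intersection of closed sets, which is one way to see that it is closed.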
For instance, every point in any given non-empty subset is a cluster point of the principal prefilter (no matter what topology is on ) but if is Hausdorff and has more than one point then this prefilter has no limit points; the same is true of the filter that this prefilter generates. However, every cluster point of an ultra prefilter is a limit point. Consequently, the limit points of an ultra prefilter are the same as its cluster points: that is to say, a given point is a cluster point of an ultra prefilter if and only if converges to that point. Although a cluster point of a filter need not be a limit point, there will always exist a finer filter that does converge to it; in particular, if clusters at then is a filter subbase whose generated filter converges to If is a filter subbase such that then In particular, any limit point of a filter subbase subordinate to is necessarily also a cluster point of If is a cluster point of a prefilter then is a prefilter subordinate to that converges to If and if is a prefilter on then every cluster point of belongs to and any point in is a limit point of a filter on Primitive sets A subset is called primitive if it is the set of limit points of some ultrafilter (or equivalently, some ultra prefilter). That is, if there exists an ultrafilter such that is equal to which recall denotes the set of limit points of Since limit points are the same as cluster points for ultra prefilters, a subset is primitive if and only if it is equal to the set of cluster points of some ultra prefilter For example, every closed singleton subset is primitive. The image of a primitive subset of under a continuous map is contained in a primitive subset of Assume that are two primitive subsets of If is an open subset of that intersects then for any ultrafilter such that In addition, if are distinct then there exists some and some ultrafilters such that and Other results If is a complete lattice then: The limit inferior of is the infimum of the set of all cluster points of The limit superior of is the supremum of the set of all cluster points of is a convergent prefilter if and only if its limit inferior and limit superior agree; in this case, the value on which they agree is the limit of the prefilter. Limits of functions defined as limits of prefilters Suppose is a map from a set into a topological space and If is a limit point (respectively, a cluster point) of then is called a limit of with respect to (respectively, a cluster point of with respect to ). Explicitly, is a limit of with respect to if and only if which can be written as (by definition of this notation) and stated as If the limit is unique then the arrow may be replaced with an equals sign The neighborhood filter can be replaced with any family equivalent to it and the same is true of The definition of a convergent net is a special case of the above definition of a limit of a function. Specifically, if is a net then where the left hand side states that is a limit while the right hand side states that is a limit with respect to (as just defined above). The table below shows how various types of limits encountered in analysis and topology can be defined in terms of the convergence of images (under ) of particular prefilters on the domain This shows that prefilters provide a general framework into which many of the various definitions of limits fit. The limits in the left-most column are defined in their usual way with their obvious definitions.
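The table itself did not survive extraction; the following rows are a partial, hedged reconstruction of the standard correspondences it presumably contained:

$$x_i \to y \text{ as } i \to \infty \quad\iff\quad x_\bullet(\mathcal{B}) \to y, \text{ where } \mathcal{B} = \{\{i, i+1, \ldots\} : i \in \mathbb{N}\},$$
$$f(x) \to y \text{ as } x \to x_0 \quad\iff\quad f(\mathcal{B}) \to y, \text{ where } \mathcal{B} = \{(x_0 - r, x_0 + r) \setminus \{x_0\} : r > 0\},$$
$$f(x) \to y \text{ as } x \to \infty \quad\iff\quad f(\mathcal{B}) \to y, \text{ where } \mathcal{B} = \{(r, \infty) : r \in \mathbb{R}\}.$$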
Throughout, let be a map between topological spaces, If is Hausdorff then all arrows in the table may be replaced with equal signs and may be replaced with By defining different prefilters, many other notions of limits can be defined; for example, Divergence to infinity Divergence of a real-valued function to infinity can be defined/characterized by using the prefilters where along if and only if and similarly, along if and only if The family can be replaced by any family equivalent to it, such as for instance (in real analysis, this would correspond to replacing the strict inequality in the definition with and the same is true of and So for example, if then if and only if holds. Similarly, if and only if or equivalently, if and only if More generally, if is valued in (or some other seminormed vector space) and if then if and only if holds, where Filters and nets This section will describe the relationships between prefilters and nets in great detail because of how important these details are in applying filters to topology − particularly in switching from utilizing nets to utilizing filters and vice versa. Nets to prefilters In the definitions below, the first statement is the standard definition of a limit point of a net (respectively, a cluster point of a net) and it is gradually reworded until the corresponding filter concept is reached. If is a map and is a net in then Prefilters to nets A pointed set is a pair consisting of a non-empty set and an element For any family let Define a canonical preorder on pointed sets by declaring There is a canonical map defined by If then the tail of the assignment starting at is Although is not, in general, a partially ordered set, it is a directed set if (and only if) is a prefilter. So the most immediate choice for the definition of "the net in induced by a prefilter " is the assignment from into If is a prefilter on is a net in and the prefilter associated with is ; that is: This would not necessarily be true had been defined on a proper subset of If is a net in then it is in general not true that is equal to because, for example, the domain of may be of a completely different cardinality than that of (since unlike the domain of the domain of an arbitrary net in could have any cardinality). Partially ordered net The domain of the canonical net is in general not partially ordered. However, in 1955 Bruns and Schmidt discovered a construction (detailed here: Filter (set theory)#Partially ordered net) that allows for the canonical net to have a domain that is both partially ordered and directed; this was independently rediscovered by Albert Wilansky in 1970. Because the tails of this partially ordered net are identical to the tails of (since both are equal to the prefilter ), there is typically nothing lost by assuming that the domain of the net associated with a prefilter is both directed and partially ordered. It can further be assumed that the partially ordered domain is also a dense order. Subordinate filters and subnets The notion of " is subordinate to " (written ) is for filters and prefilters what " is a subsequence of " is for sequences. For example, if denotes the set of tails of and if denotes the set of tails of the subsequence (where ) then (which by definition means ) is true but is in general false. If is a net in a topological space and if is the neighborhood filter at a point then If is a surjective open map, and is a prefilter on that converges to then there exists a prefilter on such that and is equivalent to (that is, ).
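A hedged sketch of the net-to-prefilter correspondence described above, in standard notation: for a net $x_\bullet : I \to X$, the prefilter of tails is

$$\operatorname{Tails}(x_\bullet) = \{x_{\geq i} : i \in I\}, \qquad x_{\geq i} := \{x_j : j \geq i\},$$

and $x_\bullet \to x$ in $X$ if and only if $\operatorname{Tails}(x_\bullet) \to x$.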
Subordination analogs of results involving subsequences The following results are the prefilter analogs of statements involving subsequences. The condition "" which is also written is the analog of " is a subsequence of " So "finer than" and "subordinate to" are the prefilter analogs of "subsequence of." Some people prefer saying "subordinate to" instead of "finer than" because it is more reminiscent of "subsequence of." Non-equivalence of subnets and subordinate filters Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet." The first definition of a subnet ("Kelley-subnet") was introduced by John L. Kelley in 1955. Stephen Willard introduced in 1970 his own variant ("Willard-subnet") of Kelley's definition of subnet. AA-subnets were introduced independently by Smiley (1957), Aarnes and Andenaes (1972), and Murdeshwar (1983); AA-subnets were studied in great detail by Aarnes and Andenaes but they are not often used. A subset of a preordered space is frequent or cofinal in if for every there exists some such that If contains a tail of then is said to be eventual in ; explicitly, this means that there exists some such that (that is, for all satisfying ). A subset is eventual if and only if its complement is not frequent (which is termed infrequent). A map between two preordered sets is order-preserving if whenever satisfy then Kelley did not require the map to be order preserving while the definition of an AA-subnet does away entirely with any map between the two nets' domains and instead focuses entirely on − the nets' common codomain. Every Willard-subnet is a Kelley-subnet and both are AA-subnets. In particular, if is a Willard-subnet or a Kelley-subnet of then Example: If and is a constant sequence and if and then is an AA-subnet of but it is neither a Willard-subnet nor a Kelley-subnet of AA-subnets have a defining characterization that immediately shows that they are fully interchangeable with sub(ordinate)filters. Explicitly, what is meant is that the following statement is true for AA-subnets: If are prefilters then if and only if is an AA-subnet of If "AA-subnet" is replaced by "Willard-subnet" or "Kelley-subnet" then the above statement becomes false. In particular, as this counter-example demonstrates, the problem is that the following statement is in general false: If are prefilters such that is a Kelley-subnet of Since every Willard-subnet is a Kelley-subnet, this statement thus remains false if the word "Kelley-subnet" is replaced with "Willard-subnet". If "subnet" is defined to mean Willard-subnet or Kelley-subnet then nets and filters are not completely interchangeable because there exist filter–sub(ordinate)filter relationships that cannot be expressed in terms of a net–subnet relationship between the two induced nets. In particular, the problem is that Kelley-subnets and Willard-subnets are not fully interchangeable with subordinate filters. If the notion of "subnet" is not used or if "subnet" is defined to mean AA-subnet, then this ceases to be a problem and so it becomes correct to say that nets and filters are interchangeable. Despite the fact that AA-subnets do not have the problem that Willard and Kelley subnets have, they are not widely used or known about. Topologies and prefilters Throughout, is a topological space.
Examples of relationships between filters and topologies Bases and prefilters Let be a family of sets that covers and define for every The definition of a base for some topology can be immediately reworded as: is a base for some topology on if and only if is a filter base for every If is a topology on and then the definitions of is a basis (resp. subbase) for can be reworded as: is a base (resp. subbase) for if and only if for every is a filter base (resp. filter subbase) that generates the neighborhood filter of at Neighborhood filters The archetypical example of a filter is the set of all neighborhoods of a point in a topological space. Any neighborhood basis of a point in (or of a subset of) a topological space is a prefilter. In fact, the definition of a neighborhood base can be equivalently restated as: "a neighborhood base is any prefilter that is equivalent to the neighborhood filter." Neighborhood bases at points are examples of prefilters that are fixed but may or may not be principal. If has its usual topology and if then any neighborhood filter base of is fixed by (in fact, it is even true that ) but is not principal since In contrast, a topological space has the discrete topology if and only if the neighborhood filter of every point is a principal filter generated by exactly one point. This shows that a non-principal filter on an infinite set is not necessarily free. The neighborhood filter of every point in a topological space is fixed since its kernel contains (and possibly other points if, for instance, is not a T1 space). This is also true of any neighborhood basis at For any point in a T1 space (for example, a Hausdorff space), the kernel of the neighborhood filter of is equal to the singleton set However, it is possible for a neighborhood filter at a point to be principal but not discrete (that is, not principal at a single point). A neighborhood basis of a point in a topological space is principal if and only if the kernel of is an open set. If in addition the space is T1 then so that this basis is principal if and only if is an open set. Generating topologies from filters and prefilters Suppose is not empty (and ). If is a filter on then is a topology on but the converse is in general false. This shows that in a sense, filters are almost topologies. Topologies of the form where is an ultrafilter on are an even more specialized subclass of such topologies; they have the property that every proper subset is open or closed, but (unlike the discrete topology) never both. These spaces are, in particular, examples of door spaces. If is a prefilter (resp. filter subbase, π-system, proper) on then the same is true of both and the set of all possible unions of one or more elements of If is closed under finite intersections then the set is a topology on with both being bases for it. If the π-system covers then both are also bases for If is a topology on then is a prefilter (or equivalently, a π-system) if and only if it has the finite intersection property (that is, it is a filter subbase), in which case a subset will be a basis for if and only if is equivalent to in which case will be a prefilter.
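As a hedged sketch of the stripped construction (presumably the standard one): if $\mathcal{F}$ is a filter on $X$ then $\tau := \{\varnothing\} \cup \mathcal{F}$ is a topology on $X$, since $\varnothing, X \in \tau$, finite intersections of members of $\mathcal{F}$ remain in $\mathcal{F}$, and any union of members of $\mathcal{F}$ is a superset of a member and so belongs to $\mathcal{F}$ by upward closure. The converse fails because a topology need not be upward closed.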
Topological properties and prefilters Neighborhoods and topologies The neighborhood filter of a nonempty subset in a topological space is equal to the intersection of all neighborhood filters of all points in A subset is open in if and only if whenever is a filter on and then Suppose are topologies on Then is finer than (that is, ) if and only if whenever is a filter on if then Consequently, if and only if for every filter and every if and only if However, it is possible that while also for every filter converges to a point of if and only if converges to a point of Closure If is a prefilter on a subset then every cluster point of belongs to If is a non-empty subset, then the following are equivalent: is a limit point of a prefilter on Explicitly: there exists a prefilter such that is a limit point of a filter on There exists a prefilter such that The prefilter meshes with the neighborhood filter Said differently, is a cluster point of the prefilter The prefilter meshes with some (or equivalently, with every) filter base for (that is, with every neighborhood basis at ). The following are equivalent: is a limit point of There exists a prefilter such that Closed sets If is not empty then the following are equivalent: is a closed subset of If is a prefilter on such that then If is a prefilter on such that is an accumulation point of then If is such that the neighborhood filter meshes with then Hausdorffness The following are equivalent: is a Hausdorff space. Every prefilter on converges to at most one point in The above statement but with the word "prefilter" replaced by any one of the following: filter, ultra prefilter, ultrafilter. Compactness As discussed in this article, the Ultrafilter Lemma is closely related to many important theorems involving compactness. The following are equivalent: is a compact space. Every ultrafilter on converges to at least one point in That this condition implies compactness can be proven by using only the ultrafilter lemma. That compactness implies this condition can be proven without the ultrafilter lemma (or even the axiom of choice). The above statement but with the word "ultrafilter" replaced by "ultra prefilter". For every filter there exists a filter such that and converges to some point of The above statement but with each instance of the word "filter" replaced by: prefilter. Every filter on has at least one cluster point in That this condition is equivalent to compactness can be proven by using only the ultrafilter lemma. The above statement but with the word "filter" replaced by "prefilter". Alexander subbase theorem: There exists a subbase such that every cover of by sets in has a finite subcover. That this condition is equivalent to compactness can be proven by using only the ultrafilter lemma. If is the set of all complements of compact subsets of a given topological space then is a filter on if and only if is not compact. Continuity Let be a map between topological spaces Given the following are equivalent: is continuous at Definition: For every neighborhood of there exists some neighborhood of such that If is a filter on such that then The above statement but with the word "filter" replaced by "prefilter". The following are equivalent: is continuous. If is a prefilter on such that then If is a limit point of a prefilter then is a limit point of Any one of the above two statements but with the word "prefilter" replaced by "filter".
If is a prefilter on is a cluster point of is continuous, then is a cluster point in of the prefilter A subset of a topological space is dense in if and only if for every the trace of the neighborhood filter along does not contain the empty set (in which case it will be a filter on ). Suppose is a continuous map into a Hausdorff regular space and that is a dense subset of a topological space Then has a continuous extension if and only if for every the prefilter converges to some point in Furthermore, this continuous extension will be unique whenever it exists. Products Suppose is a non-empty family of non-empty topological spaces and that is a family of prefilters where each is a prefilter on Then the product of these prefilters (defined above) is a prefilter on the product space which as usual, is endowed with the product topology. If then if and only if Suppose are topological spaces, is a prefilter on having as a cluster point, and is a prefilter on having as a cluster point. Then is a cluster point of in the product space However, if then there exist sequences such that both of these sequences have a cluster point in but the sequence does not have a cluster point in Example application: The ultrafilter lemma along with the axioms of ZF imply Tychonoff's theorem for compact Hausdorff spaces: Let be compact topological spaces. Assume that the ultrafilter lemma holds (because of the Hausdorff assumption, this proof does not need the full strength of the axiom of choice; the ultrafilter lemma suffices). Let be given the product topology (which makes a Hausdorff space) and for every let denote this product's projections. If then is compact and the proof is complete so assume Despite the fact that because the axiom of choice is not assumed, the projection maps are not guaranteed to be surjective. Let be an ultrafilter on and for every let denote the ultrafilter on generated by the ultra prefilter Because is compact and Hausdorff, the ultrafilter converges to a unique limit point (because of 's uniqueness, this definition does not require the axiom of choice). Let where satisfies for every The characterization of convergence in the product topology that was given above implies that Thus every ultrafilter on converges to some point of which implies that is compact (recall that this implication's proof only required the ultrafilter lemma). Examples of applications of prefilters Uniformities and Cauchy prefilters A uniform space is a set equipped with a filter on that has certain properties. A base of entourages or a fundamental system of entourages is a prefilter on whose upward closure is a uniformity. A prefilter on a uniform space with uniformity is called a Cauchy prefilter if for every entourage there exists some that is small, which means that A minimal Cauchy filter is a minimal element (with respect to or equivalently, to ) of the set of all Cauchy filters on Examples of minimal Cauchy filters include the neighborhood filter of any point Every convergent filter on a uniform space is Cauchy. Moreover, every cluster point of a Cauchy filter is a limit point. A uniform space is called complete (resp. sequentially complete) if every Cauchy prefilter (resp. every elementary Cauchy prefilter) on converges to at least one point of (replacing all instances of the word "prefilter" with "filter" results in an equivalent statement). Every compact uniform space is complete because any Cauchy filter has a cluster point (by compactness), which is necessarily also a limit point (since the filter is Cauchy).
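A hedged reconstruction of the Cauchy condition in standard notation: a prefilter $\mathcal{B}$ on a uniform space $(X, \Phi)$ is Cauchy if

$$\text{for every entourage } U \in \Phi \text{ there exists } B \in \mathcal{B} \text{ such that } B \times B \subseteq U;$$

in a metric space this says that $\mathcal{B}$ contains sets of arbitrarily small diameter.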
Uniform spaces were the result of attempts to generalize notions such as "uniform continuity" and "uniform convergence" that are present in metric spaces. Every topological vector space, and more generally, every topological group can be made into a uniform space in a canonical way. Every uniformity also generates a canonical induced topology. Filters and prefilters play an important role in the theory of uniform spaces. For example, the completion of a Hausdorff uniform space (even if it is not metrizable) is typically constructed by using minimal Cauchy filters. Nets are less ideal for this construction because their domains are extremely varied (for example, the class of all Cauchy nets is not a set); sequences cannot be used in the general case because the topology might not be metrizable, first-countable, or even sequential. The set of all minimal Cauchy filters on a Hausdorff topological vector space (TVS) can be made into a vector space and topologized in such a way that it becomes a completion of (with the assignment becoming a linear topological embedding that identifies as a dense vector subspace of this completion). More generally, a Cauchy space is a pair consisting of a set together with a family of (proper) filters, whose members are declared to be "Cauchy", having all of the following properties: For each the discrete ultrafilter at is an element of If is a subset of a proper filter then If and if each member of intersects each member of then The set of all Cauchy filters on a uniform space forms a Cauchy space. Every Cauchy space is also a convergence space. A map between two Cauchy spaces is called Cauchy continuous if the image of every Cauchy filter in is a Cauchy filter in Unlike the category of topological spaces, the category of Cauchy spaces and Cauchy continuous maps is Cartesian closed, and contains the category of proximity spaces. Topologizing the set of prefilters Starting with nothing more than a set it is possible to topologize the set of all filter bases on with the Stone topology, which is named after Marshall Harvey Stone. To reduce confusion, this article will adhere to the following notational conventions: Lower case letters for elements Upper case letters for subsets Upper case calligraphy letters for subsets (or equivalently, for elements such as prefilters). Upper case double-struck letters for subsets For every let where These sets will be the basic open subsets of the Stone topology. If then From this inclusion, it is possible to deduce all of the subset inclusions displayed below with the exception of For all where in particular, the equality shows that the family is a π-system that forms a basis for a topology on called the Stone topology. It is henceforth assumed that carries this topology and that any subset of carries the induced subspace topology. In contrast to most other general constructions of topologies (for example, the product, quotient, subspace topologies, etc.), this topology on was defined without using anything other than the set ; there were no preexisting structures or assumptions on so this topology is completely independent of everything other than (and its subsets). The following criteria can be used for checking for points of closure and neighborhoods. If then: : belongs to the closure of if and only if : is a neighborhood of if and only if there exists some such that (that is, such that for all ). It will be henceforth assumed that because otherwise and the topology is which is uninteresting.
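A hedged sketch of the stripped identity, assuming the usual definition $\mathbb{O}(S) := \{\mathcal{B} : S \in \mathcal{B}^{\uparrow X}\}$ for $S \subseteq X$: for all $R, S \subseteq X$,

$$\mathbb{O}(R) \cap \mathbb{O}(S) = \mathbb{O}(R \cap S),$$

since if $R, S \in \mathcal{B}^{\uparrow X}$ then some $B_1, B_2 \in \mathcal{B}$ satisfy $B_1 \subseteq R$ and $B_2 \subseteq S$, and a filter base contains some $B_3 \subseteq B_1 \cap B_2 \subseteq R \cap S$; the reverse inclusion follows from upward closure. This equality is what makes the family of basic open sets a $\pi$-system and hence a base.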
Subspace of ultrafilters The set of ultrafilters on (with the subspace topology) is a Stone space, meaning that it is compact, Hausdorff, and totally disconnected. If has the discrete topology then the map defined by sending to the principal ultrafilter at is a topological embedding whose image is a dense subset of (see the article Stone–Čech compactification for more details). Relationships between topologies on and the Stone topology on Every induces a canonical map defined by which sends to the neighborhood filter of If then if and only if Thus every topology can be identified with the canonical map which allows to be canonically identified as a subset of (as a side note, it is now possible to place on and thus also on the topology of pointwise convergence on so that it now makes sense to talk about things such as sequences of topologies on converging pointwise). For every the surjection is always continuous, closed, and open, but it is injective if and only if (that is, a Kolmogorov space). In particular, for every topology the map is a topological embedding (said differently, every Kolmogorov space is a topological subspace of the space of prefilters). In addition, if is a map such that (which is true of for instance), then for every the set is a neighborhood (in the subspace topology) of See also Notes Proofs Citations References (Provides an introductory review of filters in topology and in metric spaces.) Filters General topology
Filters in topology
[ "Chemistry", "Mathematics", "Engineering" ]
15,179
[ "General topology", "Chemical equipment", "Filters", "Topology", "Filtration" ]
47,517,025
https://en.wikipedia.org/wiki/Cryspovirus
Cryspovirus is a genus of viruses in the family Partitiviridae. Protists serve as natural hosts. There is only one species in this genus: Cryptosporidium parvum virus 1. Cryptosporidium, a genus of Apicomplexan parasites, is known to cause human diarrheal illness. A bi-segmented dsRNA virus linked with Cryptosporidium was discovered and found to have similarities with picobirnaviruses and partitiviruses. This discovery led to the identification of a distinct virus called Cryptosporidium parvum virus 1 (CSpV1). It was suggested to be the sole partitivirus found in a protozoan host. Based on this, a new genus named Cryspovirus was proposed within the Partitiviridae family, which was subsequently approved by the ICTV Executive Committee in 2009. CSpV1, also known as Cryspovirus, is believed to be transmitted intracellularly through Cryptosporidium oocysts and is linked with persistent, mostly non-virulent infections. The virus features isometric virions and has a genome composed of two separate dsRNA molecules encoding the RNA-dependent RNA polymerase (RdRp) and the capsid protein (CP). Notably, the CP of CSpV1 is smaller than that of other partitiviruses, indicating a unique capsid structure. Biologically, CSpV1 appears to be primarily transmitted through intracellular methods and is associated with non-aggressive infections. Its impact on altering Cryptosporidium's pathogenicity remains to be fully understood. CSpV1 exhibits unique genomic and coding characteristics, with its dsRNA segments having distinct nucleotide sequences (often detected via PCR). The virus is believed to employ a non-standard mechanism for translation, and conserved sequences at the 3′ ends of its dsRNAs might be involved in replication or packaging processes. CSpV1 holds practical significance in the detection of Cryptosporidium in contaminated water supplies and in the epidemiological monitoring of Cryptosporidium infections. Structure Viruses in Cryspovirus are non-enveloped, with icosahedral geometries, and T=1 symmetry. The diameter is around 30-35 nm. Genomes are linear and segmented, around 2.1kb in length. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the double-stranded RNA virus replication model. Double-stranded RNA virus transcription is the method of transcription. Protists serve as the natural host. References External links ICTV Online Report Partitiviridae Viralzone: Cryspovirus Partitiviridae Virus genera Infectious diseases Virology Protista Diarrhea Double-stranded RNA viruses Sanitation Water treatment
Cryspovirus
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
585
[ "Protists", "Water treatment", "Water pollution", "Environmental engineering", "Water technology", "Eukaryotes" ]
47,517,355
https://en.wikipedia.org/wiki/Higrevirus
Higrevirus is a genus of viruses. Plants serve as natural hosts. There is currently only one species in this genus: the type species Hibiscus green spot virus 2. Structure Viruses in Higrevirus are non-enveloped, with bacilliform geometries. These viruses are about 30 nm wide and 50 nm long. Genomes are linear, segmented, and tripartite, with the three segments approximately 8.4, 3.2, and 3.1 kb in length. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by penetration. Replication follows the positive stranded RNA virus replication model. Positive stranded RNA virus transcription is the method of transcription. Plants serve as the natural host. References External links Viralzone: Higrevirus ICTV Positive-sense single-stranded RNA viruses Monotypic genera Virus genera Riboviria
Higrevirus
[ "Biology" ]
175
[ "Viruses", "Riboviria" ]
47,517,937
https://en.wikipedia.org/wiki/Lumped%20damage%20mechanics
Lumped damage mechanics or LDM is a branch of structural mechanics that is concerned with the analysis of frame structures. It is based on continuum damage mechanics and fracture mechanics, and it combines the ideas of these theories with the concept of the plastic hinge. LDM can be defined as the fracture mechanics of complex structural systems. In the models of LDM, cracking or local buckling as well as plasticity are lumped at the inelastic hinges. As in continuum damage mechanics, LDM uses state variables to represent the effects of damage on the remaining stiffness and strength of the frame structure. In reinforced concrete structures, the damage state variable quantifies the crack density in the plastic hinge zone; in unreinforced concrete components and steel beams, it is a dimensionless measure of the crack surface; in tubular steel elements, the damage variable measures the degree of local buckling. The LDM evolution laws can be derived from continuum damage mechanics or fracture mechanics. In the latter case, concepts such as the energy release rate or the stress intensity factor of a plastic hinge are introduced. LDM allows for the numerical simulation of the collapse of complex structures with a fraction of the computational cost and human effort of its continuum mechanics counterparts. LDM is also a regularization procedure that eliminates the mesh-dependence phenomenon that is observed in structural analysis with local damage models. In addition, the LDM method has been implemented in the finite element analysis of crack propagation of steel beam-to-column connections subjected to ultra-low cycle fatigue. References Continuum mechanics Materials degradation
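A minimal sketch of the lumped-damage idea, assuming the classic (1 - d) stiffness-reduction law of continuum damage mechanics applied to the rotational stiffness of a single inelastic hinge; all names and values below are illustrative, not taken from a specific LDM formulation:

# Minimal sketch of a lumped-damage hinge (illustrative assumptions, not a specific LDM code).
def damaged_hinge_stiffness(k0: float, d: float) -> float:
    """Rotational stiffness of an inelastic hinge with scalar damage d in [0, 1)."""
    if not 0.0 <= d < 1.0:
        raise ValueError("damage variable must satisfy 0 <= d < 1")
    return (1.0 - d) * k0  # undamaged stiffness scaled by the remaining integrity

def hinge_moment(k0: float, d: float, rotation: float) -> float:
    """Moment transmitted by the damaged hinge for a given hinge rotation."""
    return damaged_hinge_stiffness(k0, d) * rotation

k0 = 5.0e4  # kN*m/rad, illustrative undamaged rotational stiffness
print(hinge_moment(k0, d=0.4, rotation=0.01))  # 300.0 instead of the undamaged 500.0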
Lumped damage mechanics
[ "Physics", "Materials_science", "Engineering" ]
315
[ "Materials degradation", "Materials science", "Classical mechanics", "Continuum mechanics" ]
47,518,866
https://en.wikipedia.org/wiki/Fluentd
Fluentd is a cross-platform open-source data collection software project originally developed at Treasure Data. It is written primarily in the C programming language with a thin Ruby wrapper that gives users flexibility. Overview Fluentd was positioned for "big data": semi-structured or unstructured data sets. It analyzes event logs, application logs, and clickstreams. According to Suonsyrjä and Mikkonen, the "core idea of Fluentd is to be the unifying layer between different types of log inputs and outputs". Fluentd is available on Linux, macOS, and Windows. History Fluentd was created by Sadayuki Furuhashi as a project of the Mountain View-based firm Treasure Data. Written primarily in Ruby, its source code was released as open-source software in October 2011. The company announced $5 million of funding in 2013. Treasure Data was then sold to Arm Ltd. in 2018. Users Fluentd was one of the data collection tools recommended by Amazon Web Services in 2013, when it was said to be similar to Apache Flume or Scribe. Google Cloud Platform's BigQuery recommends Fluentd as the default real-time data-ingestion tool, and uses Google's customized version of Fluentd, called google-fluentd, as a default logging agent. Fluent Bit Fluent Bit is a log processor and log forwarder which is being developed as a CNCF sub-project under the umbrella of the Fluentd project. Fluentd is written in C and Ruby and consumes at least sixty megabytes of memory. Fluent Bit is written only in C, with no dependencies, and consumes approximately one megabyte of memory, making it easier to run under embedded Linux and in containers. References Further reading Goasguen, Sébastien (2014). 60 Recipes for Apache CloudStack: Using the CloudStack Ecosystem, "Chapter 6: Advanced Recipes". O'Reilly Media. Wilkins, Phil (2022). Logging in Action, With Fluentd, Kubernetes and more. Manning. External links Computer logging Data warehousing products Data security Data mining and machine learning software Free artificial intelligence applications Free science software Free data analysis software Free software programmed in Ruby Software using the Apache license System administration
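As a hedged illustration of this unifying-layer idea, the sketch below emits one structured event to a local Fluentd daemon using the fluent-logger Python package; the tag, host, and record fields are illustrative assumptions, not taken from the sources above:

# Illustrative sketch: send one structured log event to a local Fluentd instance.
# Assumes the fluent-logger package (pip install fluent-logger) and a daemon
# listening on the default forward port 24224; tag and record fields are examples.
from fluent import sender

logger = sender.FluentSender("app", host="localhost", port=24224)
if not logger.emit("follow", {"from": "userA", "to": "userB"}):
    print(logger.last_error)  # emit() returns False when the event could not be sent
logger.close()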
Fluentd
[ "Technology", "Engineering" ]
465
[ "Cybersecurity engineering", "System administration", "Computer logging", "Information systems", "Data security" ]
73,041,236
https://en.wikipedia.org/wiki/Poly%28phthalaldehyde%29
Poly(phthalaldehyde), abbreviated as PPA, is a metastable stimuli-responsive polymer first synthesized in 1967. It has garnered significant attention during the past couple of years due to its ease of synthesis and outstanding transient and mechanical properties. For this reason, it has been exploited for a variety of applications including sensing, drug delivery, and EUV lithography. As of 2023, it is considered the only aromatic aldehyde polymerized through a living chain growth polymerization. Discovery and history Poly(phthalaldehyde) was first reported in 1967 by Chuji Aso and Sanae Tagami from the department of Organic Synthesis at Kyushu University by an addition homopolymerization reaction of aromatic o-phthalaldehyde. This polymer, consisting of a polyacetal main chain, is still, to date, the only aromatic aldehyde that can be homopolymerized through a chain-growth polymerization method. It is a white brittle solid with a low ceiling temperature and significant self-immolative properties. It has gathered significant attention in recent years especially in the development of novel responsive materials and applications. Synthesis techniques Since its first inception in 1967, many synthesis techniques have been developed and employed for the polymerization of o-phthalaldehyde. Most notably, living polymerization methods are among the most common and promising techniques used, as can be seen in the high number of publications in the literature depicting their usage in poly(phthalaldehyde) preparation. Living cationic polymerization (LCP) History and main idea Aso and Tagami were the first to report the polymerization of o-phthalaldehyde in 1967 using the cationic living polymerization technique. This technique, which was initially thought to require the usage of a strong Brønsted acid to initiate polymerization in addition to a strong nucleophile to terminate polymerization and endcap the polymer chain, was proven successful in a number of polymerization processes reported earlier. Interestingly, the authors were able to produce this polymer without using an initiator or a terminator and determined the polymer's structure to be cyclic. In fact, they worked at liquid nitrogen temperature and relied on a boron trifluoride etherate catalyst, which was sufficient to produce a polymer stable enough at room temperature for a few days. Current trends In the following years, polymer chemists started studying the characteristics of this polymer and worked on enhancing its thermal stability and mechanical properties. In particular, Moore and coworkers conducted rigorous mechanistic studies on poly(phthalaldehyde) by modifying the type of catalyst used, as well as the starting monomer concentration, in an effort to control the molar mass, decrease the polydispersity index, and increase the polymer's purity. Among the catalysts used were triethyloxonium borofluoride, tin chloride, and triphenylmethylium tetrafluoroborate. Limitations While LCP was the first and sole method used to produce poly(phthalaldehyde), its usage nowadays has dramatically decreased in favor of other polymerization techniques which allow better control over the polymer properties including molar mass and thermal stability. Living anionic polymerization (LAP) History and main idea While this polymerization technique did not gain fame and popularity until 2010, it was also reported by Aso and Tagami in 1969.
In general, LAP involves the usage of a strong nucleophile to initiate polymerization in addition to the employment of an electrophile as a terminator to endcap the polymer chain. In Tagami's article, PPA was prepared by utilizing tert-butyllithium as an initiator and acetic anhydride as a terminator. However, the drawbacks faced when utilizing LCP (high polydispersity index (PDI), low yield, and no control over molecular weight) were also encountered in this polymerization technique. Current trends It was not until 1987 that two chemists, Hedrick and Schlemper, from the University of Freiburg proposed the use of phosphazene bases to speed up the reaction and lower the polydispersity index. Up until 2023, three different phosphazene bases have been used in PPA polymerization. Moreover, most of the published research articles describing PPA synthesis between 2008 and 2023 revolve around the usage of LAP, rendering it the most common and effective polymerization technique. Advantages The major advantage this polymerization technique presents over LCP lies in the fact that the polymer can be end capped on both sides of the chain with stimuli-responsive groups. The tuning process of PPA by these functional groups has not only expanded the set of applications this polymer can be used in, but has also improved its properties and attributes. For instance, by controlling the o-phthalaldehyde monomer/alcohol initiator concentration ratio, ultra-high molecular weight (50-150 kDa) PPA can be obtained. Furthermore, PPA synthesized through LAP is more thermally and mechanically stable. Generally, the presence of endcaps on both ends stabilizes the polymer and results in a more flexible chain with a high thermal stability. Because linear polymers synthesized by the LAP method can be end capped, whereas cyclic polymers prepared via the LCP method cannot be end capped with functional groups, LAP results in more thermally stable polymers. It has a much lower PDI, ranging between 1.3 and 1.9, as opposed to PPA synthesized through LCP, which has a PDI ranging between 2 and 4.5. This is because of the ability to control the character, molecular weight, and end group of the polymer. Furthermore, the initiator used in the LAP synthesis method, which is a strong nucleophile, acts as the first endcap, and hence by controlling the amount of initiator used, control over the molar mass and PDI can be obtained. This is in contrast to cyclic PPA, which is synthesized through LCP, where the initiator (Lewis acid) will not be part of the final PPA product, and hence controlling the amount of Lewis acid used will have little to no effect on the final molar mass and PDI of the cyclic PPA polymer. Coordinative polymerization (CP) Although a less known polymerization technique, coordinative polymerization has been used a few times in PPA preparation. It mostly requires the activation of transition metal catalysts with trimethylaluminum or diethyl aluminum chloride and allows control over the stereoselectivity of the compound. Another advantage of this technique lies within the usage of water as a co-catalyst in PPA synthesis, which is deemed impossible in other polymerization methods. Professor Hisaya Tani from the Department of Polymer Science at Osaka University was the first to report a stereospecific polymerization of o-phthalaldehyde by employing dimeric dimethylaluminumoxybenzylideneaniline [Me2AlOCMeNPh]2 as catalyst and water as a co-catalyst.
He was able to synthesize a fibrous PPA in an exclusively trans configuration, which had never been reported before. Nonetheless, due to the inability to endcap the polymer with functional groups, this technique is rarely utilized at present and the mechanism of formation of PPA remains ambiguous and not well studied. Types of poly(phthalaldehyde) Depending on the polymerization technique applied, two different types of poly(phthalaldehyde) can be acquired, linear and cyclic. Linear PPA Linear PPA is produced by anionic polymerization methods using a strong nucleophile as an initiator. This technique prevents the cyclization of the polymer chain as the propagating species have only one charged terminus that cannot backbite the other terminus which, in turn, is neutral in charge. Although processing linear PPA requires highly sensitive reaction conditions and is more time demanding, this type of polymer has many advantages over its cyclic counterpart. For instance, control over the polymer's molar mass can easily be achieved by controlling the monomer and alcohol initiator ratios. Furthermore, it has been proven to be more thermally stable than its cyclic counterpart due to the presence of functionalized endcaps that stabilize the polymer chain against depolymerization. For these reasons, it has been studied to a far greater extent than cyclic PPA. Various linear PPAs with distinct end groups have been reported and studied for a variety of applications including sensing, drug delivery, and lithography. For instance, once these end groups are cleaved as a response to the exposure of PPA to a specific stimulus, the polymer will sequentially disassemble from head to tail through an unzipping reaction to form the monomer in short times that can be as low as a few minutes. Cyclic PPA Cyclic PPA is obtained through a cationic polymerization of o-phthalaldehyde using a Lewis acid, typically boron trifluoride etherate, as an initiator. When Aso and Tagami first reported the successful synthesis of PPA using this technique in 1967, they were unaware of the fact that the polymer they prepared was cyclic and instead reported the structure as linear in their research paper. It was not until 2013 that polymer chemists proved that the structure is cyclic using a combination of characterization techniques including Nuclear Magnetic Resonance (NMR), Fourier Transform Infrared Spectroscopy (FT-IR), Gel Permeation Chromatography (GPC), and Mass Spectrometry (MS). Cyclic PPA is easy to synthesize; it is reported by Prof. Jeffrey Moore that the cationic polymerization of o-phthalaldehyde is very fast, yielding cyclic PPA within a few minutes. Furthermore, the polymer can be isolated without the addition of pyridine, methanol, or a strong base terminator, which in general makes this polymerization technique easy, fast, and cheap. Nevertheless, a known issue of this technique is the fact that the molecular weight cannot be controlled based on the initial concentration of the monomer used, which has typically led to cyclic PPA with a wide variety of molecular weights ranging between 3 kDa and 100 kDa using the same starting conditions. Furthermore, because of its cyclic structure, no end caps are used or needed. The absence of functionalized end caps in the structure has limited the usage of cyclic PPA, especially in stimuli-responsive applications. Properties and characteristics PPA is a metastable polymer known for its ease of synthesis and rapid depolymerization.
In addition, its properties can be easily influenced and manipulated upon either functionalizing the phthalaldehyde monomer with different groups, most efficiently electron-withdrawing groups, or employing different functional groups as end caps. Mechanical properties PPA is known to have a rigid and brittle backbone which limits its flexibility and usage in some applications. However, it can be easily tuned by adding additives, rendering it a soft material. The mechanical properties of cyclic PPA films drop cast using different solvents have recently been investigated. The study showed the polymer to possess a large elastic modulus of 2.5-3 GPa, which was also previously reported in another study, in addition to tensile strength values ranging between 25 and 35 MPa and a failure strain of 1-1.5% that is highly dependent on the solvent used. Plasticizers as additives With the surge in the usage of PPA during the past few years for various applications, the need to ameliorate the transient properties and enhance the mechanical features of this polymer has come to the surface. PPA is known to be brittle; it possesses a large storage modulus, and a glass transition temperature that is above its thermal degradation point, which renders the polymer unsuitable for a broad range of applications. One way to ameliorate its intrinsic properties is via the addition of a plasticizing agent that can disrupt the polymer's intermolecular packing, thus making it more flexible, decreasing its storage modulus, depressing its glass transition temperature, and increasing its shear strength. A few examples of plasticizers that have been used with PPA include dimethyl phthalate, bis(2-ethylhexyl) phthalate, diethyl adipate, and tri-isononyl trimellitate (TINTM). In a recent study, the effect of two ether-ester plasticizers on the mechanical flexibility and photo-transience speed of cyclic PPA was investigated. The authors were able to show that the addition of these additives broadened the storage modulus range and decreased it from 2300 MPa in the case of pure PPA down to 19 MPa in the PPA/plasticizer mixture, hence making the polymer more flexible and in need of less energy to be distorted. In another study published by the same research group, the effect of diethyl adipate (DEA) plasticizer on the glass transition temperature of cyclic PPA was investigated. After determining the glass transition temperature of pure PPA to be 187 °C, PPA films with various DEA concentrations were prepared. By varying the DEA concentration, the authors were able to depress Tg to 12.5 °C, demonstrating the importance of plasticizers in enhancing the mechanical flexibility and thermal properties of PPA. Similar results were previously observed where the thermal transitions were depressed from 95 °C for cPPA to 24 °C for diethyl phthalate (DEP)-plasticized cPPA. Among the few studies that have been reported on the usage of plasticizers with PPA, it has been noted that the usage of plasticizers results in a decrease in the tensile stress of the polymers, which indicates that PPA is becoming more flexible and hence the film can fold more easily. Nevertheless, control of the amount of plasticizer used is important. For instance, in the study discussed above, it has been reported that the usage of a large amount of plasticizer (more than 50% w/w in comparison with the PPA polymer) results in phase segregation and a decrease in the flexibility of the PPA film.
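The depression of Tg with plasticizer content can be estimated with the standard Fox mixing rule, 1/Tg = w1/Tg,1 + w2/Tg,2 with temperatures in kelvin; this generic model is not the analysis used in the cited studies, and the plasticizer Tg below is an assumed placeholder:

# Fox-equation estimate of the glass transition of a plasticized polymer blend.
# Illustrative only: the Fox rule is a generic model, not the method of the cited
# PPA studies, and tg_dea below is an assumed placeholder value.
def fox_tg(w_plasticizer: float, tg_polymer_k: float, tg_plasticizer_k: float) -> float:
    """Blend Tg in kelvin from the plasticizer weight fraction and component Tg values."""
    w_polymer = 1.0 - w_plasticizer
    return 1.0 / (w_polymer / tg_polymer_k + w_plasticizer / tg_plasticizer_k)

tg_ppa = 187.0 + 273.15  # pure cyclic PPA Tg reported above, converted to kelvin
tg_dea = -80.0 + 273.15  # assumed Tg for the diethyl adipate plasticizer
for w in (0.0, 0.1, 0.3, 0.5):
    print(f"w_DEA = {w:.1f}: Tg ~ {fox_tg(w, tg_ppa, tg_dea) - 273.15:.0f} C")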
Furthermore, the nature of the solvent used can highly affect the mechanical properties of PPA as well. In particular, in another study published in 2019, both the elastic modulus and tensile strength increased when dichloromethane was used as a solvent to drop-cast PPA in comparison to dioxane and chloroform. Thermal properties The thermal stability of PPA is highly dependent on whether the polymer is end-capped or isolated without end groups. Cyclic PPA, in addition to functionalized linear PPA chains, is known to be thermally stable up to 150 °C as determined by both Differential Scanning Calorimetry (DSC) and Thermogravimetric Analysis (TGA). Moreover, the polymer is known for its long-term shelf life, wherein it can be stored at room temperature for a significant amount of time. Various chemists have studied substitution effects on the thermal stability of PPA. For instance, scientists at the International Business Machines Corporation (IBM) concluded, after extensive studies, that o-phthalaldehyde monomers functionalized with chloro, bromo, and 4-trimethylsilyl functional groups result in highly stable PPA compared to the unsubstituted polymer. Similarly, Phillips et al. proved that the substituted and end-capped poly(4,5-dichlorophthalaldehyde) possesses higher thermal degradation temperatures than its unsubstituted counterparts. Chemical properties By means of controlling the identity and reactivity of the endcaps, PPA can withstand harsh chemical conditions with no significant changes in its structure. For instance, while functionalizing PPA with allyl acetate and tert-butyldimethylsilyl ether functional groups can lead to its rapid depolymerization in the presence of Pd(0) and F− respectively, a simple change in the nature of the endcaps will preserve the chain even in the presence of both corrosive agents. On a separate note, while PPA is insoluble in aqueous solvents and alcohols, it is highly soluble in organic solvents such as THF, DCM, and DMSO, where it can be dissolved for days without triggering depolymerization. Applications Due to its unique stability, chemical properties, and outstanding tunability and reactivity, PPA has been employed in a variety of applications. Photoresist The high solubility and stability of PPA in organic solvents have allowed its investigation as a base material in first-generation chemically amplified photoresists for lithography in the early 1980s by three scientists, Grant Willson, Jean Fréchet, and Hiroshi Ito, who were working at IBM at the time. The story of how this successful achievement started and progressed can be found in the review paper written by Hiroshi Ito. Because PPA by itself does not undergo complete depolymerization upon exposure to light, it is usually end-capped or used along with photoacid generators (PAGs) for enhanced sensitivity. In this case, depolymerization is triggered upon irradiation either by end-cap removal and self-immolation or by the generated acid. Ober et al. stated that the use of PPA as a photoresist under extreme ultraviolet (EUV) irradiation is yet to be successful due to the instability of PPA and the volatility of its monomers. However, they were able to report one of the first PPA derivatives without the use of PAGs with enhanced photoresist properties upon EUV exposure. Drug release Owing to its high reactivity and the ability to tune its endcap groups, PPA has lately been utilized in drug delivery applications.
In one recent study, UV-sensitive PPA microcapsules containing different types of drugs were prepared. Once the capsules were subjected to a UV-light trigger, an unzipping reaction took place and the shell ruptured, which led to the release of the core content of these microcapsules. A unique advantage of these microcapsules is that they allow the immediate release of the drug upon exposure to the trigger, rather than its continuous release over a period of time ranging from minutes to hours as other common microcapsules function. In an earlier publication, DiLauro et al. reported the ability to predesign and control the thickness of the microcapsule shells and the length of the PPA used to form the shell, which have stimuli-responsive endcaps allowing head-to-tail fluoride-triggered depolymerization. Sensing through depolymerization PPA is known as a self-immolative material which depolymerizes through endcap cleavage in response to a specific stimulus. For this reason, several PPA polymers with different endcaps have been synthesized and used as self-immolative materials for sensing toxic and specific compounds. Acid-triggered depolymerization Due to the presence of two types of oxygen atoms in the PPA backbone, in addition to the fact that H+ tends to protonate oxygen atoms easily, depolymerization can occur through both endcap cleavage and protonation of oxygen atoms present in the backbone. For this reason, polymer chemists tend to use endcaps rich in oxygen atoms to accelerate the depolymerization rate. For example, Moore and co-workers reported the use of a specific ion coactivation (SICA) effect that allowed the ion- and acid-coactivated depolymerization of cyclic PPA microcapsules at the solid/liquid interface of the polymer and solution. Fluoride-triggered depolymerization Silyl groups can be deprotected with fluoride ions, resulting in a strong Si-F bond that is hard to break. For this reason, different polymer chemists started to employ PPA in fluoride sensing by using t-butyldimethylsilyl (TBS) containing initiators and terminators. The fluoride sensing ability of PPA has been previously used in applications such as drug release, as previously reported by DiLauro et al. Another application studied by Phillips and co-workers includes the use of fluoride-triggered PPA depolymerization in changing the structure of plastics in a predetermined way. UV-light triggered depolymerization To demonstrate its capability of rapidly depolymerizing in the presence of UV-light, DiLauro et al. synthesized a PPA polymer with two UV-sensitive endcaps, 2-nitro-4,5-dimethoxybenzyl alcohol and 1-[[(chlorocarbonyl)oxy]methyl]-4,5-dimethoxy-2-nitrobenzene, and were able to achieve complete depolymerization in a few minutes. In a practical application in organic electronics, cyclic PPA in the presence of 2-(4-methoxystyryl)-4,6-bis(trichloromethyl)-1,3,5-triazine (MBTT, used as a PAG) undergoes depolymerization upon exposure to UV-light, which in turn deactivates the transient electronics. Another similar application in transient electronics was reported where an organic light-emitting diode (OLED) was integrated on the PPA substrate and can cause depolymerization in the presence of a PAG. Pd(0)-triggered depolymerization Apart from its usage in sensing acids and fluoride anions, PPA has been used in sensing Pd(0) metal by employing allyl chloroformate as a terminating end cap.
This has been reported by Phillips and his research group, who used an allyl formate endcap that depolymerized stoichiometrically within minutes upon exposure to a catalytic amount of tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4). Health and safety According to the safety data sheet of PPA, it should not be allowed to come into contact with the skin or eyes as it may lead to skin, eye, and respiratory irritation or allergic reactions. In addition, as some unfunctionalized PPAs are unstable at temperatures even lower than room temperature, it is important to note that PPA should be stored at temperatures below -10 °C under an inert atmosphere and away from sunlight, moisture, and heat, but with proper ventilation. Since the depolymerization of PPA is greatly studied in its applications, it is important to also note the possible safety concerns of its monomer. In addition to the abovementioned hazards of PPA, phthalaldehyde is very toxic if swallowed and toxic to aquatic life. References Polymer chemistry Smart materials Soft matter
Poly(phthalaldehyde)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,752
[ "Soft matter", "Materials science", "Condensed matter physics", "Polymer chemistry", "Smart materials" ]
73,041,563
https://en.wikipedia.org/wiki/HD%2093486
HD 93486, also known as HIP 52381, is a binary star located in the southern circumpolar constellation Chamaeleon near the border with Octans. Its variable star designation is RZ Chamaeleontis (RZ Cha). It has an apparent magnitude ranging from 8.2 to 9.1, which is below the limit for naked eye visibility. Gaia DR3 parallax measurements place the system 568 light years away, and it is currently receding with a heliocentric radial velocity of . At its current distance, HD 93486's average brightness is diminished by 0.53 magnitudes due to interstellar dust. The system has a combined absolute magnitude of +1.72. In 1964, HD 93486 was discovered to be an eclipsing binary by astronomer W. Strohmeier and colleagues. Four years later, the system was found to be an Algol variable and was given the variable star designation RZ Chamaeleontis in 1974. J. Andersen et al. (1975) calculated a circular orbit with a period of 2.8321 days, which is also its variability period. During this time, RZ Cha drops from photographic magnitude 8.2 to 9.1 when the smaller component is eclipsed, and to 8.8 when the larger one is eclipsed. Both components have a stellar classification of F5 IV-V, indicating that they are slightly evolved F-type stars with luminosity classes intermediate between a subgiant and a main-sequence star. The primary has 151% the mass of the Sun and 2.29 times the Sun's radius. The secondary has 140% the mass of the Sun and 2.21 times the radius of the Sun. Together, both stars radiate 7.94 times the luminosity of the Sun from their photospheres at an effective temperature of , giving it a combined yellowish-white hue. The system is metal-enriched, with an above-solar iron abundance, and is estimated to be 2 to 3 billion years old. Both stars spin modestly, with projected rotational velocities of and 41 km/s respectively. References Further reading F-type main-sequence stars F-type subgiants Algol variables Eclipsing binaries Chamaeleon Chamaeleontis, RZ CD-81 00391 093486 052381
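The quoted luminosities follow from the Stefan–Boltzmann law, L = 4πR²σT⁴; since the effective temperature value is missing from the text above, the sketch below uses 6500 K as an assumed placeholder typical of F5 stars, so its output is illustrative rather than a reproduction of the quoted 7.94 solar luminosities:

# Stefan-Boltzmann sketch: luminosity of each component from radius and temperature.
# The effective temperature is missing from the text; 6500 K is an assumed
# placeholder typical of F5 stars, so the printed value is illustrative only.
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8         # solar radius, m
L_SUN = 3.828e26        # solar luminosity, W

def luminosity_lsun(radius_rsun: float, teff_k: float) -> float:
    """Luminosity in solar units from radius (solar radii) and effective temperature (K)."""
    r = radius_rsun * R_SUN
    return 4.0 * math.pi * r**2 * SIGMA * teff_k**4 / L_SUN

teff = 6500.0  # assumed placeholder
total = luminosity_lsun(2.29, teff) + luminosity_lsun(2.21, teff)
print(f"combined luminosity ~ {total:.1f} L_sun")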
HD 93486
[ "Astronomy" ]
491
[ "Chamaeleon", "Constellations" ]
73,042,163
https://en.wikipedia.org/wiki/Emraclidine
Emraclidine (developmental code names CVL-231, PF-06852231) is an investigational antipsychotic for the treatment of both schizophrenia and Alzheimer's disease psychosis developed by Cerevel Therapeutics. As of August 2024, it is in phase 2 clinical trials. Emraclidine is a positive allosteric modulator that selectively targets the muscarinic acetylcholine receptor M4 subtype. The M4 receptor subtype is expressed in the striatum of the brain, which plays a key role in regulating acetylcholine and dopamine levels. An imbalance of these neurotransmitters has been linked to psychotic symptoms in schizophrenia. Unlike other muscarinic receptors, M4 receptor subtypes are selectively expressed in the striatum, and activation of these receptors has been shown to indirectly regulate dopamine levels without blocking D2/D3 receptors, whose blockade can produce the unwanted motor side effects seen in current antipsychotics. See also ML-007 NBI-1117568 NS-136 Xanomeline/trospium References Azetidines Carboxamides Experimental drugs developed for schizophrenia M4 receptor positive allosteric modulators Pyridines Pyrrolopyridines Trifluoromethyl compounds
Emraclidine
[ "Chemistry" ]
283
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
73,043,913
https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20%28Moldova%29
The Ministry of Energy () is one of the fourteen ministries of the Government of Moldova. The ministry was established on 16 February 2023 following the restructuring of the Ministry of Infrastructure and Regional Development. Currently, the Moldovan minister of energy is Dorin Recean. The Ministry of Energy aims for the energy independence of Moldova. It was created following a severe energy crisis in the country. List of ministers References Energy Moldova Ministries established in 2023
Ministry of Energy (Moldova)
[ "Engineering" ]
90
[ "Energy organizations", "Energy ministries" ]
73,043,984
https://en.wikipedia.org/wiki/8-Hydroxyhexahydrocannabinol
8-Hydroxyhexahydrocannabinols (8-OH-9α-HHC and 8-OH-9β-HHC) are active primary metabolites of hexahydrocannabinol (HHC) in animals and trace phytocannabinoids. The 8-OH-HHCs are produced in notable concentrations following HHC administration in several animal species, including humans. They have drawn research interest for their role in HHC toxicology and as stereoisomeric probes of the cannabinoid drug–receptor interaction. Like Δ9-THC and Δ8-THC, HHC is processed by cytochrome P450 enzymes (CYP3A4, CYP2C9 and CYP2C19) to a series of oxygenated derivatives, some of which retain activity. While 11-OH-HHC and its downstream products are the major metabolites of HHC, hydroxylation at C8 plays a role of varying significance across animal species. Metabolite ratios are also subject to interspecies variation, with one study finding that mouse hepatocytes preferentially produced 8α-OH-HHC (49/5 α/β) while hamster hepatocytes showed the opposite selectivity (20/43 α/β). While 11-OH-HHC is quickly oxidized to the inactive, water-soluble 11-COOH-HHC, further oxidation of the 8-OH metabolites instead yields the 8-oxo derivatives, which are then conjugated and excreted. Stereoisomerism There are four possible 8-OH-HHC metabolites arising from naturally derived HHCs: cis- and trans-8-OH-9α-HHC and cis- and trans-8-OH-9β-HHC. All four have been prepared synthetically to probe stereochemical effects on cannabinoid biological activity. In in vivo tests on rhesus macaques, Mechoulam and coworkers found the highest activity in the cis-8-OH-9β-HHC stereoisomer. All four forms are believed to be active. References Cannabinoids Human drug metabolites Recreational drug metabolites Benzochromenes Diols
8-Hydroxyhexahydrocannabinol
[ "Chemistry" ]
476
[ "Chemicals in medicine", "Human drug metabolites" ]
73,044,465
https://en.wikipedia.org/wiki/Scleroderma%20bermudense
Scleroderma bermudense is a species of Basidiomycete fungi in the family Sclerodermataceae. The species was first described by American botanist and mycologist William Chambers Coker in 1939. Range The species is indigenous to Bermuda, the Bahamas, Barbados, the Virgin Islands, Cuba, the Dominican Republic, Puerto Rico; the US state of Florida; and the Mexican states of Guerrero (including Acapulco), Quintana Roo, Veracruz, and Yucatán. It has been introduced accidentally along with its host tree in various tropical regions, including French Guiana, Senegal, and Réunion. Habitat Scleroderma bermudense is limited to the dune ecosystem of sandy beaches beneath its mycorrhizal host. Ecology Scleroderma bermudense is a mycorrhizal fungus associated with the seagrape Coccoloba uvifera. It has been found to reduce salt uptake in seagrape seedlings, thus facilitating the tree's ability to live on coastal beaches. Etymology The genus name comes from Greek sclero, meaning hard, and derma, meaning skin, and is the same as the name of a skin disease also characterized by hardened skin. The specific epithet bermudense refers to the type locality, Bermuda. This species does not have a common name in English. Taxonomy This species has previously been considered a synonym of Scleroderma stellatum, found in Brazil, but S. stellatum differs in having an echinulated peridium, which S. bermudense lacks. Conservation status Scleroderma bermudense has been proposed for Endangered status under criterion A3c because its habitat is subject to sea level rise. References External links Fungi described in 1939 Fungi of North America Fungi of Central America Puffballs Fungus species Scleroderma
Scleroderma bermudense
[ "Biology" ]
379
[ "Fungi", "Fungus species" ]
73,045,817
https://en.wikipedia.org/wiki/Pacific%20Drive%20%28video%20game%29
Pacific Drive is a 2024 survival game developed by Ironwood Studios and published by Kepler Interactive. The game is set in the Pacific Northwest, which the player traverses on foot or in a station wagon as they attempt to find a way to escape. It uses a first-person perspective; the player must attempt to avoid anomalies and obstacles. The vehicle can be repaired and customized at the player's garage. Development of Pacific Drive began in 2019 after the founding of Ironwood Studios. Creative director Cassandra Dracott conceived the idea while driving through the Olympic Peninsula. She considered creating the game independently but soon realized she would need a team, which she began building during the COVID-19 pandemic. Pacific Drive was announced in September 2022, and was released for the PlayStation 5 and Windows on February 22, 2024. It received positive reviews from critics, who praised its atmosphere, characters, and vehicle design, but criticized its repetitive and difficult gameplay. It garnered several award nominations, and sold over 600,000 copies within five months. A television adaptation is in development. Gameplay Pacific Drive is a survival game played from a first-person perspective. The game is set in 1998 in the Olympic Exclusion Zone, a fictionalized abandoned version of the Olympic Peninsula in Washington, United States, which the player traverses on foot or in a station wagon. The map is laid out in a graph of junctions that the player can visit with their station wagon; areas contain abandoned buildings and junk vehicles the player can scavenge for resources, which they can store in the trunk. Junctions also contain "anchors", anomalous devices holding large amounts of energy; once the player has collected enough of them, they can open a gateway back to the garage. After a certain amount of time spent in a junction, or after the gateway is opened, the junction begins to destabilize: the safe area shrinks and is replaced with an area that deals damage over time, forcing the player to leave as soon as possible. Initially the player may only visit adjacent junctions connected via roads; additional junctions become available once the player has successfully used a gateway from a neighboring junction, gradually opening up the map. The player can customize their vehicle in their auto shop, which acts as their base of operations, and perform repairs either in the auto shop or while traversing the world. Generic damage can be repaired by using a limited-use "repair putty" or a blowtorch, or by swapping a damaged part for an intact one. Parts can also develop specific damage that may need specialized one-time-use items to fix, such as patching flat tires using a sealant or fixing a broken spark plug; items used for these repairs can be crafted from resources found around the world. The garage's Fabrication Station harvests resources and creates blueprints or special items, including some that discover new routes, add fuel to the car, or destabilize a zone. Tools like a buzzsaw called the Scrapper or an "impact hammer" can be used to harvest resources from other items in the world. The car will occasionally develop "quirks", such as the horn sounding when the steering wheel is turned, or a door opening when the car radio is switched on; to fix a quirk, the player must use an MS-DOS computer called a "Tinker Station" and correctly input both the cause and effect of the quirk.
As the player traverses the world, they encounter various anomalies that affect gameplay: some are dangerous and damage the car or player, others have various effects like temporarily scrambling some of the car controls, while some are neutral by default and will merely help or hinder traversal. Some areas are also irradiated, actively damaging the player and slowly corroding the car. Additionally, zones can occasionally develop certain conditions that add further complexity, such as explosions being more powerful or extreme darkness. Plot In Washington's Olympic Peninsula in 1947, American researcher Dr. Ophelia "Oppy" Turner, her husband Allen, and her colleagues, including scientists Tobias Barlow and Francis Cooke, develop "LIM technology", a revolutionary experimental technology, in cooperation with the United States government. However, LIM technology experimentation leads to mysterious phenomena and unexplained disappearances in the region, which in 1955 prompts the creation of the U.S. Advanced Resonance Development Agency (ARDA). ARDA establishes the Olympic Exclusion Zone to secretly research the phenomena, referred to as "anomalies", while Oppy and her once-celebrated LIM technology fade from public interest. Initially consisting of approximately western Clallam County near Forks, the spread of anomalies and the worsening instability of the original Zone lead to its expansion in 1961 and 1967 to eventually cover , almost the entire Peninsula, before being completely evacuated and sealed in 1987 after ARDA's disestablishment. In 1998, the player, referred to as "the Driver", drives to the Zone's wall, but a roadblock forces them to take a detour through a forest, where a bright light suddenly teleports them into the Outer Zone (the 1967 Zone boundary) and destroys their van. Stumbling through the undergrowth, the Driver finds a still-operational station wagon and is contacted through its radio by Tobias and Francis, who direct the Driver to a garage owned by Oppy, who reluctantly agrees to assist the Driver and let them use the garage. Oppy, Tobias, and Francis explain the nature of anomalies, including one called a "Remnant", which inhabits inanimate objects and forms a psychic link with the host, gradually causing the host to become obsessed with the object to the point of insanity; they deduce the station wagon is a Remnant and agree to help the Driver separate from it and escape the Zone. Oppy gives the Driver an invention called an ARC device, which can teleport them to the garage in an emergency, and suggests they take the station wagon to a massive anomaly known as "Colossal Cappy" to confirm whether it is a Remnant by driving into it, a gambit which succeeds and confirms that the station wagon is indeed a Remnant. Francis reveals that the interaction between Cappy and the Remnant caused an event called the "Mass Hallucination", and that the signal from the event was equal and opposite to the Remnant and originated in the Deep Zone (the original 1955 Zone boundary), which they believe could cancel out the Remnant. Tobias and Francis have the Driver seek out three anomalies in the Mid Zone (the 1961 Zone boundary) known as "the Murals" to locate the Mass Hallucination source, while Oppy suggests the Driver explore the research facility where her husband Allen died in an experiment that caused the previous Mass Hallucination 40 years prior.
The group eventually locates the source within the Deep Zone, but with its disabled power grid making access impossible, they devise a plan to jump start the grid using the station wagon and their own battery supplies; however, when a power surge damages the batteries, Tobias sacrifices himself to complete the plan and get the Driver into the Deep Zone. In the Deep Zone, Oppy manages to supercharge the ARC device so it can work there. The Driver soon locates and enters the source, known as the Well, and is transported into a bizarre maze of television screens where they overhear past conversations between Oppy, Allen, Francis, and Tobias, as well as a deceased Tobias who leaves a final farewell to Francis and a parting message for Oppy from Allen. The Driver recovers the station wagon in the Well and returns to the garage, where Oppy and Francis reveal they heard everything there and that the Remnant is gone, though the station wagon is still linked to the Driver. After acknowledging to Francis that his theories, which she had previously dismissed, were correct, Oppy passes on her research and equipment to Francis and the Driver and finally leaves the Zone. Development and release After working at video game development studios like Sony Online Entertainment, Sucker Punch Productions, and Oculus VR, Cassandra Dracott founded Seattle-based Ironwood Studios in 2019 to create her own games. She conceived Pacific Drive while driving through the Olympic Peninsula; she felt that when driving through the Pacific Northwest "on a lonely road ... and the radio is playing a certain tune, it can be really memorable", comparing it to her childhood in Portland, Oregon. The car is loosely based on Dracott's 1989 Buick Estate and her first car, a Volvo station wagon. As she began developing a prototype of Pacific Drive, Dracott considered remaining solo but realized she would need a team as the concept began to grow. She began building the team at the beginning of the COVID-19 pandemic in 2020; they moved into their Seattle office in 2022. The team wanted the player's relationship with their car to be the most important gameplay factor; lead game designer Seth Rosen said "the car's health is generally a better indicator of how a run is going than your own". They attempted scripted "character building" moments for the vehicle but determined unscripted gameplay resonated better. Rosen avoided survival game elements he considered frustrating, such as inventory management and resource grinding. The team designed enemies and events whose behaviors are simplistic in isolation but create interesting scenarios in combination. They wanted enemies to be "pretty dangerous" but still allow the player to solve problems creatively while overcoming threats. Initial experiments with enemies controlled through artificial intelligence were scrapped as their behavior was too difficult to read while driving. The world's randomization was inspired by Derek Yu's work on Spelunky and his subsequent book for Boss Fight Books. Pacific Drive uses the Unreal Engine 4 game engine. Pacific Drive was announced on September 13, 2022, during PlayStation's State of Play presentation, alongside its debut trailer. It was originally scheduled to release for the PlayStation 5 and Windows in 2023. A gameplay trailer was released on February 9, 2023. In June, Ironwood Studios announced it had partnered with Kepler Interactive to publish the game. In August, the release window was delayed to early 2024 to allow for additional development without excessive overworking.
A story trailer was featured at the PC Gaming Show in November, revealing the release date of February 22, 2024. Reception Critical response Pacific Drive received "generally favorable" reviews from critics, according to review aggregator website Metacritic, and 80% of critics recommend the game according to OpenCritic. GameSpot's Mark Delaney and The Jimquisition's James Stephanie Sterling considered it among the year's best games to date, while Shacknews's TJ Denzer called it "one of the most interesting survival games I've ever played". Game World Navigator's Sergey Pletnev compared it to the novel Roadside Picnic (1972) in its depiction of a world filled with dangerous, incomprehensible anomalies. The game had sold over 600,000 copies by July 2024. PC Gamer's Christopher Livingston considered the station wagon among the best video game vehicles, praising the durability system. Push Square's Stephen Tailby felt the vehicle maintenance added "a great sense of progression", and Shacknews's Denzer enjoyed the car's customizability options but occasionally found it inexplicably awkward to drive. Some reviewers considered the maintenance overwhelming and tiresome; IGN's Sarah Thwaites wrote that "getting stuck with a quirk you can't figure out is a real momentum killer". GameSpot's Delaney found the driving more "engaging and enjoyable" than in other games. Reviewers concurred that Pacific Drive's gameplay was enjoyable but often frustrating; GamesRadar+'s Leon Hurley said the "unfair" challenges meant he completed the game "more embittered than empowered". IGN's Thwaites wrote it "often struggles to walk the fine line between being engaging and overcomplicated" by assigning the player too many tasks, ultimately distracting from its enjoyable atmosphere. GameSpot's Delaney found the game "unintentionally obtuse" but commended the accessibility options. Some critics criticized the user interface's tedious and complicated design and controls; Eurogamer's Chris Tapsell felt it "must have at least partially been designed to be deliberately awkward". Shacknews's Denzer found the Olympic Exclusion Zone "a character in and of itself", praising its beauty and cohesion. IGN's Thwaites lauded the worldbuilding and use of the Pacific Northwest but felt the repetitive gameplay impacted the narrative pacing. GameSpot's Delaney favorably compared the audio logs to the podcast Serial and found the variety of music enhanced the world's strangeness; GamesRadar+'s Hurley similarly felt the soundtrack amplified the atmosphere. Reviewers praised the "compelling" non-player characters and the enemy designs and behavior. Push Square's Tailby lauded the stylized visuals but criticized the inconsistent frame rate and loading times on the PlayStation 5 version. Accolades Prior to its release, Pacific Drive was nominated for Most Wanted Game at the Golden Joystick Awards and PC Gaming Show. The Science Fiction and Fantasy Writers Association added the game to the suggested reading list for the Nebula Awards, and the TIGA Games Industry Awards shortlisted it for the Narrative/Story-Telling and Creativity in Games awards. It was nominated for Best Debut Indie Game at the Game Awards 2024 in November. In December, it won Best Music at the inaugural Indie Game Awards, and was longlisted for four awards at the 21st British Academy Games Awards: Debut Game, Game Design, Narrative, and New Intellectual Property. Other media In December 2024, Atomic Monster acquired the rights to develop Pacific Drive into a television series.
It is set to be executive produced by Atomic Monster's James Wan, Michael Clear, and Rob Hackett, and Menagerie Productions' Jeff Ludwig. References External links 2024 video games Driving simulators Kepler Interactive games PlayStation 5 games Roguelike video games Single-player video games Survival video games Unreal Engine 4 games Video games developed in the United States Video games set in 1998 Video games set in Washington (state) Windows games
Pacific Drive (video game)
[ "Technology" ]
2,897
[ "Driving simulators", "Real-time simulation" ]
73,047,762
https://en.wikipedia.org/wiki/Cobalt%28II%29%20stearate
Cobalt(II) stearate is a metal-organic compound, a salt of cobalt and stearic acid with the chemical formula Co(C17H35COO)2. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid. Synthesis An exchange reaction of sodium stearate and cobalt dichloride: 2 C17H35COONa + CoCl2 → Co(C17H35COO)2 + 2 NaCl Physical properties Cobalt(II) stearate is a violet solid that occurs in several crystal structures. It is insoluble in water. Uses Cobalt(II) stearate is a high-performance bonding agent for rubber. The compound is suitable for applications in natural rubber, cis-polybutadiene rubber (cisdene), styrene-butadiene rubber, and their compounds, enabling them to bond easily with brass- or zinc-plated steel cord or metal plates as well as various bare steels, and it is especially suited to bonding with brass plating of various thicknesses. References Stearates Cobalt(II) compounds
Cobalt(II) stearate
[ "Chemistry" ]
184
[ "Inorganic compounds", "Inorganic compound stubs" ]
73,047,908
https://en.wikipedia.org/wiki/Adobe%20Enhanced%20Speech
Adobe Enhanced Speech is an online artificial intelligence software tool by Adobe that aims to significantly improve the quality of recorded speech that may be badly muffled, reverberant, full of artifacts, or tinny, converting it to a studio-grade, professional level regardless of the initial input's clarity. Users may upload mp3 or wav files up to an hour long and a gigabyte in size to the site, where they are converted relatively quickly; users are then free to listen to the converted version, toggle between it and the original as it plays, and download the result. Currently in beta and free to the public, it has been used in the restoration of old movies and in the creation of professional-quality podcasts, narrations, and similar recordings by those without sufficient microphones. Although the model still has some limitations, such as incompatibility with singing and occasional issues with excessively muffled source audio resulting in a light lisp in the improved version, it is otherwise noted as remarkably effective and efficient for its purpose. Utilizing advanced machine learning algorithms to distinguish between speech and background sounds, it enhances the quality of the speech by filtering out noise and artifacts, adjusting the pitch and volume levels, and normalizing the audio. This is accomplished by a network trained on a large dataset of speech samples from a diverse range of sources and then fine-tuned to optimize the output. References Enhanced Search Audio software Voice technology Deep learning software applications
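The stated upload limits (mp3 or wav, at most an hour of audio and a gigabyte of data) lend themselves to a local pre-flight check before submitting a file. The sketch below is purely illustrative and is not an Adobe API; it validates a wav file against those limits using only the Python standard library (the file name is hypothetical):

```python
import os
import wave

MAX_BYTES = 1_000_000_000  # stated limit: one gigabyte
MAX_SECONDS = 60 * 60      # stated limit: one hour

def check_wav_for_upload(path: str) -> None:
    """Raise ValueError if the wav file exceeds the documented upload limits."""
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        raise ValueError(f"{path}: {size} bytes exceeds the 1 GB limit")
    with wave.open(path, "rb") as wav:
        seconds = wav.getnframes() / wav.getframerate()
    if seconds > MAX_SECONDS:
        raise ValueError(f"{path}: {seconds:.0f} s exceeds the one-hour limit")

check_wav_for_upload("speech.wav")  # hypothetical input file
```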
Adobe Enhanced Speech
[ "Engineering" ]
306
[ "Audio engineering", "Audio software" ]
73,050,413
https://en.wikipedia.org/wiki/Pakistan%20Automated%20Fingerprint%20Identification%20System
The Pakistan Automated Fingerprint Identification System (PAFIS) is a biometric identification system used by law enforcement agencies in Pakistan to identify and track criminals and suspects. PAFIS was developed by the National Database and Registration Authority (NADRA) in collaboration with the Federal Investigation Agency (FIA) and other law enforcement agencies. PAFIS uses advanced fingerprint recognition technology to scan and match fingerprints of individuals against a central database of known criminals and suspects. The system can also search for partial or distorted fingerprints and can store data on palm prints and footprints as well. PAFIS has been used in a number of high-profile criminal investigations in Pakistan, including terrorist attacks and kidnappings. It has also been used to identify missing persons and to track down individuals wanted for criminal offenses. References Federal Investigation Agency
Pakistan Automated Fingerprint Identification System
[ "Chemistry", "Biology" ]
169
[ "Biochemistry stubs", "Biotechnology stubs", "Bioinformatics", "Bioinformatics stubs" ]
73,050,688
https://en.wikipedia.org/wiki/Tensor%20%28machine%20learning%29
In machine learning, the term tensor informally refers to two different concepts: (i) a way of organizing data and (ii) a multilinear (tensor) transformation. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an M-way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods. Tensor decomposition factorizes data tensors into smaller tensors. Operations on data tensors can be expressed in terms of matrix multiplication and the Kronecker product. The computation of gradients, a crucial aspect of backpropagation, can be performed using software libraries such as PyTorch and TensorFlow. Computations are often performed on graphics processing units (GPUs) using CUDA, and on dedicated hardware such as Google's Tensor Processing Unit or Nvidia's Tensor core. These developments have greatly accelerated neural network architectures, and increased the size and complexity of models that can be trained. History A tensor is by definition a multilinear map. In mathematics, this may express a multilinear relationship between sets of algebraic objects. In physics, tensor fields, considered as tensors at each point in space, are useful in expressing mechanics such as stress or elasticity. In machine learning, the exact use of tensors depends on the statistical approach being used. In 2001, the fields of signal processing and statistics were making use of tensor methods. Pierre Comon surveys the early adoption of tensor methods in the fields of telecommunications, radio surveillance, chemometrics and sensor processing. Linear tensor rank methods (such as PARAFAC/CANDECOMP) analyzed M-way arrays ("data tensors") composed of higher order statistics that were employed in blind source separation problems to compute a linear model of the data. He noted several early limitations in determining the tensor rank and efficient tensor rank decomposition. In the early 2000s, multilinear tensor methods crossed over into computer vision, computer graphics and machine learning with papers by Vasilescu, alone or in collaboration with Terzopoulos, such as Human Motion Signatures, TensorFaces, TensorTextures, and Multilinear Projection. Multilinear algebra, the algebra of higher-order tensors, is a suitable and transparent framework for analyzing the multifactor structure of an ensemble of observations and for addressing the difficult problem of disentangling the causal factors based on second order or higher order statistics associated with each causal factor. Tensor (multilinear) factor analysis disentangles and reduces the influence of different causal factors with multilinear subspace learning. When treating an image or a video as a 2- or 3-way array, i.e., a "data matrix/tensor", tensor methods reduce spatial or time redundancies, as demonstrated by Wang and Ahuja. Yoshua Bengio, Geoff Hinton and their collaborators briefly discuss the relationship between deep neural networks and tensor factor analysis beyond the use of M-way arrays ("data tensors") as inputs. One of the early uses of tensors for neural networks appeared in natural language processing. A single word can be expressed as a vector via Word2vec. Thus a relationship between two words can be encoded in a matrix.
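To make these encodings concrete, the following NumPy sketch (an illustration, not from the source; the outer product is just one simple way to build such an encoding) represents a word as a vector and a two-word relationship as a matrix, and previews the three-word case discussed next:

```python
import numpy as np

d = 4  # toy embedding dimension; real Word2vec vectors are typically 100-300 long
rng = np.random.default_rng(0)

# Each word is a vector (a mode-1 tensor), e.g. produced by Word2vec.
subject = rng.normal(size=d)
verb = rng.normal(size=d)
obj = rng.normal(size=d)

# A relationship between two words can be encoded in a matrix (a mode-2 tensor):
# the outer product records every pairwise interaction of their components.
pair_relation = np.outer(subject, verb)  # shape (d, d)

# Three-word (subject-object-verb) semantics extends this to a mode-3 tensor.
sov_relation = np.einsum("i,j,k->ijk", subject, verb, obj)  # shape (d, d, d)
print(pair_relation.shape, sov_relation.shape)  # (4, 4) (4, 4, 4)
```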
However, for more complex relationships such as subject-object-verb, it is necessary to build higher-dimensional networks. In 2009, the work of Sutskever introduced Bayesian Clustered Tensor Factorization to model relational concepts while reducing the parameter space. From 2014 to 2015, tensor methods became more common in convolutional neural networks (CNNs). Tensor methods organize neural network weights in a "data tensor", and analyze and reduce the number of neural network weights. Lebedev et al. accelerated CNN networks for character classification (the recognition of letters and digits in images) by using 4D kernel tensors. Definition Let $\mathbb{F}$ be a field such as the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$. A tensor is a multilinear transformation from a set of domain vector spaces to a range vector space: $\mathcal{T}\colon \mathbb{F}^{I_1} \times \cdots \times \mathbb{F}^{I_M} \to \mathbb{F}^{I_0}$. Here, $M$ and $I_0, I_1, \ldots, I_M$ are positive integers, and $M$ is the number of modes of a tensor (also known as the number of ways of a multi-way array). The dimensionality of mode $m$ is $I_m$, for $m = 1, \ldots, M$. In statistics and machine learning, an image is vectorized when viewed as a single observation, and a collection of vectorized images is organized as a "data tensor". For example, a set of facial images that are the consequences of multiple causal factors, such as facial geometry, expression, illumination condition, and viewing condition, may be organized into a data tensor (i.e., a multiway array) whose modes enumerate the pixels, the total number of facial geometries, the total number of expressions, the total number of illumination conditions, and the total number of viewing conditions. Tensor factorization methods such as TensorFaces and multilinear (tensor) independent component analysis factorize the data tensor into a set of vector spaces that span the causal factor representations, where an image is the result of a tensor transformation that maps a set of causal factor representations to the pixel space. Another approach to using tensors in machine learning is to embed various data types directly. For example, a grayscale image is commonly represented as a discrete 2-way array whose two modes are the number of rows and the number of columns. When an image is treated as a 2-way array or 2nd order tensor (i.e. as a collection of column/row observations), tensor factorization methods compute the image column space, the image row space and the normalized PCA coefficients or the ICA coefficients. Similarly, a color image with RGB channels may be viewed as a 3rd order data tensor or 3-way array. In natural language processing, a word might be expressed as a vector via the Word2vec algorithm, becoming a mode-1 tensor. The embedding of subject-object-verb semantics requires embedding relationships among three words. Because a word is itself a vector, subject-object-verb semantics could be expressed using mode-3 tensors. In practice the neural network designer is primarily concerned with the specification of embeddings, the connection of tensor layers, and the operations performed on them in a network. Modern machine learning frameworks manage the optimization, tensor factorization and backpropagation automatically. As unit values Tensors may be used as the unit values of neural networks, which extend the concept of scalar, vector and matrix values to multiple dimensions.
The output value of a single-layer unit is the sum-product of its input units and the connection weights, filtered through the activation function $f$: $y = f\left(\sum_n x_n\, w_n\right)$, where the $x_n$ are the input units and the $w_n$ the connection weights. If each output element is a scalar, then we have the classical definition of an artificial neural network. By replacing each unit component with a tensor, the network is able to express higher dimensional data such as images or videos. This use of tensors to replace unit values is common in convolutional neural networks where each unit might be an image processed through multiple layers. By embedding the data in tensors such network structures enable learning of complex data types. In fully connected layers Tensors may also be used to compute the layers of a fully connected neural network, where the tensor is applied to the entire layer instead of individual unit values. The output value of a single-layer unit is the sum-product of its input units and the connection weights filtered through the activation function: $y_m = f\left(\sum_n x_n\, w_{m,n}\right)$. The vectors $x$ and $y$ of input and output values can be expressed as mode-1 tensors, while the hidden weights can be expressed as a mode-2 tensor $W$. In this example the unit values are scalars while the tensor takes on the dimensions of the network layers. In this notation, the output values can be computed as a tensor product of the input and weight tensors: $y = f(W x)$, which computes the sum-product as a tensor multiplication (similar to matrix multiplication). This formulation of tensors enables the entire layer of a fully connected network to be efficiently computed by mapping the units and weights to tensors. In convolutional layers A different reformulation of neural networks allows tensors to express the convolution layers of a neural network. A convolutional layer has multiple inputs, each of which is a spatial structure such as an image or volume. The inputs are convolved by filtering before being passed to the next layer. A typical use is to perform feature detection or isolation in image recognition. Convolution is often computed as the multiplication of an input signal $x$ with a filter kernel $g$. In two dimensions the discrete, finite form is: $y_{i,j} = \sum_{m}\sum_{n} g_{m,n}\, x_{i-m,\, j-n}$, where the sums run over the width $w$ of the kernel. This definition can be rephrased as a matrix-vector product in terms of tensors that express the kernel, the data, and the inverse transform of the kernel. The derivation is more complex when the filtering kernel also includes a non-linear activation function such as sigmoid or ReLU. The hidden weights of the convolution layer are the parameters to the filter. These can be reduced with a pooling layer, which reduces the resolution (size) of the data, and can also be expressed as a tensor operation. Tensor factorization An important contribution of tensors in machine learning is the ability to factorize tensors to decompose data into constituent factors or reduce the learned parameters. Data tensor modeling techniques stem from the linear tensor decomposition (CANDECOMP/Parafac decomposition) and the multilinear tensor decompositions (Tucker). Tucker decomposition Tucker decomposition, for example, takes a 3-way array $\mathcal{X} \in \mathbb{F}^{I \times J \times K}$ and decomposes the tensor into three matrices $\mathcal{A}, \mathcal{B}, \mathcal{C}$ and a smaller core tensor $\mathcal{G}$. The shapes of the matrices and the new tensor are such that the total number of elements is reduced. The new tensors have shapes $\mathcal{A}\colon I \times P$, $\mathcal{B}\colon J \times Q$, $\mathcal{C}\colon K \times R$, $\mathcal{G}\colon P \times Q \times R$. Then the original tensor can be expressed as the tensor product of these four tensors: $\mathcal{X} = \mathcal{G} \times_1 \mathcal{A} \times_2 \mathcal{B} \times_3 \mathcal{C}$. In the example shown in the figure, the dimensions of the tensors are $\mathcal{X}$: I=8, J=6, K=3; $\mathcal{A}$: I=8, P=5; $\mathcal{B}$: J=6, Q=4; $\mathcal{C}$: K=3, R=2; $\mathcal{G}$: P=5, Q=4, R=2.
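A short NumPy check (a sketch, not from the source) reproduces the element counts for exactly these example dimensions and rebuilds the full tensor from the core and factor matrices via einsum:

```python
import numpy as np

I, J, K = 8, 6, 3  # dimensions of the original 3-way array
P, Q, R = 5, 4, 2  # Tucker ranks from the example

rng = np.random.default_rng(0)
A = rng.normal(size=(I, P))     # mode-1 factor matrix
B = rng.normal(size=(J, Q))     # mode-2 factor matrix
C = rng.normal(size=(K, R))     # mode-3 factor matrix
G = rng.normal(size=(P, Q, R))  # core tensor

# Reconstruction: X[i,j,k] = sum_{p,q,r} G[p,q,r] * A[i,p] * B[j,q] * C[k,r]
X = np.einsum("pqr,ip,jq,kr->ijk", G, A, B, C)

original = I * J * K                          # 8*6*3 = 144 elements
factored = A.size + B.size + C.size + G.size  # 40 + 24 + 6 + 40 = 110 elements
print(X.shape, original, factored)            # (8, 6, 3) 144 110
```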
The total number of elements in the Tucker factorization is $8{\times}5 + 6{\times}4 + 3{\times}2 + 5{\times}4{\times}2 = 110$. The number of elements in the original is $8 \times 6 \times 3 = 144$, resulting in a data reduction from 144 down to 110 elements, a reduction of 23% in parameters or data size. For much larger initial tensors, and depending on the rank (redundancy) of the tensor, the gains can be more significant. The work of Rabanser et al. provides an introduction to tensors with more details on the extension of Tucker decomposition to N-dimensions beyond the mode-3 example given here. Tensor trains Another technique for decomposing tensors rewrites the initial tensor as a sequence (train) of smaller sized tensors. A tensor-train (TT) is a sequence of tensors of reduced rank, called canonical factors. The original tensor can be expressed as the sum-product of the sequence. Introducing the method in 2011, Ivan Oseledets observed that Tucker decomposition is "suitable for small dimensions, especially for the three-dimensional case. For large d it is not suitable." Thus tensor-trains can be used to factorize larger tensors in higher dimensions. Tensor graphs The unified data architecture and automatic differentiation of tensors has enabled higher-level designs of machine learning in the form of tensor graphs. This leads to new architectures, such as tensor-graph convolutional networks (TGCN), which identify highly non-linear associations in data, combine multiple relations, and scale gracefully, while remaining robust and performant. These developments are impacting all areas of machine learning, such as text mining and clustering, time varying data, and neural networks wherein the input data is a social graph and the data changes dynamically. Hardware Tensors provide a unified way to train neural networks for more complex data sets. However, training is expensive to compute on classical CPU hardware. In 2014, Nvidia developed cuDNN, CUDA Deep Neural Network, a library for a set of optimized primitives written in the parallel CUDA language. CUDA and thus cuDNN run on dedicated GPUs that implement unified massive parallelism in hardware. These GPUs were not yet dedicated chips for tensors, but rather existing hardware adapted for parallel computation in machine learning. In the period 2015–2017 Google invented the Tensor Processing Unit (TPU). TPUs are dedicated, fixed function hardware units that specialize in the matrix multiplications needed for tensor products. Specifically, they implement an array of 65,536 multiply units that can perform a 256x256 matrix sum-product in just one global instruction cycle. Later in 2017, Nvidia released its own Tensor Core with the Volta GPU architecture. Each Tensor Core is a microunit that can perform a 4x4 matrix sum-product. There are eight tensor cores for each streaming multiprocessor (SM) block. The first GV100 GPU has 84 SMs, resulting in 672 tensor cores. This device accelerated machine learning by 12x over the previous Tesla GPUs. The number of tensor cores scales as the number of cores and SM units continue to grow in each new generation of cards. The development of GPU hardware, combined with the unified architecture of tensor cores, has enabled the training of much larger neural networks.
In 2022, the largest neural network was Google's PaLM, with 540 billion learned parameters (network weights); for comparison, the older GPT-3 language model, which produces human-like text, has over 175 billion learned parameters. Size is not everything: Stanford's much smaller 2023 Alpaca model claims to be better, having learned from Meta/Facebook's 2023 model LLaMA (the smaller, 7-billion-parameter variant). The widely popular chatbot ChatGPT is built on top of GPT-3.5 (and, after an update, GPT-4) using supervised and reinforcement learning. References Machine learning Tensors
Tensor (machine learning)
[ "Engineering" ]
2,942
[ "Artificial intelligence engineering", "Tensors", "Machine learning" ]
73,051,338
https://en.wikipedia.org/wiki/HD%20174500
HD 174500, also designated as HR 7097 or rarely 34 G. Telescopii, is a solitary white-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.16, placing it near the limit for naked eye visibility. Gaia DR3 parallax measurements place the object 692 light years away, and it is currently receding with a heliocentric radial velocity of . At its current distance, HD 174500's brightness is diminished by 0.39 magnitudes due to interstellar dust. It has an absolute magnitude of −0.82. HD 174500 has a stellar classification of A1 IV/V, indicating that it is an evolved A-type star with a luminosity class intermediate between a subgiant and a main-sequence star. It has 3 times the mass of the Sun and an enlarged radius of . It radiates 273 times the luminosity of the Sun from its photosphere at an effective temperature of . At the age of 370 million years, HD 174500 is currently on the subgiant track and is 1.8% past its main-sequence lifetime. Like many hot stars, it spins rapidly, having a projected rotational velocity of . It has a solar metallicity with [Fe/H] = +0.02. This object is located close to the 5th magnitude star HD 174387. However, they do not form a double star. References A-type main-sequence stars A-type subgiants Telescopium Telescopii, 34 CD-46 12676 174500 092669 7097
HD 174500
[ "Astronomy" ]
340
[ "Telescopium", "Constellations" ]
73,052,423
https://en.wikipedia.org/wiki/Double%20operator%20integral
In functional analysis, double operator integrals (DOI) are integrals of the form $Q_\phi T = \int_{M}\int_{N} \phi(\lambda, \mu)\, dE(\lambda)\, T\, dF(\mu)$, where $T$ is a bounded linear operator between two separable Hilbert spaces $H_1$ and $H_2$, $E$ and $F$ are two spectral measures taking values in $P(H_1)$ and $P(H_2)$, where $P(H)$ stands for the set of orthogonal projections over $H$, and $\phi$ is a scalar-valued measurable function called the symbol of the DOI. The integrals are to be understood in the form of Stieltjes integrals. Double operator integrals can be used to estimate the differences of two operators and have application in perturbation theory. The theory was mainly developed by Mikhail Shlyomovich Birman and Mikhail Zakharovich Solomyak in the late 1960s and 1970s; however, such integrals first appeared earlier in a paper by Daletskii and Krein. Double operator integrals The map $T \mapsto Q_\phi T$ is called a transformer. We simply write $Q_\phi$ when it is clear which spectral measures we are looking at. Originally Birman and Solomyak considered a Hilbert–Schmidt operator $T$ and defined a spectral measure $\mathcal{E}$ by $\mathcal{E}(\Lambda \times \Delta)\, T = E(\Lambda)\, T\, F(\Delta)$ for measurable rectangles $\Lambda \times \Delta$; the double operator integral can then be defined as $Q_\phi T = \left(\int_{M \times N} \phi\; d\mathcal{E}\right) T$ for bounded and measurable functions $\phi$. However one can look at more general operators $T$ as long as $Q_\phi T$ stays bounded. Examples Perturbation theory Consider the case where $H$ is a Hilbert space and let $A$ and $B$ be two bounded self-adjoint operators on $H$. Let $V := A - B$ and let $f$ be a function on a set $S$, such that the spectra $\sigma(A)$ and $\sigma(B)$ are in $S$. Then by the spectral theorem $f(A) = \int_{\sigma(A)} f(\lambda)\, dE(\lambda)$ and $f(B) = \int_{\sigma(B)} f(\mu)\, dF(\mu)$, hence $f(A) - f(B) = \iint \big(f(\lambda) - f(\mu)\big)\, dE(\lambda)\, dF(\mu)$ and so $f(A) - f(B) = \iint \frac{f(\lambda) - f(\mu)}{\lambda - \mu}\, dE(\lambda)\, V\, dF(\mu) = Q_\phi V$ with symbol $\phi(\lambda, \mu) = \frac{f(\lambda) - f(\mu)}{\lambda - \mu}$, where $E$ and $F$ denote the corresponding spectral measures of $A$ and $B$. In particular, whenever the transformer $Q_\phi$ is bounded, this representation yields the estimate $\|f(A) - f(B)\| \le \|Q_\phi\|\, \|A - B\|$. Literature References Functional analysis Definitions of mathematical integration
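A standard concrete instance, included here as an illustration rather than taken from the source: for $f(x) = x^2$ the symbol reduces to $\phi(\lambda, \mu) = \lambda + \mu$, and the perturbation formula can be verified by direct computation:

```latex
% Worked instance of the perturbation formula for f(x) = x^2 (illustrative).
% The symbol is phi(lambda, mu) = (lambda^2 - mu^2)/(lambda - mu) = lambda + mu, so
\[
Q_\phi V = \iint (\lambda + \mu)\, dE(\lambda)\, V\, dF(\mu)
         = A V + V B = A(A - B) + (A - B)B = A^2 - B^2 = f(A) - f(B),
\]
% using \iint \lambda\, dE(\lambda)\, V\, dF(\mu) = A V  and
%       \iint \mu\, dE(\lambda)\, V\, dF(\mu) = V B.
```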
Double operator integral
[ "Mathematics" ]
315
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
73,052,740
https://en.wikipedia.org/wiki/Bernard%20Vauquois
Bernard Vauquois ( — ) was a French mathematician and computer scientist. He was a pioneer of computer science and machine translation (MT) in France. An astronomer-turned-computer scientist, he is known for his work on the programming language ALGOL 60, and later for extensive work on the theoretical and practical problems of MT, of which the eponymous Vauquois triangle is one of the most widely-known contributions. He was a professor at what would become the Grenoble Alpes University. Biography Bernard Vauquois was initially a researcher at the French National Centre for Scientific Research (CNRS) from 1952 to 1958 at the Astrophysics Institute of the Meudon Observatory, after completing studies in mathematics, physics, and astronomy. From 1957, his research program also focused on methods applied to physics from the perspective of electronic computers, and he taught programming to physicists. This double interest in astrophysics and electronic computers is reflected in the subject of his thesis and that of the complementary thesis in physical sciences that he defended in 1958. In 1960, at 31 years old, he was appointed professor of computer science at the Grenoble University where, with professors Jean Kuntzmann and Noël Gastinel, he began activities in computer science. At that time, he was also working on the definition of the language ALGOL 60. Also in 1960, he founded the Centre d'Étude pour la Traduction Automatique (CETA), later renamed as Groupe d'Étude pour la Traduction Automatique (GETA) and currently known as GETALP, a team at the Laboratoire d'informatique de Grenoble, and soon showed his gift for rapid understanding, synthesis, and innovation, and his taste for personal communication across linguistic borders and barriers. After visiting a number of centers, mainly in the United States, where machine translation research was conducted, he analyzed the shortcomings of the "first-generation" approach, evaluated the potential of a new generation based on grammar and formal language theory, and proposed a new approach based on a representational "pivot" and the use of (declarative) rule systems that transform a sentence step by step from one level of representation to another. He led the GETA in constructing the first large second-generation system, applied to Russian–French, from 1962 to 1971. At the end of this period, the accumulated experience led him to correct some defects of the "pure" declarative and interlingual approach, and to use heuristic programming methods, implemented with procedural grammars written in LSPLs ("specialized languages for linguistic programming", langages spécialisés pour la programmation linguistique) that were developed under his direction, and integrated into the ARIANE-78 machine translation system. In 1974, when he cofounded the Leibniz laboratory, he proposed "multilevel structure descriptors" (descripteurs de structures multiniveaux) for translation units larger than the sentence. This idea, premonitory of later theoretical work (Ray Jackendoff, Gerald Gazdar), is still the cornerstone of all machine translation software built by GETA and the French national TA project. Bernard Vauquois' last contribution was "static grammar" (grammaire statique) in 1982–83, during the ESOPE project, the preparatory phase of the French national MT project. He was a key figure in the field of computational linguistics in France.
At CNRS, he was a member of section 22 of the National Committee in 1963: "General Linguistics, Modern Languages and Comparative Literature", and then, in 1969, of section 28: "General Linguistics, Foreign Languages and Literature". From 1965, he was vice-president of the Association for Natural Language Processing (ATALA). He was its president from 1966 to 1971. He was also one of the founders, in 1965, of the ICCL (International Committee on Computational Linguistics), which organizes COLING conferences. He was its president from 1969 to 1984. From France, he often collaborated with other countries (notably Canada, the United States, the USSR, Czechoslovakia, Japan, China, Brazil, Malaysia, and Thailand), working on the specification and implementation of grammars and dictionaries. He began cooperating with Malaysia, for example, in 1979, which led to the creation of the Automatic Terjemaan Project, with a first prototype of an English-Malay MT system demonstrated in 1980. Vauquois triangle The Vauquois triangle is a conceptual model and diagram illustrating possible approaches to the design of machine translation systems, first proposed in 1968. It arranges translation strategies by depth of analysis: direct translation at the base, transfer-based approaches at intermediate levels, and a language-independent interlingua at the apex, with analysis of the source language ascending one side of the triangle and generation of the target language descending the other. Legacy Bernard Vauquois is regarded as a pioneer of machine translation in France. He played a key role in developing the first large-scale second-generation machine translation system, and his work influenced the field of machine translation for many years. He supervised some twenty doctoral theses, most of them concerning formal aspects of natural and artificial languages, with an emphasis on machine translation. The Center for Studies on Automatic Translation, which Vauquois founded in 1960, later became the Group for the Study of Machine Translation and Automated Processing of Languages and Speech (GETALP). It is still a research institution in natural language processing. Vauquois was a prolific writer and speaker, disseminating knowledge about machine translation and related topics. His papers and presentations were instrumental in establishing the field of machine translation in France and beyond. Publications References 1929 births 1985 deaths French mathematicians French computer scientists Machine translation Natural language processing researchers People from Paris ALGOL 60 Programming language designers
Bernard Vauquois
[ "Technology" ]
1,147
[ "Machine translation", "Natural language and computing" ]
73,053,077
https://en.wikipedia.org/wiki/Henry%20Prevost%20Babbage
Henry Prevost Babbage (1824–1918) was a soldier in the Bengal Army of the East India Company. After retiring with the rank of major general, he continued the work of his father, Charles Babbage, whom he had assisted as a student. He organised and edited his father's papers and prototypes and arranged for their publication and completion. These works included Babbage's Calculating Engines (1889) and a working Analytical Engine Mill – a simplified portion of the full Analytical Engine design. Military career He was brevetted as a colonel in the Bengal Staff Corps on 10 June 1874. References 1824 births 1918 deaths Alumni of University College London British computer scientists British soldiers Major generals People educated at University College School
Henry Prevost Babbage
[ "Technology" ]
149
[ "Computing stubs" ]
73,053,543
https://en.wikipedia.org/wiki/HD%20174474
HD 174474, also designated as HR 7095 or rarely 35 G. Telescopii, is a solitary white-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.17, placing it near the limit for naked eye visibility. The object is located relatively close, at a distance of 244 light years based on Gaia DR3 parallax measurements, but is drifting closer with a heliocentric radial velocity of . At its current distance, HD 174474's brightness is diminished by 0.26 magnitudes due to interstellar dust. It has an absolute magnitude of +1.61. This is an ordinary A-type main-sequence star with a stellar classification of A2 V. It has double the mass of the Sun and 1.89 times the Sun's radius. It radiates 18.1 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 174474 is slightly metal deficient, with an iron abundance 22% below solar levels ([Fe/H] = −0.11). It is estimated to be 630 million years old based on stellar evolution models from David & Hillenbrand (2015). References A-type main-sequence stars 174474 092676 7095 CD-48 12769 Telescopium Telescopii, 35
HD 174474
[ "Astronomy" ]
292
[ "Telescopium", "Constellations" ]
73,055,811
https://en.wikipedia.org/wiki/List%20of%20atmospheric%20optical%20phenomena
Atmospheric optical phenomena include: Afterglow Airglow Alexander's band, the dark region between the two bows of a double rainbow. Alpenglow Anthelion Anticrepuscular rays Aurora (northern and southern lights, aurora borealis and aurora australis) Belt of Venus Brocken spectre Circumhorizontal arc Circumzenithal arc Cloud iridescence Crepuscular rays Earth's shadow Earthquake lights Glories Green flash Halos, of Sun or Moon, including sun dogs Haze Heiligenschein or halo effect, partly caused by the opposition effect Ice blink Light pillar Lightning Mirages (including Fata Morgana) Monochrome rainbow Moon dog Moonbow Nacreous cloud/Polar stratospheric cloud Rainbow Sprite (lightning) Subsun Sun dog Tangent arc Tyndall effect Upper-atmospheric lightning, including red sprites, blue jets, and ELVES Water sky See also References atmospheric optical phenomena Optical phenomena
List of atmospheric optical phenomena
[ "Physics" ]
201
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
60,855,500
https://en.wikipedia.org/wiki/Patricia%20Dove
Patricia Martin Dove is an American geochemist. She is a university distinguished professor and the C.P. Miles Professor of Science at Virginia Tech, with appointments in the Department of Geosciences, Department of Chemistry, and Department of Materials Science and Engineering. Her research focuses on the kinetics and thermodynamics of mineral reactions with aqueous solutions in biogeochemical systems. Much of her work is on crystal nucleation and growth during biomineralization and biomaterial interactions with mineralogical systems. She was elected a member of the National Academy of Sciences (NAS) in 2012, where she currently serves as chair of the Geology Section and is the immediate-past chair of Class I, Physical and Mathematical Sciences. Family and education Dove grew up on a working farm in Bedford County, Virginia. With the encouragement of her parents, she became interested in science as a child, collecting specimens of tree leaves and Indian arrowhead artifacts in the Piedmont region of Virginia. Dove participated in local science fairs and presented her research projects on plant growth at the Virginia Junior Academy of Science and the 1976 Westinghouse International Science and Engineering Fair, which later became the Intel International Science and Engineering Fair. She studied soil science and plant physiology in the Department of Agronomy at Virginia Tech and earned a bachelor's degree in 1980. Under the advisement of J. Donald Rimstidt, she then earned a master's degree in environmental geochemistry at Virginia Tech with investigations of scorodite solubility and the geochemistry of the Brinton Arsenic Mine. Dove completed a PhD degree in 1991 at Princeton University, where she worked with David Crerar to develop the hydrothermal mixed flow reactor (MFR). Using the MFR, she determined the hydrothermal dissolution kinetics of quartz in electrolyte solutions and the dissolution of the isostructural sulfate minerals celestine, anglesite and baryte. Dove subsequently received a National Science Foundation Postdoctoral Fellowship (1991-1993) to work with Michael Hochella on investigations of mineral surface-water interactions at Stanford University using the newly-developed atomic force microscope. Patricia Dove was born to Fuller Emerson Martin and Lou Ellen Martin, the oldest of four children. She met Joseph Dove at Virginia Tech, and they married in September 1980. They have a daughter, Meredith Dove, and a son, Emerson Dove. Patricia Dove has a life-long passion for horses and has competed in the dressage and reining disciplines. Career and research Dove was an assistant and tenured associate professor in the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology from 1993 to 2000. She returned to her alma mater, Virginia Polytechnic Institute and State University, in 2000 and leads the Biogeochemistry of Earth Processes research group. In 2008, Dove was appointed the C.P. Miles Professor of Science. In 2013, she was named a university distinguished professor. Dove and collaborators have made notable contributions to understanding mineral-water interactions in silica geochemistry and the biomineralization of carbonate mineral systems. She combines chemical principles with nanoanalysis and in-situ measurements of crystal nucleation, growth, and dissolution reactions.
Using in situ atomic force microscopy, they showed how elemental impurities are incorporated into the minerals of shells to affect the chemical composition, which can be used to reconstruct past environmental conditions. Dove demonstrated that temperature and magnesium carbonate availability can alter the composition and crystal form of minerals. Other work demonstrated that the amino acids and peptides in macromolecules often associated with biomineralizing tissues can act as crystal growth promoters or inhibitors to regulate the rate of skeletal formation. In 2003, Dove led an international endeavor to establish the current knowledge of the chemical processes that control biomineralization and called for an interdisciplinary effort to advance the field using new quantitative and high-resolution experimental and theoretical methods (Napa, California). Over the next decade, many biominerals and synthetic biomaterials were determined to form from small particles rather than by classical crystallization. In 2013, she organized an interdisciplinary workshop to find consensus on the basis of these observations (Berkeley, California). A multi-disciplinary consensus emerged for the concept of Crystallization by Particle Attachment (CPA), which was published in Science and rapidly found applications in diverse fields. The physical-chemical model for non-classical crystallization hypothesizes how an interplay of thermodynamic and kinetic factors allows the multiple observed pathways to crystal formation. Dove is a charter member of the Virginia Academy of Science, Engineering, and Medicine. The Virginia Academy of Science, Engineering, and Medicine (VASEM) was co-founded by Senator Mark Warner and the presidents of Virginia's research universities in collaboration with members of the National Academy of Sciences, National Academy of Engineering, and National Academy of Medicine who live or work in the Commonwealth of Virginia. As a state academy, VASEM provides technical expertise to the Virginia government. In 2016, Dove was appointed the second president of VASEM (2016-2019). Awards and honors 1995 Georgia Institute of Technology AMOCO CETL Junior Faculty Teaching Award 1996 Geochemical Society F.W. Clarke Medal 1999 United States Department of Energy Best University Research Award 2000 Mineralogical Society of America Fellow 2005 United States Department of Energy Best University Research Award 2008 American Geophysical Union Fellow 2010 Geochemical Society and European Association of Geochemistry Fellow 2012 National Academy of Sciences Elected member 2013 Office of the Governor of Virginia Outstanding Scientist Award 2014 Mineralogical Society of America Dana Medal 2016 Virginia Museum of Natural History Thomas Jefferson Medal 2022 International Mineralogical Association Medal of Excellence in Mineralogical Sciences Selected publications References Year of birth missing (living people) Living people American geochemists Princeton University alumni Virginia Tech faculty
Patricia Dove
[ "Chemistry" ]
1,182
[ "Geochemists", "American geochemists" ]
60,857,350
https://en.wikipedia.org/wiki/List%20of%20sigils%20of%20demons
In demonology, sigils are pictorial signatures attributed to demons, angels, or other beings. In the ceremonial magic of the Middle Ages, sigils were used in the summoning of these beings and were the pictorial equivalent to their true name. See also List of demons in the Ars Goetia List of occult symbols List of theological demons References External links Demonology Lists of symbols Magic symbols Demons-related lists
List of sigils of demons
[ "Mathematics" ]
86
[ "Symbols", "Lists of symbols" ]
60,858,807
https://en.wikipedia.org/wiki/List%20of%20symbols%20designated%20by%20the%20Anti-Defamation%20League%20as%20hate%20symbols
This is a list of hate symbols, including acronyms, numbers, phrases, logos, flags, gestures and other miscellaneous symbols used for hateful purposes, according to the Anti-Defamation League. Some of these items have been appropriated by hate groups and may have other, non-hate-group-related meanings, including anti-racist meanings. Acronyms Numerical Phrases Hate group logos Flags Gestures Miscellaneous symbols See also Armanen runes Cross burning Far-right subcultures Fascist symbolism List of fascist movements List of Ku Klux Klan organizations List of neo-Nazi organizations List of organizations designated by the Southern Poverty Law Center as hate groups List of white nationalist organizations Nazi symbolism The modern Federal Republic of Germany's Strafgesetzbuch section 86a References Informational notes a. This symbol, while sometimes used as a hate symbol, is also used as a religious symbol for many pagans, and should not be assumed to be a hate symbol in all contexts. Citations Nazi symbolism Hate symbols Anti-Defamation League
List of symbols designated by the Anti-Defamation League as hate symbols
[ "Mathematics" ]
203
[ "Symbols", "Lists of symbols" ]