Dataset columns: id (int64, 39 to 79M), url (string, 31-227 chars), text (string, 6-334k chars), source (string, 1-150 chars), categories (list, 1-6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0-30 items)
42,804,273
https://en.wikipedia.org/wiki/Ribosomally%20synthesized%20and%20post-translationally%20modified%20peptides
Ribosomally synthesized and post-translationally modified peptides (RiPPs), also known as ribosomal natural products, are a diverse class of natural products of ribosomal origin. Consisting of more than 20 sub-classes, RiPPs are produced by a variety of organisms, including prokaryotes, eukaryotes, and archaea, and they possess a wide range of biological functions. As a consequence of the falling cost of genome sequencing and the accompanying rise in available genomic data, scientific interest in RiPPs has increased in the last few decades. Because the chemical structures of RiPPs are more closely predictable from genomic data than are other natural products (e.g. alkaloids, terpenoids), their presence in sequenced organisms can, in theory, be identified rapidly. This makes RiPPs an attractive target of modern natural product discovery efforts. Definition RiPPs consist of any peptides (i.e. molecular weight below 10 kDa) that are ribosomally produced and undergo some degree of enzymatic post-translational modification. This combination of peptide translation and modification is referred to as "post-ribosomal peptide synthesis" (PRPS) in analogy with nonribosomal peptide synthesis (NRPS). Historically, the current sub-classes of RiPPs were studied individually, and common practices in nomenclature varied accordingly in the literature. More recently, with the advent of broad genome sequencing, it has been realized that these natural products share a common biosynthetic origin. In 2013, a set of uniform nomenclature guidelines was agreed upon and published by a large group of researchers in the field. Prior to this report, RiPPs were referred to by a variety of designations, including post-ribosomal peptides, ribosomal natural products, and ribosomal peptides. The acronym "RiPP" stands for "ribosomally synthesized and post-translationally modified peptide". Prevalence and applications RiPPs constitute one of the major superfamilies of natural products, like the alkaloids, terpenoids, and nonribosomal peptides, although they tend to be larger, with molecular weights commonly in excess of 1000 Da. The advent of next-generation sequencing methods has made genome mining of RiPPs a common strategy. In part due to their increased discovery and hypothesized ease of engineering, the use of RiPPs as drugs is increasing. Although they are ribosomal peptides in origin, RiPPs are typically categorized as small molecules rather than biologics due to their chemical properties, such as moderate molecular weight and relatively high hydrophobicity. The uses and biological activities of RiPPs are diverse. RiPPs in commercial use include nisin, a food preservative; thiostrepton, a veterinary topical antibiotic; and nosiheptide and duramycin, which are animal feed additives. Phalloidin functionalized with a fluorophore is used in microscopy as a stain due to its high affinity for actin. Anantin is a RiPP used in cell biology as an atrial natriuretic peptide receptor inhibitor. One derivatized RiPP in clinical trials in 2012-2013 was LFF571, a derivative of the thiopeptide GE2270-A. Phase II clinical trials of LFF571 for the treatment of Clostridioides difficile infections showed safety and efficacy comparable to vancomycin, but were terminated early because the overall results were unfavorable. Also recently in clinical trials was NVB302, a derivative of the lantibiotic actagardine, for the treatment of Clostridioides difficile infection. 
Duramycin has completed phase II clinical trials for the treatment of cystic fibrosis. Other bioactive RiPPs include the antibiotics cyclothiazomycin and bottromycin, the ultra-narrow spectrum antibiotic plantazolicin, and the cytotoxin patellamide A. Streptolysin S, the toxic virulence factor of Streptococcus pyogenes, is also a RiPP. Additionally, human thyroid hormone itself is a RiPP due to its biosynthetic origin from thyroglobulin. Classifications Amatoxins and phallotoxins Amatoxins and phallotoxins are 8- and 7-membered natural products, respectively, characterized by N-to-C cyclization in addition to a tryptathionine motif derived from the crosslinking of Cys and Trp. The amatoxins and phallotoxins also differ from other RiPPs based on the presence of a C-terminal recognition sequence in addition to the N-terminal leader peptide. α-Amanitin, an amatoxin, has a number of posttranslational modifications in addition to macrocyclization and formation of the tryptathionine bridge: oxidation of the tryptathionine leads to the presence of a sulfoxide, and numerous hydroxylations decorate the natural product. As an amatoxin, α-amanitin is an inhibitor of RNA polymerase II. Bottromycins Bottromycins contain a C-terminal decarboxylated thiazole in addition to a macrocyclic amidine. There are currently six known bottromycin compounds, which differ in the extent of side chain methylation, an additional characteristic of the bottromycin class. The total synthesis of bottromycin A2 was required to definitively determine the structure of the first bottromycin. Thus far, gene clusters predicted to produce bottromycins have been identified in the genus Streptomyces. Bottromycins differ from other RiPPs in that there is no N-terminal leader peptide. Rather, the precursor peptide has a C-terminal extension of 35-37 amino acids, hypothesized to act as a recognition sequence for the posttranslational machinery. Cyanobactins Cyanobactins are diverse metabolites from cyanobacteria with N-to-C macrocyclization of a 6–20 amino acid chain; close to 30% of all cyanobacterial strains are thought to contain cyanobactin gene clusters. However, while thus far all cyanobactins have been credited to cyanobacteria, there exists the possibility that other organisms could produce similar natural products. The precursor peptide of the cyanobactin family is traditionally designated the "E" gene, whereas precursor peptides are designated gene "A" in most RiPP gene clusters. In cyanobactins, "A" instead encodes a serine protease involved in cleavage of the leader peptide and subsequent macrocyclization of the peptide natural product, acting in combination with an additional serine protease homologue encoded by gene "G". Members of the cyanobactin family may bear thiazolines/oxazolines, thiazoles/oxazoles, and methylations depending on additional modification enzymes. For example, perhaps the most famous cyanobactin is patellamide A, which in its final state contains two thiazoles, a methyloxazoline, and an oxazoline on a macrocycle derived from 8 amino acids. Lanthipeptides Lanthipeptides are one of the most well-studied families of RiPPs. The family is characterized by the presence of lanthionine (Lan) and 3-methyllanthionine (MeLan) residues in the final natural product. There are four major classes of lanthipeptides, delineated by the enzymes responsible for installation of Lan and MeLan. The dehydratase and cyclase can be two separate proteins or one multifunctional enzyme. 
Previously, lanthipeptides were known as "lantipeptides" before a consensus was reached in the field. Lantibiotics are lanthipeptides that have known antimicrobial activity. The founding member of the lanthipeptide family, nisin, is a lantibiotic that has been used to prevent the growth of food-borne pathogens for over 40 years. Lasso peptides Lasso peptides are short peptides containing an N-terminal macrolactam macrocycle "ring" through which a linear C-terminal "tail" is threaded. Because of this threaded-loop topology, these peptides resemble lassos, giving rise to their name. They are members of a larger class of amino-acid-based lasso structures. Additionally, lasso peptides are formally rotaxanes. The N-terminal "ring" can be from 7 to 9 amino acids long and is formed by an isopeptide bond between the N-terminal amine of the first amino acid of the peptide and the carboxylate side chain of an aspartate or glutamate residue. The C-terminal "tail" ranges from 7 to 15 amino acids in length. The first amino acid of lasso peptides is almost invariably glycine or cysteine, with mutations at this site not being tolerated by known enzymes; bioinformatics-based approaches to lasso peptide discovery have thus used this as a constraint. However, some lasso peptides were recently discovered that contain serine or alanine as their first residue. The threading of the lasso tail is trapped either by disulfide bonds between ring and tail cysteine residues (class I lasso peptides), by steric effects due to bulky residues on the tail (class II lasso peptides), or both (class III lasso peptides). The compact structure makes lasso peptides frequently resistant to proteases and thermal unfolding. Linear azol(in)e-containing peptides Linear azol(in)e-containing peptides (LAPs) contain thiazoles and oxazoles, or their reduced thiazoline and oxazoline forms. Thiazol(in)es are the result of cyclization of Cys residues in the precursor peptide, while (methyl)oxazol(in)es are formed from Thr and Ser. Azole and azoline formation also modifies the residue in the -1 position, or directly N-terminal to the Cys, Ser, or Thr. A dehydrogenase in the LAP gene cluster is required for oxidation of azolines to azoles. Plantazolicin is a LAP with extensive cyclization. Two sets of five heterocycles endow the natural product with structural rigidity and unusually selective antibacterial activity. Streptolysin S (SLS) is perhaps the most well-studied and most famous LAP, in part because its structure has remained unknown since the discovery of SLS in 1901. Thus, while the biosynthetic gene cluster suggests SLS is a LAP, structural confirmation is lacking. Microcins Microcins are RiPPs produced by Enterobacteriaceae with molecular weights below 10 kDa. Many microcins, such as microcin E492, microcin B17 (a LAP), and microcin J25 (a lasso peptide), are also members of other RiPP families. Instead of being classified based on posttranslational modifications or modifying enzymes, microcins are identified by molecular weight, native producer, and antibacterial activity. Microcins are either plasmid- or chromosome-encoded and specifically have activity against Enterobacteriaceae. Because the producing organisms are themselves Enterobacteriaceae, the gene cluster contains not only a precursor peptide and modification enzymes, but also a self-immunity gene to protect the producing strain and genes enabling export of the natural product. 
Microcins have bioactivity against Gram-negative bacteria but usually display narrow-spectrum activity due to their hijacking of specific receptors involved in the transport of essential nutrients. Thiopeptides Most of the characterized thiopeptides have been isolated from Actinobacteria. General structural features of thiopeptide macrocycles are dehydrated amino acids and thiazole rings, formed from dehydrated serine/threonine and cyclized cysteine residues, respectively. The thiopeptide macrocycle is closed with a six-membered nitrogen-bearing ring. The oxidation state and substitution pattern of the nitrogenous ring determine the series of the thiopeptide natural product. While the mechanism of macrocyclization is not known, the nitrogenous ring can exist in thiopeptides as a piperidine, a dehydropiperidine, or a fully oxidized pyridine. Additionally, some thiopeptides bear a second macrocycle, which carries a quinaldic acid or indolic acid residue derived from tryptophan. Perhaps the most well-characterized thiopeptide, thiostrepton A, contains a dehydropiperidine ring and a second, quinaldic acid-containing macrocycle. Four residues are dehydrated during posttranslational modification, and the final natural product also bears four thiazoles and one azoline. Other RiPPs Autoinducing peptides (AIPs) and quorum sensing peptides are used as signaling molecules in the process called quorum sensing. AIPs are characterized by the presence of a cyclic ester or thioester, unlike other regulatory peptides, which are linear. In pathogens, exported AIPs bind to extracellular receptors that trigger the production of virulence factors. In Staphylococcus aureus, AIPs are biosynthesized from a precursor peptide composed of an N-terminal leader region, the core region, and a negatively charged C-terminal tail region that is, along with the leader peptide, cleaved before AIP export. The term "bacterial head-to-tail cyclized peptides" refers exclusively to ribosomally synthesized peptides with 35-70 residues and a peptide bond between the N- and C-termini; these are sometimes referred to as bacteriocins, although that term is used more broadly. The distinctive nature of this class lies not only in the relatively large size of the natural products but also in the modifying enzymes responsible for macrocyclization. Other N-to-C cyclized RiPPs, such as the cyanobactins and orbitides, have specialized biosynthetic machinery for macrocyclization of much smaller core peptides. Thus far, these bacteriocins have been identified only in Gram-positive bacteria. Enterocin AS-48 was isolated from Enterococcus and, like other bacteriocins, is relatively resistant to high temperature, pH changes, and many proteases as a result of macrocyclization. Based on solution structures and sequence alignments, these bacteriocins appear to take on similar 3D structures despite little sequence homology, contributing to their stability and resistance to degradation. Conopeptides and other toxoglossan peptides are the components of the venom of predatory marine snails, such as the cone snails (Conus). Venom peptides from cone snails are generally smaller than those found in other animal venoms (10-30 amino acids vs. 30-90 amino acids) and have more disulfide crosslinks. A single species may have 50-200 conopeptides encoded in its genome, recognizable by a well-conserved signal sequence. Cyclotides are RiPPs with a head-to-tail cyclization and three conserved disulfide bonds that form a knotted structure called a cyclic cystine knot motif. 
No other posttranslational modifications have been observed on the characterized cyclotides, which are between 28 and 37 amino acids in size. Cyclotides are plant natural products, and the different cyclotides appear to be species-specific. While many activities have been reported for cyclotides, it has been hypothesized that all are united by a common mechanism of binding to and disrupting the cell membrane. Glycocins are glycosylated antimicrobial RiPPs. Only two members have been fully characterized, making this a small RiPP class. Sublancin 168 and glycocin F are both Cys-glycosylated and, in addition, have disulfide bonds between non-glycosylated Cys residues. While both characterized members bear S-glycosyl groups, RiPPs bearing O- or N-linked carbohydrates will also be included in this family as they are discovered. Linaridins are characterized by C-terminal aminovinyl cysteine residues. While this posttranslational modification is also seen in the lanthipeptides epidermin and mersacidin, linaridins do not have Lan or MeLan residues. In addition, the linaridin moiety is formed from the modification of two Cys residues, whereas lanthipeptide aminovinyl cysteines are formed from Cys and dehydroalanine (Dha). The first linaridin to be characterized was cypemycin. Microviridins are cyclic N-acetylated trideca- and tetradecapeptides with ω-ester and/or ω-amide bonds. Ester and amide formation between glutamate or aspartate ω-carboxy groups and serine/threonine hydroxyl or lysine ε-amino groups creates the macrocycles in the final natural product. These RiPPs function as protease inhibitors and were originally isolated from Microcystis viridis. Gene clusters encoding microviridins have also been identified in genomes across the Bacteroidetes and Proteobacteria phyla. Orbitides are plant-derived N-to-C cyclized peptides with no disulfide bonds. Also referred to as Caryophyllaceae-like homomonocyclopeptides, orbitides are 5-12 amino acids in length and are composed mainly of hydrophobic residues. Similar to the amatoxins and phallotoxins, the gene sequences of orbitides suggest the presence of a C-terminal recognition sequence. In flaxseed (Linum usitatissimum), BLAST searching identified a precursor peptide that potentially contains five core peptides separated by putative recognition sequences. Proteusins are named after Proteus, a shape-shifting Greek sea god. To date, the only known members of the proteusin family are the polytheonamides. They were originally presumed to be nonribosomal natural products due to the presence of many D-amino acids and other non-proteinogenic amino acids. However, a metagenomic study revealed these natural products to be the most extensively modified class of RiPPs known to date. Six enzymes are responsible for installing a total of 48 posttranslational modifications onto the polytheonamide A and B precursor peptides, including 18 epimerizations. Polytheonamides are exceptionally large, as a single molecule is able to span a cell membrane and form an ion channel. Sactipeptides contain intramolecular linkages between the sulfur of Cys residues and the α-carbon of another residue in the peptide. A number of nonribosomal peptides bear the same modification. In 2003, the first RiPP with a sulfur-to-α-carbon linkage was reported when the structure of subtilosin A was determined using isotopically enriched media and NMR spectroscopy. 
In the case of subtilosin A, isolated from Bacillus subtilis 168, the Cα crosslinks between Cys4 and Phe31, Cys7 and Thr28, and Cys13 and Phe22 are not the only posttranslational modifications; the C- and N-termini form an amide bond, resulting in a circular structure that is conformationally restricted by the Cα bonds. Sactipeptides with antimicrobial activity are commonly referred to as sactibiotics (sulfur-to-alpha-carbon antibiotics). Biosynthesis RiPPs are characterized by a common biosynthetic strategy wherein genetically encoded peptides undergo translation and subsequent chemical modification by biosynthetic enzymes. Common features All RiPPs are synthesized first at the ribosome as a precursor peptide. This peptide consists of a core peptide segment that is typically preceded (and occasionally followed) by a leader peptide segment of typically ~20-110 residues. The leader peptide is usually important for enabling enzymatic processing of the precursor peptide, by aiding recognition of the core peptide by biosynthetic enzymes, and for cellular export. Some RiPPs also contain a recognition sequence C-terminal to the core peptide; these sequences are involved in excision and cyclization. Additionally, eukaryotic RiPPs may contain a signal segment in the precursor peptide that helps direct the peptide to cellular compartments. During RiPP biosynthesis, the unmodified precursor peptide (containing an unmodified core peptide, UCP) is recognized and chemically modified sequentially by biosynthetic enzymes (PRPS). Examples of modifications include dehydration (e.g. lanthipeptides, thiopeptides), cyclodehydration (e.g. thiopeptides), prenylation (e.g. cyanobactins), and cyclization (e.g. lasso peptides), among others. The resulting modified precursor peptide (containing a modified core peptide, MCP) then undergoes proteolysis, wherein the non-core regions of the precursor peptide are removed. This results in the mature RiPP. Nomenclature Papers published prior to a recent community consensus employ differing sets of nomenclature. The precursor peptide has been referred to previously as the prepeptide, prepropeptide, or structural peptide. The leader peptide has been referred to as a propeptide, pro-region, or intervening region. Historical alternate terms for the core peptide include propeptide, structural peptide, and toxin region (for conopeptides, specifically). Family-specific features Lanthipeptides Lanthipeptides are characterized by the presence of lanthionine (Lan) and 3-methyllanthionine (MeLan) residues. Lan residues are formed from a thioether bridge between Cys and Ser, while MeLan residues are formed from the linkage of Cys to a Thr residue. The biosynthetic enzymes responsible for Lan and MeLan installation first dehydrate Ser and Thr to dehydroalanine (Dha) and dehydrobutyrine (Dhb), respectively. Subsequent thioether crosslinking occurs through a Michael-type addition of Cys onto Dha or Dhb. Four classes of lanthipeptide biosynthetic enzymes have been designated. Class I lanthipeptides have dedicated lanthipeptide dehydratases, called LanB enzymes, though more specific designations are used for particular lanthipeptides (e.g. NisB is the nisin dehydratase). A separate cyclase, LanC, is responsible for the second step in Lan and MeLan biosynthesis. However, class II, III, and IV lanthipeptides have bifunctional lanthionine synthetases in their gene clusters, meaning a single enzyme carries out both the dehydration and cyclization steps. 
Class II synthetases, designated LanM synthetases, have N-terminal dehydration domains with no sequence homology to other lanthipeptide biosynthetic enzymes; the cyclase domain has homology to LanC. Class III (LanKC) and class IV (LanL) enzymes have similar N-terminal lyase and central kinase domains, but diverge in their C-terminal cyclization domains: the LanL cyclase domain is homologous to LanC, but the class III enzymes lack the Zn-ligand binding domains. Linear azol(in)e-containing peptides The hallmark of linear azol(in)e-containing peptide (LAP) biosynthesis is the formation of azol(in)e heterocycles from the nucleophilic amino acids serine, threonine, or cysteine. This is accomplished by three enzymes referred to as the B, C, and D proteins; the precursor peptide is referred to as the A protein, as in other classes. The C protein is mainly involved in leader peptide recognition and binding and is sometimes called a scaffolding protein. The D protein is an ATP-dependent cyclodehydratase that catalyzes the cyclodehydration reaction, resulting in formation of an azoline ring. This occurs by direct activation of the amide backbone carbonyl with ATP, resulting in stoichiometric ATP consumption. The C and D proteins are occasionally present as a single, fused protein, as is the case for trunkamide biosynthesis. The B protein is a flavin mononucleotide (FMN)-dependent dehydrogenase that oxidizes certain azoline rings to azoles. The B protein is typically referred to as the dehydrogenase; the C and D proteins together form the cyclodehydratase, although the D protein alone performs the cyclodehydration reaction. Early work on microcin B17 adopted a different nomenclature for these proteins, but the field has since converged on the consensus nomenclature described above. Cyanobactins Cyanobactin biosynthesis requires proteolytic cleavage of both the N-terminal and C-terminal portions of the precursor peptide. The defining proteins are thus an N-terminal protease, referred to as the A protein, and a C-terminal protease, referred to as the G protein. The G protein is also responsible for macrocyclization. For cyanobactins, the precursor peptide is referred to as the E peptide. Minimally, the E peptide requires a leader peptide region, a core (structural) region, and both N-terminal and C-terminal protease recognition sequences. In contrast to most RiPPs, for which a single precursor peptide encodes a single natural product via a lone core peptide, cyanobactin E peptides can contain multiple core regions; multiple E peptides can even be present in a single gene cluster. Many cyanobactins also undergo heterocyclization by a heterocyclase (referred to as the D protein), which installs oxazoline or thiazoline moieties on Ser/Thr/Cys residues prior to the action of the A and G proteases. The heterocyclase is an ATP-dependent YcaO homologue that behaves biochemically in the same manner as the YcaO-domain cyclodehydratases in thiopeptide and linear azol(in)e-containing peptide (LAP) biosynthesis (described above). A common modification is prenylation of hydroxyl groups by an F protein prenyltransferase. Oxidation of azoline heterocycles to azoles can also be accomplished by an oxidase domain located on the G protein. Unusually for ribosomal peptides, cyanobactins can include D-amino acids; these can occur adjacent to azole or azoline residues. The functions of some proteins found commonly in cyanobactin biosynthetic gene clusters, the B and C proteins, are unknown. 
Thiopeptides Thiopeptide biosynthesis involves particularly extensive modification of the core peptide scaffold. Indeed, due to the highly complex structures of thiopeptides, it was commonly thought that these natural products were nonribosomal peptides. Recognition of the ribosomal origin of these molecules came in 2009 with the independent discovery of the gene clusters for several thiopeptides. The standard nomenclature for thiopeptide biosynthetic proteins follows that of the thiomuracin gene cluster. In addition to the precursor peptide, referred to as the A peptide, thiopeptide biosynthesis requires at least six genes. These include lanthipeptide-like dehydratases, designated the B and C proteins, which install dehydroalanine and dehydrobutyrine moieties by dehydrating Ser/Thr precursor residues. Azole and azoline synthesis is effected by the E protein, the dehydrogenase, and the G protein, the cyclodehydratase. The nitrogen-containing heterocycle is installed by the D protein cyclase via a putative [4+2] cycloaddition of dehydroalanine moieties to form the characteristic macrocycle. The F protein is responsible for binding of the leader peptide. Thiopeptide biosynthesis is biochemically similar to that of cyanobactins, lanthipeptides, and linear azol(in)e-containing peptides (LAPs). As with cyanobactins and LAPs, azole and azoline synthesis occurs via the action of an ATP-dependent YcaO-domain cyclodehydratase. In contrast to LAPs, where cyclodehydration occurs via the action of two distinct proteins responsible for leader peptide binding and cyclodehydrative catalysis, these functions are fused into a single protein (the G protein) in cyanobactin and thiopeptide biosynthesis. However, in thiopeptides, an additional protein, designated the Ocin-ThiF-like protein (the F protein), is necessary for leader peptide recognition and potentially for recruiting other biosynthetic enzymes. Lasso peptides Lasso peptide biosynthesis requires at least three genes, referred to as the A, B, and C genes. The A gene encodes the precursor peptide, which is modified by the B and C proteins into the mature natural product. The B protein is an adenosine triphosphate-dependent cysteine protease that cleaves the leader region from the precursor peptide. The C protein displays homology to asparagine synthetase and is thought to activate the carboxylic acid side chain of a glutamate or aspartate residue via adenylylation. The N-terminal amine liberated by the B protein (the protease) then reacts with this activated side chain to form the macrocycle-forming isopeptide bond. The exact steps and reaction intermediates of lasso peptide biosynthesis remain unknown due to experimental difficulties associated with the proteins. Commonly, the B protein is referred to as the lasso protease, and the C protein is referred to as the lasso cyclase. Some lasso peptide biosynthetic gene clusters also require an additional protein of unknown function. Additionally, lasso peptide gene clusters usually include an ABC transporter (D protein) or an isopeptidase, although these are not strictly required for biosynthesis and are sometimes absent. No X-ray crystal structure is yet known for any lasso peptide biosynthetic protein. The biosynthesis of lasso peptides is particularly interesting due to the inaccessibility of the threaded-lasso topology to chemical peptide synthesis. See also Nonribosomal peptide References Biosynthesis Molecular biology Enzymes Peptides
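A computational aside, not part of the article above: the precursor-to-core logic described under "Common features" maps naturally onto simple sequence analysis. The Python sketch below uses an entirely hypothetical precursor sequence and assumes a double-glycine-style cleavage motif for illustration (such motifs do end the leader in many class II lanthipeptide precursors, but real leader boundaries are predicted with dedicated genome-mining tools); it splits the precursor into leader and core and tallies the residues that common RiPP enzymes act on.

```python
# Toy sketch of the RiPP precursor -> core logic described above.
# The sequence and the "GG" cleavage motif are hypothetical placeholders.

AVG_MASS = {  # average residue masses in daltons
    "G": 57.05, "A": 71.08, "S": 87.08, "T": 101.10, "C": 103.14,
    "V": 99.13, "L": 113.16, "I": 113.16, "P": 97.12, "F": 147.18,
    "W": 186.21, "M": 131.19, "N": 114.10, "Q": 128.13, "Y": 163.18,
    "D": 115.09, "E": 129.12, "K": 128.17, "R": 156.19, "H": 137.14,
}
WATER = 18.02

def split_precursor(precursor, motif="GG"):
    """Split at the last occurrence of a leader-end motif."""
    cut = precursor.rfind(motif)
    if cut == -1:
        raise ValueError("no cleavage motif found")
    cut += len(motif)
    return precursor[:cut], precursor[cut:]

def core_stats(core):
    mass = sum(AVG_MASS[aa] for aa in core) + WATER
    return {
        "length": len(core),
        "approx_mass_da": round(mass, 1),          # RiPP cores stay < 10 kDa
        "dehydratable_ser_thr": sum(core.count(aa) for aa in "ST"),
        "heterocyclizable_cys": core.count("C"),
    }

leader, core = split_precursor("MKDLLNAASGSGG" + "ITSISCCTPGC")
print(leader, core, core_stats(core))
```

Genome-mining tools such as antiSMASH automate this kind of analysis at genome scale, combining precursor detection with identification of the modification enzymes described above.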
Ribosomally synthesized and post-translationally modified peptides
[ "Chemistry", "Biology" ]
6,664
[ "Biomolecules by chemical classification", "Peptides", "Biosynthesis", "Chemical synthesis", "Molecular biology", "Biochemistry", "Metabolism" ]
42,805,449
https://en.wikipedia.org/wiki/Surrobody
Based upon the pre-B cell receptor (pre-BCR), surrobodies are non-naturally occurring, antibody-like proteins with high affinity for their antigen. The trimeric pre-BCR comprises an antibody heavy chain paired with two surrogate light chain components. Surrobodies have been generated for both therapeutic and research applications. Xu et al. have generated combinatorial libraries based on these pre-BCR proteins in which diverse heavy chains are paired with fixed surrogate light chain components. These libraries have been expressed in mammalian, Escherichia coli, and phagemid systems to generate proteins with high affinity for their target. Surrobodies were patented by Sea Lane Biotechnologies in 2012. References Immune system
Surrobody
[ "Biology" ]
154
[ "Immune system", "Organ systems" ]
42,806,211
https://en.wikipedia.org/wiki/Conway%20criterion
In the mathematical theory of tessellations, the Conway criterion, named for the English mathematician John Horton Conway, is a sufficient rule for when a prototile will tile the plane. It consists of the following requirements: the tile must be a closed topological disk with six consecutive points A, B, C, D, E, and F on the boundary such that: the boundary part from A to B is congruent to the boundary part from E to D by a translation T where T(A) = E and T(B) = D; each of the boundary parts BC, CD, EF, and FA is centrosymmetric—that is, each one is congruent to itself when rotated by 180 degrees around its midpoint; some of the six points may coincide, but at least three of them must be distinct. Any prototile satisfying Conway's criterion admits a periodic tiling of the plane—and does so using only 180-degree rotations. The Conway criterion is a sufficient condition to prove that a prototile tiles the plane, but not a necessary one; there are tiles that fail the criterion and still tile the plane. Every Conway tile is foldable into either an isotetrahedron or a rectangle dihedron, and conversely, every net of an isotetrahedron or rectangle dihedron is a Conway tile. History The Conway criterion applies to any shape that is a closed disk—if the boundary of such a shape satisfies the criterion, then the shape will tile the plane. Although the graphic artist M.C. Escher never articulated the criterion, he discovered it in the mid-1920s. One of his earliest tessellations, later numbered 1 by him, illustrates his understanding of the conditions in the criterion. Six of his earliest tessellations all satisfy the criterion. In 1963 the German mathematician Heinrich Heesch described the five types of tiles that satisfy the criterion. He showed each type with a notation that identifies the edges of a tile as one travels around the boundary: CCC, CCCC, TCTC, TCTCC, and TCCTCC, where C means a centrosymmetric edge and T means a translated edge. Conway was likely inspired by Martin Gardner's July 1975 column in Scientific American that discussed which convex polygons can tile the plane. In August 1975, Gardner revealed that Conway had discovered his criterion while trying to find an efficient way to determine which of the 108 heptominoes tile the plane. Examples In its simplest form, the criterion simply states that any hexagon with a pair of opposite sides that are parallel and congruent will tessellate the plane. In Gardner's article, this is called a type 1 hexagon. The same is true of parallelograms. But the translations that match the opposite edges of these tiles are the composition of two 180° rotations—about the midpoints of two adjacent edges in the case of a hexagonal parallelogon, and about the midpoint of an edge and one of its vertices in the case of a parallelogram. When a tile that satisfies the Conway criterion is rotated 180° about the midpoint of a centrosymmetric edge, it creates either a generalized parallelogram or a generalized hexagonal parallelogon (these have opposite edges congruent and parallel), so the doubled tile can tile the plane by translations. The translations are the composition of 180° rotations, just as in the case of the straight-edge hexagonal parallelogon or parallelogram. The Conway criterion is surprisingly powerful—especially when applied to polyforms. With the exception of four heptominoes, all polyominoes up through order 7 either satisfy the Conway criterion or two copies can form a patch which satisfies the criterion. 
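For polyominoes, the criterion can be checked mechanically by encoding the tile boundary as a word over the unit steps N, E, S, and W. The Python sketch below is an illustrative brute-force implementation, not Conway's own procedure; it relies on two standard observations: a boundary segment is centrosymmetric exactly when its step word is a palindrome, and the translated part from E to D, read along the boundary from D to E, is the A-to-B word reversed with every step direction flipped.

```python
from itertools import combinations_with_replacement

OPP = {"N": "S", "S": "N", "E": "W", "W": "E"}

def cyc(word, i, j):
    """Boundary sub-word from cut i to cut j, walking forward cyclically."""
    n = len(word)
    return "".join(word[(i + k) % n] for k in range((j - i) % n))

def centrosymmetric(w):
    # A unit-step path coincides with its own 180-degree rotation exactly
    # when its step word is a palindrome.
    return w == w[::-1]

def translate_match(ab, de):
    # Part E->D is a translate of A->B (T(A)=E, T(B)=D); traversed from D
    # to E along the boundary it reads as A->B reversed with steps flipped.
    return de == "".join(OPP[s] for s in reversed(ab))

def satisfies_conway(word):
    n = len(word)
    for a0 in range(n):
        for offs in combinations_with_replacement(range(n + 1), 5):
            b, c, d, e, f = (a0 + o for o in offs)
            if len({p % n for p in (a0, b, c, d, e, f)}) < 3:
                continue  # at least three of the six points must be distinct
            parts = [cyc(word, i, j) for i, j in
                     ((a0, b), (b, c), (c, d), (d, e), (e, f), (f, a0))]
            if sum(map(len, parts)) != n:
                continue  # the six cuts must partition the whole boundary
            ab, bc, cd, de, ef, fa = parts
            if translate_match(ab, de) and all(
                    map(centrosymmetric, (bc, cd, ef, fa))):
                return True
    return False

# The L-tromino boundary, counterclockwise from its lower-left corner.
print(satisfies_conway("EENNWSWS"))  # True
```

Run on the L-tromino it returns True: one valid decomposition takes AB = "E" and DE = "W", with the remaining four parts palindromic.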
References External links Conway’s Magical Pen An online app where you can create your own original Conway criterion tiles and their tessellations. Tessellation John Horton Conway
Conway criterion
[ "Physics", "Mathematics" ]
812
[ "Tessellation", "Planes (geometry)", "Euclidean plane geometry", "Symmetry" ]
42,807,835
https://en.wikipedia.org/wiki/Coexistence%20theory
Coexistence theory is a framework for understanding how competitor traits can maintain species diversity and stave off competitive exclusion even among similar species living in ecologically similar environments. Coexistence theory explains the stable coexistence of species as an interaction between two opposing forces: fitness differences between species, which should drive the best-adapted species to exclude others within a particular ecological niche, and stabilizing mechanisms, which maintain diversity via niche differentiation. For many species to be stabilized in a community, population growth must be negatively density-dependent, i.e. each participating species must tend to grow faster per capita as its population density declines. In such communities, any species that becomes rare will experience positive growth, pushing its population to recover and making local extinction unlikely. As the population of one species declines, individuals of that species tend to compete predominantly with individuals of other species. Thus, the tendency of a population to recover as it declines in density reflects reduced intraspecific (within-species) competition relative to interspecific (between-species) competition, the signature of niche differentiation (see Lotka-Volterra competition). Types of coexistence mechanisms Two qualitatively different processes can help species to coexist: a reduction in average fitness differences between species or an increase in niche differentiation between species. These two factors have been termed equalizing and stabilizing mechanisms, respectively. For species to coexist, any fitness differences that are not reduced by equalizing mechanisms must be overcome by stabilizing mechanisms. Equalizing mechanisms Equalizing mechanisms reduce fitness differences between species. As the name implies, these processes act in a way that pushes the competitive abilities of multiple species closer together. Equalizing mechanisms affect interspecific competition (the competition between individuals of different species). For example, when multiple species compete for the same resource, competitive ability is determined by the minimum level of resources a species needs to maintain itself (known as R*, or the equilibrium resource density). Thus, the species with the lowest R* is the best competitor and excludes all other species in the absence of any niche differentiation. Any factor that reduces the difference in R* between species (like increased harvest of the dominant competitor) is classified as an equalizing mechanism. Environmental variation (which is the focus of the intermediate disturbance hypothesis) can be considered an equalizing mechanism. Since the fitness of a given species is intrinsically tied to a specific environment, when that environment is disturbed (e.g. through storms, fires, or volcanic eruptions) some species may lose components of their competitive advantage which were useful in the previous state of the environment. Stabilizing mechanisms Stabilizing mechanisms promote coexistence by concentrating intraspecific competition relative to interspecific competition. In other words, these mechanisms "encourage" an individual to compete more with other individuals of its own species than with individuals of other species. Resource partitioning (a type of niche differentiation) is a stabilizing mechanism because interspecific competition is reduced when different species primarily compete for different resources. 
Similarly, if species are differently affected by environmental variation (e.g., soil type, rainfall timing, etc.), this can create a stabilizing mechanism (see the storage effect). Stabilizing mechanisms increase the low-density growth rate of all species. Chesson's categories of stabilizing mechanisms In 1994, Chesson proposed that all stabilizing mechanisms fall into four categories. These mechanisms are not mutually exclusive, and all four can operate in any environment at a given time. Variation-independent mechanisms (also called fluctuation-independent mechanisms) are any stabilizing mechanisms that function within a local place and time. Resource partitioning, predator partitioning, and frequency-dependent predation are three classic examples of variation-independent mechanisms. When a species is at very low density, its individuals gain an advantage because they are less constrained by competition across the landscape. For example, under frequency-dependent predation, a species is less likely to be consumed by predators when it is very rare. The storage effect occurs when species are affected differently by environmental variation in space or time. For example, coral reef fishes have different reproductive rates in different years, plants grow differently in different soil types, and desert annual plants germinate at different rates in different years. When a species is at low density, individuals gain an advantage because they experience less competition in the times or locations where they grow best. For example, if annual plants germinate in different years, then in a year that is good for germination, a species will compete predominantly with members of the same species. Thus, if a species becomes rare, its individuals will experience little competition when they germinate, whereas they would experience high competition if the species were abundant. For the storage effect to function, species must be able to "store" the benefits of a productive time period or area and use them to survive during less productive times or areas. This can occur, for example, if species have a long-lived adult stage, a seed bank or diapause stage, or if they are spread out over the environment. A fitness-density covariance occurs when species are spread out non-uniformly across the landscape. Most often, it occurs when species are found in different areas. For example, mosquitoes often lay eggs in different locations, and plants that partition habitat are often found predominantly where they grow best. Species can gain two possible advantages by becoming very rare. First, because they are physically separated from other species, they mainly compete with members of the same species (and thus experience less competition when they become very rare). Second, species are often more able to concentrate in favorable habitat as their densities decline. For example, if individuals are territorial, then members of an abundant species may not have access to ideal habitat; however, when that species becomes very rare, there may be enough ideal habitat for all of the few remaining individuals. The Janzen-Connell hypothesis is an excellent example of a stabilizing mechanism that operates (in part) through fitness-density covariance. Relative nonlinearity occurs when species benefit in different ways from variation in competitive factors. For example, two species might coexist if one can grow better when resources are rare and the other grows better when resources are abundant. 
Species will be able to coexist if the species which benefits from variation in resources tends to reduce that variation. For example, a species which can rapidly consume excess resources tends to quickly reduce the level of excess resources, favoring the other species, whereas a species which grows better when resources are rare is more likely to cause fluctuations in resource density favoring the other species. Quantifying stabilizing mechanisms A general way of measuring the effect of stabilizing mechanisms is by calculating the growth rate of species i in a community as

$\bar{r}_i = r_i \left( \bar{k}_i - \bar{k} + A \right)$

where: $\bar{r}_i$ is the long-term average growth rate of species i when at low density. Because species are limited from growing indefinitely, viable populations have an average long-term growth rate of zero. Therefore, species at low density can increase in abundance when their long-term average growth rate is positive. $r_i$ is a species-specific factor that reflects how quickly species i responds to a change in competition. For example, species with faster generation times may respond more quickly to a change in resource density than longer-lived species. In an extreme scenario, if ants and elephants were to compete for the same resources, elephant population sizes would change much more slowly in response to changes in resource density than would ant populations. $\bar{k}_i - \bar{k}$ is the difference between the fitness of species i and the average fitness of the community excluding species i. In the absence of any stabilizing mechanisms, species i will only have a positive growth rate if its fitness is above that of its average competitor, i.e. if $\bar{k}_i - \bar{k} > 0$. $A$ measures the effect of all stabilizing mechanisms acting within this community. Example calculation: species competing for resources In 2008, Chesson and Kuang showed how to calculate fitness differences and stabilizing mechanisms when species compete for shared resources and are attacked by shared predators. Each species j captures resource type l at a species-specific rate, $c_{jl}$. Each unit of resource captured contributes to species growth by value $v_l$. Each consumer requires resources for metabolic maintenance at rate $\mu_i$. In conjunction, consumer growth is decreased by attack from predators: each predator species m attacks species j at rate $a_{jm}$. Given predation and resource capture, the density of species i, $N_i$, grows at rate

$\frac{1}{N_i}\frac{dN_i}{dt} = \sum_l c_{il} v_l R_l - \sum_m a_{im} P_m - \mu_i$

where l sums over resource types and m sums over all predator species. Each resource type exhibits logistic growth with intrinsic rate of increase $r_{Rl}$ and carrying capacity $K_{Rl} = 1/\alpha_{Rl}$, and is depleted by consumption, such that the per-capita growth rate of resource l is

$\frac{1}{R_l}\frac{dR_l}{dt} = r_{Rl}\left(1 - \alpha_{Rl} R_l\right) - \sum_j c_{jl} N_j$

Similarly, each predator species m exhibits logistic growth in the absence of the prey of interest, with intrinsic growth rate $r_{Pm}$ and carrying capacity $K_{Pm} = 1/\alpha_{Pm}$. The growth rate of a predator species is also increased by consuming prey, where again the attack rate of predator species m on prey j is $a_{jm}$ and each unit of prey has a value to predator growth rate of $w$. Given these two sources of predator growth, the density of predator m, $P_m$, has per-capita growth rate

$\frac{1}{P_m}\frac{dP_m}{dt} = r_{Pm}\left(1 - \alpha_{Pm} P_m\right) + w \sum_j a_{jm} N_j$

where the summation term is the contribution to growth from consumption over all focal species j. This system of equations describes a model of trophic interactions between three sets of species: focal species, their resources, and their predators. 
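As an illustration, the consumer-resource-predator dynamics above can be integrated numerically. The following Python sketch uses arbitrary, made-up parameter values (not taken from Chesson and Kuang) for two focal species sharing two resources and one predator; the partial overlap in the capture rates c plays the role of niche differentiation.

```python
# Minimal numerical integration of the focal-species / resource / predator
# model described above. All parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

c = np.array([[1.0, 0.4],    # c[j, l]: capture rate of species j on resource l
              [0.4, 1.0]])   # partial overlap -> partial niche differentiation
v = np.array([1.0, 1.0])     # v[l]: growth value of one unit of resource l
mu = np.array([0.3, 0.3])    # mu[j]: maintenance rate of species j
a = np.array([0.2, 0.2])     # a[j]: attack rate of the single predator on j
w = 0.5                      # growth value of one unit of prey to the predator
rR, alphaR = np.array([1.0, 1.0]), np.array([1.0, 1.0])  # resources: K = 1/alpha
rP, alphaP = 0.5, 1.0                                    # predator logistic terms

def rhs(t, y):
    N, R, P = y[:2], y[2:4], y[4]
    dN = N * (c @ (v * R) - a * P - mu)             # focal species
    dR = R * (rR * (1 - alphaR * R) - c.T @ N)      # resources
    dP = P * (rP * (1 - alphaP * P) + w * (a @ N))  # predator
    return np.concatenate([dN, dR, [dP]])

sol = solve_ivp(rhs, (0, 400), [0.05, 0.05, 0.5, 0.5, 0.1], rtol=1e-8)
print(sol.y[:2, -1])  # long-run densities of the two focal species
```

Making the off-diagonal capture rates equal to the diagonal ones removes the niche difference, and coexistence is then no longer stabilized: any fitness edge lets one species exclude the other, consistent with the coexistence criterion discussed below.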
Given this model, the average fitness of a species j can be expressed in terms of its rates of resource capture and metabolic maintenance, discounted by its losses to predation, scaled by its sensitivity to competition and predation. The average fitness of a species thus takes into account growth based on resource capture and predation as well as how much resource and predator densities change from interactions with the focal species. The amount of niche overlap, $\rho$, between two competitors i and j represents the degree to which resource consumption and predator attack are linearly related between the two competing species. This model's conditions for coexistence can be directly related to the general coexistence criterion: intraspecific competition, $\alpha_{jj}$, must be greater than interspecific competition, $\alpha_{ij}$. Direct expressions for the intraspecific and interspecific competition coefficients can be derived from the interactions through shared predators and resources. Thus, when intraspecific competition is greater than interspecific competition, $\alpha_{jj} > \alpha_{ij}$, for two species this leads to the coexistence criterion

$\rho < \frac{\kappa_i}{\kappa_j} < \frac{1}{\rho}$

where $\kappa_i / \kappa_j$ is the ratio of the two species' average fitnesses. Notice that, in the absence of any niche differences (i.e. $\rho = 1$), species cannot coexist. Empirical evidence A 2012 study reviewed different approaches which tested coexistence theory, and identified three main ways to separate the contributions of stabilizing and equalizing mechanisms within a community. These are: experimental manipulations, which involve determining the effect of relative fitness or stabilizing mechanisms by manipulating resources or competitive advantages; trait-phylogeny-environment relationships, in which the phylogeny of members of a set of communities can be tested for evidence of trait clustering, which would suggest that certain traits are important (and perhaps necessary) to thrive in that environment, or trait overdispersion, which would suggest a high ability of species to exclude close relatives (such tests have been widely used, although they have also been criticized as simplistic and flawed); and demographic analyses, which can be used to recognize frequency- or density-dependent processes simply by measuring the number and per-capita growth rates of species in natural communities over time. If such processes are operating, the per-capita growth rate will vary with the number of individuals in the species comprising the community. A 2010 review argued that an invasion analysis should be used as the critical test of coexistence. In an invasion analysis, one species (termed the "invader") is removed from the community and then reintroduced at a very low density. If the invader shows positive population growth, then it cannot be excluded from the community. If every species has a positive growth rate as an invader, then those species can stably coexist. An invasion analysis could be performed using experimental manipulation or by parameterizing a mathematical model. The authors argued that, in the absence of a full-scale invasion analysis, studies could show some evidence for coexistence by showing that a trade-off produces negative density dependence at the population level. The authors reviewed 323 papers (from 1972 to May 2009) and claimed that only 10 of them met the above criteria (7 performing an invasion analysis, and 3 showing some negative density dependence). However, an important caveat is that invasion analysis may not always be sufficient for identifying stable coexistence. For example, priority effects or Allee effects may prevent species from successfully invading a community from low density even if they could persist stably at a higher density. 
Conversely, higher-order interactions in communities with many species can lead to complex dynamics following an initially successful invasion, potentially preventing the invader from persisting stably in the long term. For example, an invader that can only persist when a particular resident species is present at high density could alter community structure following invasion such that that resident species' density declines or it goes locally extinct, thereby preventing the invader from successfully establishing in the long term. Neutral theory and coexistence theory The 2001 neutral theory of Stephen P. Hubbell attempts to model biodiversity through a migration-speciation-extinction balance rather than through selection. It assumes that all members within a guild are inherently the same and that changes in population density are a result of random births and deaths. Particular species are lost stochastically through a random-walk process, but species richness is maintained via speciation or external migration. Neutral theory can be seen as a particular case of coexistence theory: it represents an environment where stabilizing mechanisms are absent (i.e. $A = 0$) and there are no differences in average fitness (i.e. $\bar{k}_i = \bar{k}$ for all species). It has been hotly debated how close real communities are to neutrality. Few studies have attempted to measure fitness differences and stabilizing mechanisms in plant communities, for example in 2009 and in 2015. These communities appear to be far from neutral, and in some cases stabilizing effects greatly outweigh fitness differences. Cultural coexistence theory Cultural Coexistence Theory (CCT), also called Social-ecological Coexistence Theory, expands on coexistence theory to explain how groups of people with shared interests in natural resources (e.g., a fishery) can come to coexist sustainably. Cultural Coexistence Theory draws on work by anthropologists such as Frederik Barth and John Bennett, both of whom studied the interactions among culture groups on shared landscapes. In addition to the core ecological concepts described above, which CCT summarizes as limited similarity, limited competition, and resilience, CCT argues that the following features are essential for cultural coexistence: Adaptability describes the ability of people to respond to change or surprise. It is essential to CCT because it helps capture the importance of human agency. Pluralism describes a condition in which people value cultural diversity and recognize the fundamental rights of people not like them to live in the same places and access shared resources. Equity, as used in CCT, describes whether social institutions exist that ensure that people's basic human rights, including the ability to meet basic needs, are protected, and whether people are protected from being marginalized in society. Cultural Coexistence Theory fits under the broader areas of sustainability science, common-pool resources theory, and conflict theory. References Ecology Ecological theories Community ecology Theoretical ecology
Coexistence theory
[ "Biology" ]
3,209
[ "Ecology" ]
42,809,296
https://en.wikipedia.org/wiki/Gompertz%20constant
In mathematics, the Gompertz constant or Euler–Gompertz constant, denoted by $\delta$, appears in integral evaluations and as a value of special functions. It is named after Benjamin Gompertz. It can be defined via the exponential integral as:

$\delta = \int_0^\infty \frac{e^{-x}}{1+x}\,dx = e\,E_1(1)$

The numerical value of $\delta$ is about $0.596347362323194\ldots$ When Euler studied divergent infinite series, he encountered $\delta$ via, for example, the above integral representation. Le Lionnais called $\delta$ the Gompertz constant because of its role in survival analysis. In 2009 Alexander Aptekarev proved that at least one of the Euler–Mascheroni constant $\gamma$ and the Euler–Gompertz constant $\delta$ is irrational. This result was improved in 2012 by Tanguy Rivoal, who proved that at least one of them is transcendental. Identities involving the Gompertz constant The most frequent appearance of $\delta$ is in the following integrals:

$\delta = \int_0^\infty e^{-x}\ln(1+x)\,dx = \int_0^1 \frac{dx}{1-\ln x}$

which follow from the definition of $\delta$ by integration by parts and a variable substitution, respectively. Applying the Taylor expansion of the exponential integral, we have the series representation

$\delta = e\left(-\gamma + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k\cdot k!}\right)$

Gompertz's constant is also connected to the Gregory coefficients via a 2013 formula of I. Mező. The Gompertz constant also happens to be the regularized (Borel) value of Euler's divergent series

$\sum_{k=0}^{\infty} (-1)^k\, k! = 0! - 1! + 2! - 3! + \cdots$

It is also related to several polynomial continued fractions. Notes External links Wolfram MathWorld OEIS entry Analysis Mathematical constants
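The representations above are easy to check numerically; the snippet below (a quick verification sketch, not part of the article) compares the defining integral with the series form using SciPy.

```python
# Numerical check of two representations of the Euler-Gompertz constant.
import math
from scipy.integrate import quad

# Defining integral: delta = int_0^inf e^(-x) / (1 + x) dx
delta_integral, _ = quad(lambda x: math.exp(-x) / (1 + x), 0, math.inf)

# Series form: delta = e * (-gamma + sum_{k>=1} (-1)^(k+1) / (k * k!))
EULER_GAMMA = 0.5772156649015329
s = sum((-1) ** (k + 1) / (k * math.factorial(k)) for k in range(1, 40))
delta_series = math.e * (-EULER_GAMMA + s)

print(delta_integral)  # ~0.596347362323194
print(delta_series)    # agrees to machine precision
```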
Gompertz constant
[ "Mathematics" ]
282
[ "Mathematical constants", "Mathematical objects", "Numbers", "nan" ]
42,809,646
https://en.wikipedia.org/wiki/Radiophysical%20Research%20Institute
The Radiophysical Research Institute (NIRFI), based in Nizhny Novgorod, Russia, is a research institute that conducts basic and applied research in the fields of radiophysics, radio astronomy, cosmology, and radio engineering. It is also known for its work in solar physics and Sun-Earth physics, as well as the related geophysics. It also does outreach for the Russian education system. It was formed in 1956 as the Radiophysical Research Institute of the (Soviet) Ministry of Education and Science. Projects Its projects include the NIRFI Sura Ionospheric Heating Facility, the Zimenkovsky radio-astronomical observatory with its RT-14 radio telescope, and the NIRFI Staraya Pustin laboratories with two RT-7 radio telescopes. References Astrophysics Radio astronomy Astronomy in Russia History of science and technology in Russia Physics research institutes Research institutes in Russia Research institutes in the Soviet Union 1956 establishments in the Soviet Union Astronomy in the Soviet Union Research institutes established in 1956
Radiophysical Research Institute
[ "Physics", "Astronomy" ]
193
[ "Radio astronomy", "Astronomical sub-disciplines", "Astrophysics" ]
42,810,068
https://en.wikipedia.org/wiki/Organismic%20computing
Organismic computing is a form of engineered human computation that employs technology to enable "shared sensing, collective reasoning, and coordinated action" within human groups toward goal-directed behavior. This biomimetic approach to augmenting group efficacy seeks to improve synergy by allowing a group of individuals to function as a single intelligent superorganism. Rationale For many tasks, increasing the size of a group leads to diminishing returns: each new person contributes less to overall group performance. This suggests that the benefit-cost ratio associated with adding a new person decreases as the group gets larger. The organismic approach to augmenting group efficacy seeks to leverage the quadratic growth in the number of possible relationships among group members, as described by Metcalfe's law. By increasing the number of relationships realized, and by sufficiently increasing the utility of those relationships, each new group member would add more value to the group than previous members. Approach The organismic model of group efficacy assumes that enabling real-time distributed sensing, reasoning, and acting, using the right augmentation methods, will increase group efficacy via synergistic effects that result from more and improved connections among individuals in a group. Indeed, organismic computing research is focused primarily on the pursuit of augmentation methods that are optimal for different applications of group behavior. Additionally, the application space may dictate a greater emphasis on one of the following members of the "synergistic triad". Shared sensing Shared sensing is the notion that individual or aggregated sensory experiences are shared in real time across members of a group, toward greater awareness of information relevant to an individual's goals. Collective reasoning Collective reasoning includes a broad space of methods that enable the creation and dissemination of information via distributed cognition. Coordinated action Coordinated action involves methods that enable effective, synchronous group behaviors. Challenges A key challenge in developing effective organismic computing methods is the problem of information overload. Because humans are limited-capacity systems, with both attentional and processing bottlenecks, the availability or imposition of additional information may create interference that reduces goal-related performance. Evidence A 2013 pilot study examined performance in a hide-and-seek task within a simulated augmented reality environment. Synergistic effects seemed to increase with group size and level of augmentation. A 2010 collective intelligence study of group problem-solving performance revealed strong evidence that "group IQ" correlated strongly with the social intelligence of each group member and only weakly with individual IQ, suggesting that interaction dynamics among group members are a better predictor of group problem-solving performance than individual problem-solving abilities. Applications Organismic computing, due to its emphasis on agency, is best suited to interaction in the physical, simulated, or augmented world. Thus, potential applications include crisis relief, first response, and counter-terrorism, as well as problem-solving in artificial environments by recasting abstract problems using real-world metaphors. See also Douglas Engelbart Global brain References Human-based computation
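To make the Metcalfe's-law rationale above concrete (simple combinatorics, not a claim from the article): the number of possible pairwise relationships in a group of $n$ members is

$\binom{n}{2} = \frac{n(n-1)}{2}$

so a member joining a group of 10 creates 10 new potential links, while a member joining a group of 100 creates 100. If augmentation realizes enough of these links at sufficient utility, each new member contributes more potential connectivity than the one before, which is the quadratic growth the organismic approach seeks to exploit.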
Organismic computing
[ "Technology" ]
612
[ "Information systems", "Human-based computation" ]
42,810,092
https://en.wikipedia.org/wiki/Dog%20Land%20%28app%29
Dog Land is an app, currently available only for iPhone, that allows users to upload photos, add filters and effects, and share the results with other users. Users may also directly message other users in their area or around the world. The Dog Land app also serves as a resource for finding dog-friendly places such as dog parks, cafes, shops, services, and hotels. The app is one of the largest mobile communities for dog owners. References External links "5 Essential Apps for Dog Owners" Dog Living. Retrieved 2024-05-27. "USC, UCLA Grads Create New App Designed for Dog Lovers — Lu Parker Reports". KTLA. 2014-01-29. Retrieved 2024-05-27. IOS software
Dog Land (app)
[ "Technology" ]
146
[ "Mobile software stubs", "Mobile technology stubs" ]
42,810,674
https://en.wikipedia.org/wiki/In%20vitro%20to%20in%20vivo%20extrapolation
In vitro to in vivo extrapolation (IVIVE) refers to the qualitative or quantitative transposition of experimental results or observations made in vitro to predict phenomena in vivo, in biological organisms. The problem of transposing in vitro results is particularly acute in areas such as toxicology, where animal experiments are being phased out and increasingly replaced by alternative tests. Results obtained from in vitro experiments often cannot be directly applied to predict the biological responses of organisms to chemical exposure in vivo. It is therefore important to build consistent and reliable in vitro to in vivo extrapolation methods. Two solutions are now commonly accepted: (1) increasing the complexity of in vitro systems so that multiple cells can interact with each other, in order to recapitulate the cell-cell interactions present in tissues (as in "human on chip" systems); and (2) using mathematical modeling to numerically simulate the behavior of the complex system, whereby in vitro data provide the parameter values for developing a model. The two approaches can be applied simultaneously, allowing in vitro systems to provide adequate data for the development of mathematical models. To comply with the push for the development of alternative testing methods, increasingly sophisticated in vitro experiments are now collecting numerous, complex, and challenging data that can be integrated into mathematical models. Pharmacology IVIVE in pharmacology can be used to assess pharmacokinetics (PK) or pharmacodynamics (PD). Since a biological perturbation depends on the concentration of the toxicant or candidate drug (parent molecule or metabolites) at the target site and on the exposure duration, in vivo tissue and organ effects can be either completely different from or similar to those observed in vitro. Therefore, the extrapolation of adverse effects observed in vitro is incorporated into a quantitative in vivo PK model. It is generally accepted that physiologically based PK (PBPK) models, describing the absorption, distribution, metabolism, and excretion of a given chemical, are central to in vitro to in vivo extrapolations. In the case of early effects, or of effects without inter-cellular communications, it is assumed that the same cellular exposure concentration causes the same effects, both qualitatively and quantitatively, in vitro and in vivo. In these conditions, it is enough to (1) develop a simple pharmacodynamic model of the dose–response relationship observed in vitro and (2) transpose it without changes to predict in vivo effects. However, cells in culture do not perfectly mimic cells in a complete organism. To solve that extrapolation problem, statistical models enriched with mechanistic information are needed, or mechanistic systems-biology models of the cell response can be used. Such models are characterized by a hierarchical structure spanning molecular pathways, whole-cell responses, cell-to-cell communications, tissue responses, inter-tissue communications, and organ function. References Quignot N., Hamon J., Bois F., 2014, Extrapolating in vitro results to predict human toxicity, in In Vitro Toxicology Systems, Bal-Price A., Jennings P., Eds, Methods in Pharmacology and Toxicology series, Springer Science, New York, USA, p. 531-550 Latin biological phrases Alternatives to animal testing
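As a concrete, minimal sketch of the quantitative route described above, the snippet below scales a hypothetical in vitro intrinsic clearance measured in hepatocytes up to a whole-organ hepatic clearance using the classic well-stirred liver model, one standard IVIVE step; all parameter values (assay result, hepatocellularity, liver weight, blood flow, unbound fraction) are illustrative assumptions, not measured data.

```python
# Well-stirred liver model: scale an in vitro intrinsic clearance
# (hepatocyte assay) to organ-level hepatic clearance (a common IVIVE step).

CLINT_INVITRO = 20.0   # uL/min per 10^6 hepatocytes (hypothetical assay value)
HEP_PER_G = 120.0      # 10^6 hepatocytes per g liver (assumed scaling factor)
LIVER_G = 1800.0       # g liver in a reference adult (assumed)
Q_H = 1450.0           # hepatic blood flow, mL/min (assumed)
FU_B = 0.1             # unbound fraction in blood (assumed)

# Scale the assay value to a whole-liver intrinsic clearance, in mL/min.
clint_invivo = CLINT_INVITRO * HEP_PER_G * LIVER_G / 1000.0

# Well-stirred model: CL_h = Q_h * fu * CLint / (Q_h + fu * CLint).
cl_h = Q_H * FU_B * clint_invivo / (Q_H + FU_B * clint_invivo)
print(f"Predicted hepatic clearance: {cl_h:.0f} mL/min")  # ~333 mL/min
```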
In vitro to in vivo extrapolation
[ "Chemistry", "Biology" ]
669
[ "Latin biological phrases", "Animal testing", "Alternatives to animal testing" ]
62,813,378
https://en.wikipedia.org/wiki/Dibenzyl%20sulfide
Dibenzyl sulfide is a symmetrical thioether. It contains two benzyl (C6H5CH2-) groups linked by a sulfide bridge. It is a colorless or white solid that is soluble in nonpolar solvents. Crystallography The crystal structure of the solid belongs to the orthorhombic system with space group Pbcn (number 60). The unit cell dimensions are a = 13.991 Å, b = 11.3985 Å, c = 7.2081 Å. The molecules in the gas phase take the same form as in the solid, with C2 symmetry. Production Dibenzyl sulfide is commercially manufactured by treating potassium sulfide with benzyl chloride, followed by distillation of the product. It is also obtainable by desulfurization of dibenzyl disulfide with phosphine reagents. References Thioethers Aromatic compounds
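The commercial production route described above corresponds to the following balanced equation, reconstructed from the reagents named in the text:

```latex
\mathrm{K_2S \;+\; 2\,C_6H_5CH_2Cl \;\longrightarrow\; (C_6H_5CH_2)_2S \;+\; 2\,KCl}
```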
Dibenzyl sulfide
[ "Chemistry" ]
187
[ "Organic compounds", "Aromatic compounds" ]
62,813,442
https://en.wikipedia.org/wiki/Masahide%20Sasaki
Masahide Sasaki was a Japanese chemist. He developed the first fully automated laboratory, and he popularized this innovation internationally. Personal life Sasaki was born in Yamaguchi Prefecture, Japan, on August 27, 1933. He and his wife, Tokyo, had three children: Mika, Kyoko, and Masanori. He died of cancer on September 23, 2005. Career As noted in his obituary in Clinical Chemistry, Sasaki "graduated from Yamaguchi Medical School in 1961. During 1965, he served as an internist for the Hiroshima Atomic Bomb Casualty Committee. Several years later, in 1967, he was appointed the Chief of the Clinical Chemistry Department at Kawasaki Hospital. In 1970, he did a fellowship in the United States at the Michael Reese Hospital in Chicago, IL, which gave him exposure to the US medical system. Two years later he became an Assistant Professor of Internal Medicine at Kawasaki Medical School and rose quickly in the academic ranks to become a Full Professor of Laboratory Diagnosis in 1976 and, ultimately, Vice President of Kawasaki Paramedical College." He was appointed Professor and Director of the Department of the Clinical Laboratory at Kochi Medical School, Kochi, Japan, in 1981. There, he developed his automation system. Sasaki published many papers between 1981 and 1999, the most notable of which was a monograph on laboratory automation sponsored by the A&T Corporation. Automation development Sasaki provided the first and most prominent example of a totally automated laboratory. References Japanese chemists 20th-century Japanese chemists 1933 births 2005 deaths Place of death missing People from Yamaguchi Prefecture 20th-century Japanese physicians Clinical chemists
Masahide Sasaki
[ "Chemistry" ]
331
[ "Biochemists", "Clinical chemists" ]
62,813,878
https://en.wikipedia.org/wiki/Puccinia%20smyrnii
Puccinia smyrnii, or alexanders rust, is a fungus species and plant pathogen that causes rust on alexanders (Smyrnium olusatrum). It was originally found in Sicily and occurs in Europe and parts of North Africa. References External links Aphotofungi smyrnii Fungal plant pathogens and diseases Fungi described in 1894 Fungi of Africa Fungi of Europe Fungus species
Puccinia smyrnii
[ "Biology" ]
85
[ "Fungi", "Fungus species" ]
62,814,350
https://en.wikipedia.org/wiki/Muwaqqit
In the history of Islam, a muwaqqit (more rarely mīqātī) was an astronomer tasked with the timekeeping and the regulation of prayer times in an Islamic institution like a mosque or a madrasa. Unlike the muezzin (reciter of the call to prayer), who was usually selected for his piety and voice, a muwaqqit was selected for his knowledge and skill in astronomy. Not all mosques had a muwaqqit. The office was first recorded in the late 13th century in the Mosque of Amr ibn al-As in the Mamluk Sultanate of Cairo and then spread to various parts of the Muslim world. Even then, many major mosques relied only on muezzins to determine prayer times using traditional methods, such as observing shadow lengths and twilight phenomena. The lack of historical sources and research makes it difficult to ascertain the specific functions and roles of the muwaqqit. There is uncertainty among historians of science whether the muwaqqit was a specialised office whose holder dealt exclusively with astronomical matters, or if it was part of a broader role of a teacher (mudarris) who also worked and taught in other fields. During its peak in the fourteenth and the fifteenth centuries, prominent scientists held the post of muwaqqit. For example, ibn al-Shatir (1304–1375) and Shams al-Din al-Khalili (1320–1380) formed a team of muwaqqits in the Umayyad Mosque of Damascus. Syria and Egypt were the major centres of muwaqqit activity in these centuries, while the office spread to Palestine, Hejaz, Tunis, and Yemen. The office continued to be recorded up to the nineteenth century, although muwaqqits produced fewer treatises and instruments than in earlier times. Today, mosques use prayer timetables produced by religious or scientific agencies, or clocks programmed for this purpose. These allow for the exact determination of prayer times without the specialised skills of a muwaqqit. Background Muslims observe salah, the daily ritual prayer, at prescribed times based on the hadith or the tradition of Muhammad (c. 570–632). Each day, there are five obligatory prayers with specific ranges of permitted times determined by daily astronomical phenomena. For example, the time for the maghrib prayer starts after sunset and ends when the red twilight has disappeared. Because the start and end times for prayers are related to the solar diurnal motion, they vary throughout the year and depend on the local latitude and longitude when expressed in local time. The term mīqāt in the sense of "time of a prayer" is attested to in the Quran and hadith, although the Quran does not explicitly define those times. The term ʻilm al-mīqāt refers to the study of determining prayer times based on the position of the Sun and the stars in the sky, and has been recorded since the early days of Islam. Before the muwaqqits appeared, the muezzin had been the office most associated with the regulation of the prayer times. The post can be traced back to Muhammad's lifetime, and its role and history are well documented. The main duty of a muezzin is to recite the adhan to announce the beginning of a prayer time. Before the use of a loudspeaker, this was usually done from the top of a minaret. The minaret provided the muezzin with a vantage point to observe phenomena such as sunset, which marks the start time of maghrib. Duties The main duty of the muwaqqit was timekeeping and the regulation of daily prayer times in mosques, madrasas, or other institutions using astronomy and other exact sciences.
At its zenith in the fourteenth and fifteenth centuries, major mosques often employed prominent astronomers as muwaqqits. In addition to regulating prayer times, they wrote treatises on astronomy, especially on timekeeping and the use of related instruments such as quadrants and sundials. They were also responsible for other religious matters related to their astronomical expertise, such as the keeping of the Islamic calendar and the determination of the qibla (the direction to Mecca used for prayers). David A. King, a historian of astronomy, presents the muwaqqit as a specialised profession, a mosque astronomer "in the service of Islam" who produced a large body of treatises and instruments, even though their work did not necessarily influence the practices of the muezzins and the fuqahā, who largely used traditional methods. The knowledge of a muwaqqit was passed to his students, who specifically intended to be the next generation of the profession. King's description is based on his research into the primary works of the muwaqqits and contemporary Islamic legal texts. On the other hand, the historian of science Sonja Brentjes proposes that the muwaqqit is to be seen as "only one facet of another persona, mostly that of a mudarris (teacher)". The astronomical keeping of prayer times, as well as the construction and maintenance of a mosque's astronomical instruments, were just a normal part of academic activities in Muslim cities of the time. Someone titled muwaqqit was also likely to be highly learned in other disciplines, including fiqh and philosophy. The discipline of ʻilm al-mīqāt was widely learned, and not only by someone who aspired to be a muwaqqit; a muezzin could well have had an identical education to a muwaqqit. Brentjes' assessment is based on secondary biographies of the muwaqqits during the Mamluk era, including the works of al-Sakhawi, a prominent 15th-century author and hadith scholar. Both King and Brentjes say that it is difficult to ascertain the role of the muwaqqits due to the lack of research and historical sources on the topic. Salary Little information is available about the salary of the muwaqqits. King could only provide several figures given in waqfiyyas, or financial documents, of mosques in fifteenth- and sixteenth-century Cairo. The Mosque of the Emir of Qanim paid a muwaqqit 200 dirhams (silver coins) per month, compared to 900 for an imam, 500 for a khatib, 200 for a muezzin, and 300 for a servant mentioned in the same document. Other figures King found were cumulative: 1,400 dirhams divided among about 16 muezzins and muwaqqits, and 600 dirhams divided among an unknown number of muwaqqits. According to Brentjes, these remunerations were relatively low, leading a muwaqqit to take up other jobs at the same time, including teaching. The data presented by King are limited to one city and do not cover mosques with prominent muwaqqits, such as the Umayyad Mosque in Damascus. Relations with the muezzin The responsibilities of a muwaqqit were related to those of the muezzins, who announced the start time of a prayer by reciting the adhan. Unlike the office of the muwaqqit, which required special knowledge in astronomy, muezzins were typically chosen for their piety and beautiful voices. Mosques did not always have muwaqqits. Even major mosques often relied on a muezzin's traditional knowledge to determine prayer times, such as observing shadow lengths for daytime prayers, twilight phenomena for night prayers, and lunar stations for general timekeeping at night.
Brentjes speculates that the muwaqqit might have evolved from a specialised muezzin, and that there might not have been a clear delineation between the two offices. Some celebrated muwaqqits, including Shams al-Din al-Khalili and ibn al-Shatir, were known to have once been muezzins, and many individuals held both offices simultaneously. History Unlike the muezzin, whose history and origin have been well studied, the origin of the muwaqqit is unclear. The earliest known record shows that the office already existed in the thirteenth-century Mamluk Sultanate. According to King, the first muwaqqit known by name was Abu al-Hasan Ali ibn Abd al-Malik ibn Sim'un (died 685 AH, or 1286/1287 CE), a muwaqqit in the Mosque of Amr ibn al-As in Fustat, Egypt, for 30 years. His son Muhammad al-Wajih (died 701 AH, or 1301/1302 CE) and grandson Muhammad al-Majd also served as muwaqqits there. At the same time, similar offices likely existed in Al-Andalus and the Maghreb under different names. In Al-Andalus, in the late 13th century, the astronomers Ahmad and Husayn, father and son from the Ibn Baso family, computed prayer times for the Great Mosque of Granada. Manuscripts refer to them with various titles, including al-muadhdhin al-mubarak, al-imam al-mu'addil al-mubarak, al-shaykh al-mu'addil, amin al-awqat, and muwaqqit. The University of al-Qarawiyyin in Fez employed the astronomer Muhammad al-Sanhaji in a similar position with the title al-mu'addil. A manual of professions from around 1300 by the Egyptian author Ibn al-Ukhuwwa mentioned the post of the muezzin and its duties and requirements but did not mention the muwaqqit. In the 14th and 15th centuries If the office of the muwaqqit indeed originated in Egypt, it soon spread to Syria and Palestine. The Ibrahimi Mosque in Hebron employed the muwaqqit Ibrahim ibn Ahmad. In 1306, he made a copy of an astronomical work by Nasir al-Din ibn Sim'un (died 1337), a member of the same family as the early muwaqqits in Fustat. Another muwaqqit, Ibn al-Sarraj, served in Aleppo, where he designed and created various astronomical instruments and wrote treatises about their construction and use. Still in Syria, Ibn al-Shatir (1304–1375) led a team of muwaqqits in the Umayyad Mosque, Damascus. He wrote two zijes (astronomical tables) and made astrolabes and sundials. Apart from timekeeping, he also worked on planetary theories and wrote a treatise on the movements of the Sun, the Moon, and the planets. He moved away from Ptolemaic geocentrism and produced models which were still geocentric but were mathematically identical to those later proposed by Copernicus (1473–1543). According to King, Ibn al-Shatir's works represent the "culmination" of planetary astronomy in the Islamic world. Ibn al-Shatir's colleague Shams al-Din al-Khalili (1320–1380), a muwaqqit of the Yalbugha Mosque before joining the Umayyad Mosque, wrote prayer timetables for Damascus and tables for finding the direction to Mecca from any locality. The activities of the muwaqqits were not universally approved of by Islamic jurists. The qadi (judge) of Damascus Taj al-Din al-Subki denounced the muwaqqits, whose ranks according to him were filled with astrologers (munajjimun) and magicians (kuhhan). Astrological topics were inevitably read by astronomers of the time because they were often included in astronomy textbooks, and a few muwaqqits were recorded to have studied astrology.
By the end of the fourteenth century, the activity of the muwaqqits had been recorded in Egypt, Syria, Palestine, the Hejaz (including Mecca and Medina), Tunis, and Yemen. In the following century, the practice spread to Asia Minor. According to King, there is no evidence of muwaqqit activity in the more easterly parts of the Islamic world, including Iraq, Iran, India and Central Asia. According to Brentjes, it is possible that the discipline of miqat spread eastwards as part of an exchange prompted by trade, pilgrimage, and travel for knowledge, even though no written evidence has been found. In the fifteenth century, the center of muwaqqit activities shifted to Egypt, especially the al-Azhar Mosque in Cairo, but their scientific output was reduced. Among the well-known muwaqqits, Sibt al-Maridini (1423–1506) of Al-Azhar wrote treatises on timekeeping. He used simpler astronomical methods, which became popular in Egypt and Syria. King speculates that he might have "unwittingly" contributed to the decline of astronomy in the Middle East because his works outcompeted more advanced texts. Other muwaqqits recorded in various mosques in fifteenth-century Cairo include al-Kawm al-Rishi, 'Izz al-Din al-Wafa'i, al-Karadisi, and Abd al-Qadir al-Ajmawi. In addition, the Egyptian astronomers Ibn al-Majdi and Ibn Abi al-Fath al-Sufi wrote extensively on religious timekeeping using more advanced astronomy than Sibt al-Maridini, but they were not formally attached to any mosque. After the fifteenth century ʿIlm al-miqat and the activity of the muwaqqits continued into the time of the Ottoman Empire (which conquered the Mamluks in 1517), although they now produced fewer scientific works compared to the zenith of the 14th and 15th centuries. Their work was overseen by the müneccimbaşı (chief imperial astrologer). The Turkish historian of science Aydın Sayılı noted that many mosques in Istanbul have buildings or rooms called muvakkithane ("lodge of the muwaqqit"). Ottoman sultans and other notables built and patronized them as acts of piety and philanthropy. Such constructions became more common over time, peaking during the late eighteenth and the nineteenth century. Ottoman astronomers produced prayer timetables in locations previously without them, and in the eighteenth century the architect Salih Efendi wrote timekeeping tables which were popular among the muwaqqits of the imperial capital. As the use of mechanical clocks became common during the eighteenth century, the muwaqqits included them as part of their standard tools, and many became experts at making and repairing clocks. Ottoman muwaqqits also adapted existing tables to the Ottoman convention of defining 12:00 o'clock at sunset, which required varying amounts of time shift each day. Setting one's personal watch according to the clocks at muvakkithanes was a common practice after the spread of personal timepieces in the late eighteenth century. Activities of the muwaqqits were also recorded in Syria (especially the Umayyad Mosque) and Egypt up to the nineteenth century. Calculating prayer times today From the nineteenth century, various religious agencies, or scientific agencies approved by religious authorities, began to produce annual prayer timetables. The times of prayer are included in calendars, annual almanacs, and newspapers. During the sacred month of Ramadan, tables called imsakiyya, containing the times of prayer as well as that of the imsak (the time to stop eating for the fast) for the whole month, are printed and distributed.
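To illustrate the kind of astronomical calculation such timetables and clocks perform, here is a deliberately simplified sketch of a sunset (maghrib) estimate from latitude and day of year; it ignores refraction, the equation of time, and longitude and time-zone corrections, so it shows the underlying astronomy rather than any agency's actual method.

```python
import math

def sunset_local_solar_time(latitude_deg: float, day_of_year: int) -> float:
    """Very rough local solar time of sunset, in hours.

    Uses the approximate solar declination
    delta = -23.44 deg * cos(360/365 * (N + 10))
    and the sunset hour-angle relation cos(omega) = -tan(phi) * tan(delta).
    Refraction, the equation of time, and longitude are ignored.
    """
    delta = math.radians(-23.44) * math.cos(2 * math.pi / 365 * (day_of_year + 10))
    phi = math.radians(latitude_deg)
    cos_omega = -math.tan(phi) * math.tan(delta)
    cos_omega = max(-1.0, min(1.0, cos_omega))  # clamp for polar day/night
    omega = math.degrees(math.acos(cos_omega))  # sunset hour angle, degrees
    return 12.0 + omega / 15.0  # 15 degrees of hour angle per hour

# Damascus (latitude ~33.5 N) near the June solstice (day ~172):
print(round(sunset_local_solar_time(33.5, 172), 2))  # roughly 19.1 h
```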
In the past few decades, some mosques have installed electronic clocks capable of calculating local prayer times and sounding reminders accordingly. Today a muezzin in a mosque can broadcast the call to prayer by consulting a table or a clock, without requiring the specialised skill of a muwaqqit. See also Dar al-Muwaqqit References Bibliography Mosques Astronomy in the medieval Islamic world Timekeeping Salah
Muwaqqit
[ "Physics", "Astronomy" ]
3,333
[ "Physical quantities", "Time", "History of astronomy", "Timekeeping", "Astronomy in the medieval Islamic world", "Spacetime" ]
62,815,091
https://en.wikipedia.org/wiki/Dibromodiethyl%20sulfone
Dibromodiethyl sulfone is a sulfone containing two 2-bromoethyl substituents. Production Dibromodiethyl sulfone is produced from dibromodiethyl sulfide by oxidation with chromic acid. References Organobromides Sulfones
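The oxidation described above can be summarized by the following scheme; this is a sketch inferred from the reagents named in the text, with chromic acid shown schematically as the oxygen donor:

```latex
\mathrm{(BrCH_2CH_2)_2S \;+\; 2\,[O] \;\xrightarrow{\;H_2CrO_4\;}\; (BrCH_2CH_2)_2SO_2}
```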
Dibromodiethyl sulfone
[ "Chemistry" ]
67
[ "Sulfones", "Functional groups" ]
62,816,644
https://en.wikipedia.org/wiki/Steam%20cracking
Steam cracking is a petrochemical process in which saturated hydrocarbons are broken down into smaller, often unsaturated, hydrocarbons. It is the principal industrial method for producing the lighter alkenes (commonly called olefins), including ethene (ethylene) and propene (propylene). Steam cracker units are facilities in which a feedstock such as naphtha, liquefied petroleum gas (LPG), ethane, propane or butane is thermally cracked through the use of steam in steam cracking furnaces to produce lighter hydrocarbons. The propane dehydrogenation process may be accomplished through different commercial technologies. The main differences between them concern the catalyst employed, the design of the reactor, and the strategies used to achieve higher conversion rates. Olefins are useful precursors to myriad products. Steam cracking is the core technology that supports the largest-scale chemical processes, i.e. the production of ethylene and propylene. Process description General In steam cracking, a gaseous or liquid hydrocarbon feed like naphtha, liquefied petroleum gas (LPG), or ethane is mixed with very hot steam and briefly heated in a furnace in the absence of oxygen. The reaction temperature is very high, at around 850 °C. This causes the hydrocarbons to break up into smaller molecules such as small olefins and hydrogen. The reaction occurs rapidly: the residence time is on the order of milliseconds. Flow rates approach the speed of sound. After the cracking temperature has been reached, the gas is quickly quenched in a transfer line heat exchanger or inside a "quenching header" using quench oil, in order to prevent further reactions such as decomposition into carbon and hydrogen. The products produced in the reaction depend on the composition of the feed, the hydrocarbon-to-steam ratio, and on the cracking temperature and furnace residence time. Light hydrocarbon feeds such as ethane, LPGs, or light naphtha give mainly lighter alkenes, including ethylene, propylene, and butadiene. Heavier hydrocarbon feeds (full-range and heavy naphthas as well as other refinery products) give some of these same products, but also products rich in aromatic hydrocarbons and hydrocarbons suitable for inclusion in gasoline or fuel oil. A higher cracking temperature (also referred to as severity) favors the production of ethene and benzene, whereas lower severity produces higher amounts of propene, C4 hydrocarbons, and liquid products. The process also results in the slow deposition of coke, a form of carbon, on the reactor walls. This degrades the efficiency of the reactor, so reaction conditions are designed to minimize this. Nonetheless, a steam cracking furnace can usually only run for a few months at a time between decokings. Decoking requires the furnace to be isolated from the process; a flow of steam or a steam/air mixture is then passed through the furnace coils. This converts the hard solid carbon layer to carbon monoxide and carbon dioxide. Once this reaction is complete, the furnace can be returned to service. Process details The areas of an ethylene plant are: steam cracking furnaces; primary and secondary heat recovery with quench; a dilution steam recycle system between the furnaces and the quench system; primary compression of the cracked gas (3 stages of compression); hydrogen sulfide and carbon dioxide removal (acid gas removal); secondary compression (1 or 2 stages); drying of the cracked gas; cryogenic treatment. All of the cold cracked gas stream goes to the demethanizer tower.
The overhead stream from the demethanizer tower consists of all the hydrogen and methane that was in the cracked gas stream. Cryogenically (−250 °F (−157 °C)) treating this overhead stream separates hydrogen from methane. Methane recovery is critical to the economical operation of an ethylene plant. The bottom stream from the demethanizer tower goes to the deethanizer tower. The overhead stream from the deethanizer tower consists of all the C2's that were in the cracked gas stream. The C2 stream contains acetylene, which is explosive above 200 kPa (29 psi). If the partial pressure of acetylene is expected to exceed this value, the C2 stream is partially hydrogenated. The C2's then proceed to a C2 splitter. The product ethylene is taken from the overhead of the tower, and the ethane coming from the bottom of the splitter is recycled to the furnaces to be cracked again. The bottom stream from the deethanizer tower goes to the depropanizer tower. The overhead stream from the depropanizer tower consists of all the C3's that were in the cracked gas stream. Before feeding the C3's to the C3 splitter, the stream is hydrogenated to convert the methylacetylene and propadiene (allene) mixture to propene. This stream is then sent to the C3 splitter. The overhead stream from the C3 splitter is product propylene, and the bottom stream is propane, which is sent back to the furnaces for cracking or used as fuel. The bottom stream from the depropanizer tower is fed to the debutanizer tower. The overhead stream from the debutanizer is all of the C4's that were in the cracked gas stream. The bottom stream from the debutanizer (light pyrolysis gasoline) consists of everything in the cracked gas stream that is C5 or heavier. Since ethylene production is energy intensive, much effort has been dedicated to recovering heat from the gas leaving the furnaces. Most of the energy recovered from the cracked gas is used to make high-pressure (1200 psig (8300 kPa)) steam. This steam is in turn used to drive the turbines for compressing cracked gas, the propylene refrigeration compressor, and the ethylene refrigeration compressor. An ethylene plant, once running, does not need to import steam to drive its steam turbines. A typical world-scale ethylene plant (about 1.5 billion pounds (680 KTA) of ethylene per year) uses a 45,000 horsepower (34,000 kW) cracked gas compressor, a 30,000 hp (22,000 kW) propylene compressor, and a 15,000 hp (11,000 kW) ethylene compressor. Despite the thorough energy integration within a steam cracking plant, the process produces a very large amount of carbon dioxide: per tonne of ethylene, 1–1.6 tonnes of carbon dioxide (depending on the feedstock) are produced, resulting in more than 300 million tonnes of carbon dioxide emitted annually into the atmosphere, of which 70–90% is directly attributable to the combustion of fossil fuel. In the last few decades, several advances in steam cracking technology have been implemented to increase its energy efficiency. These changes include oxy-fuel combustion, new burner technology, and 3D reactor geometries. However, as is common with mature technologies, these changes have only led to marginal gains in energy efficiency. To drastically curb the greenhouse gas emissions of steam cracking, electrification offers a solution, as renewable electricity can be directly transformed into heat by, for example, resistive and inductive heating.
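A quick arithmetic sketch of the emission figures quoted above, using only the article's own numbers (1–1.6 t CO2 per tonne of ethylene and a world-scale plant of about 680 kt ethylene per year); everything here is simple multiplication, not plant data:

```python
# Rough CO2 footprint of a world-scale steam cracker, using the figures
# quoted in the text: 1-1.6 t CO2 per t ethylene, ~680 kt ethylene/year.
capacity_t_per_year = 680_000                  # world-scale plant (from text)
co2_factor_low, co2_factor_high = 1.0, 1.6     # t CO2 / t ethylene (from text)

low = capacity_t_per_year * co2_factor_low
high = capacity_t_per_year * co2_factor_high
print(f"Annual CO2: {low / 1e6:.2f}-{high / 1e6:.2f} million tonnes")
# ~0.68-1.09 Mt per plant; 70-90% of it from fossil-fuel combustion
# for furnace heat, per the figures in the text.
```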
As a result, several petrochemical companies have joined forces, concluding joint agreements in which they combine R&D efforts to investigate how naphtha or gas steam crackers could be operated using renewable electricity instead of fossil fuel combustion. Steam cracking furnace licensors Several proprietary designs are available under a license that must be purchased from the design developer by any petroleum refining company desiring to construct and operate a steam cracking unit of a given design. These are the major steam cracking furnace designers and licensors: Lummus Technology Technip Energies Linde KBR See also Petroleum Petroleum refining processes Natural gas Cracking (chemistry) Notes and references Alkenes Petroleum production
Steam cracking
[ "Chemistry" ]
1,674
[ "Organic compounds", "Alkenes" ]
62,817,045
https://en.wikipedia.org/wiki/Zinc%20oxide%20nanostructure
Zinc oxide (ZnO) nanostructures are structures with at least one dimension on the nanometre scale, composed predominantly of zinc oxide. They may be combined with other composite substances to change the chemistry, structure or function of the nanostructures in order to be used in various technologies. Many different nanostructures can be synthesised from ZnO using relatively inexpensive and simple procedures. ZnO is a semiconductor material with a wide band gap energy of 3.3 eV and has the potential to be widely used on the nanoscale. ZnO nanostructures have found uses in environmental, technological and biomedical applications, including ultrafast optical functions, dye-sensitised solar cells, lithium-ion batteries, biosensors, nanolasers and supercapacitors. Research is ongoing to synthesise more productive and successful nanostructures from ZnO and other composites. ZnO nanostructure research is a rapidly growing field, with over 5,000 papers published during 2014–2019. Synthesis ZnO forms one of the most diverse ranges of nanostructures, and there is a great amount of research on different synthesis routes for various ZnO nanostructures. The most common method of synthesising ZnO structures is chemical vapor deposition (CVD), which is best used to form nanowires and comb- or tree-like structures. Chemical vapor deposition (CVD) In vapor deposition processes, zinc and oxygen are transported in gaseous form and react with each other, creating ZnO nanostructures. Other vapor molecules or solid and liquid catalysts can also be involved in the reaction, affecting the properties of the resultant nanostructure. To directly create ZnO nanostructures, one can decompose zinc oxide at high temperatures, where it splits into zinc and oxygen ions; when cooled, it forms various nanostructures, including complex structures such as nanobelts and nanorings. Alternatively, zinc powder can be transported through oxygen vapor, which reacts to form nanostructures. Other vapours such as nitrous oxide or carbon oxides can be used by themselves or in combination. These methods are known as vapor-solid (VS) processes due to the states of their reactants. VS processes can create a variety of ZnO nanostructures, but their morphology and properties are highly dependent on the reactants and reaction conditions, such as the temperature and vapor partial pressures. Vapor deposition processes can also use catalysts to assist the growth of nanostructures. These are known as vapor-liquid-solid (VLS) processes, and use a catalytic liquid alloy phase as an extra step in nanostructure synthesis to accelerate growth. The liquid alloy, which includes zinc, is attached to nucleated seeds made usually of gold or silica. The alloy absorbs the oxygen vapor and saturates, facilitating a chemical reaction between zinc and oxygen. The nanostructure develops as the ZnO solidifies and grows outwards from the gold seed. This reaction can be highly controlled to produce more complex nanostructures by modifying the size and arrangement of the gold seeds, and of the alloy and vapor constituents. Aqueous solution growth A large variety of ZnO nanostructures can also be synthesised by growth in an aqueous solution, which is desirable due to its simplicity and low processing temperature. A ZnO seed layer is used to begin uniform growth and to ensure the nanowires are oriented. A solution of catalysts and molecules containing zinc and oxygen is reacted, and nanostructures grow from the seed layer.
An example of such a reaction involves hydrolysing Zn(NO3)2 (zinc nitrate) and decomposing hexamethylenetetramine (HMT) to form ZnO. Altering the growth solution and its concentration, the temperature, and the structure of the seed layer can change the morphology of the synthesised nanostructures. Nanorods, aligned nanowire arrays, flower-like and disc-like nanowires, and nanobelt arrays, along with other nanostructures, can all be created in aqueous solutions by varying the growth solution. Electrodeposition Another method to synthesise ZnO nanostructures is electrodeposition, which uses electric current to facilitate chemical reactions and deposition on electrodes. Its low temperature and ability to create structures of precise thickness make it a cost-effective and environmentally friendly method. Structured nanocolumnar crystals, porous films, thin films and aligned wires have been synthesised in this way. The quality and size of these structures depend on the substrate, current density, deposition time and temperature. The band gap energy is also dependent on these parameters, since it depends not only on the material but also on its size, due to the nanoscale effect on the band structure. Defects and Doping ZnO has a rich defect and dopant chemistry that can significantly alter the properties and behaviour of the material. Doping ZnO nanostructures with other elements and molecules leads to a variety of material characteristics, because the addition or vacancy of atoms changes the energy levels in the band gap. Native defects due to oxygen and zinc vacancies or zinc interstitials create its n-type semiconductor properties, but the behaviour is not fully understood. Carriers created by doping have been found to exhibit a strong dominance over native defects. Nanostructures have small length scales, and this results in a large surface-to-volume ratio. Surface defects have hence been the primary focus of research into defects of ZnO nanostructures. Deep-level emissions also occur, affecting material characteristics. ZnO can occupy multiple types of lattices, but is often found in a hexagonal wurtzite structure. In this lattice all of the octahedral sites are empty, hence there is space for intrinsic defects, Zn interstitials, and also external dopants to occupy gaps in the lattice, even when the lattice is at the nanoscale. Zn interstitials occur when extra zinc atoms are located inside the ZnO crystal lattice. They occur naturally, but their concentration can be increased by using Zn-vapor-rich synthesis conditions. Oxygen vacancies are common defects in metal oxides, where an oxygen atom is left out of the crystal structure. Both oxygen vacancies and Zn interstitials increase the number of electron charge carriers, making ZnO an n-type semiconductor. Since these defects occur naturally as a by-product of the synthesis process, it is difficult to make p-type ZnO nanostructures. Defects and dopants are usually introduced during the synthesis of the ZnO nanostructure, either by controlling their formation or accidentally, through contamination during the growing process. Since it is difficult to control these processes, defects occur naturally. Dopants can diffuse into the nanostructure during synthesis. Alternatively, the nanostructures can be treated after synthesis, such as through plasma injection or exposure to gases. Unwanted dopants and defects can also be manipulated so that they are removed or passivated.
Crudely, a region of the nanostructure can be fully removed, such as by cutting off the surface layer of a nanowire. Oxygen vacancies can be filled using plasma treatment, where an oxygen-containing plasma inserts oxygen back into the lattice. At temperatures where the lattice is mobile, oxygen molecules and gaps can be moved using electric fields to change the nature of the material. Defects and dopants are used in most ZnO nanostructure applications. Indeed, the defects in ZnO enable a variety of semiconductor properties with different band gaps. By combining ZnO with dopants, a variety of electrical and material characteristics can be achieved. For example, the optical properties of ZnO can be changed through defects and dopants. Ferromagnetic properties can be introduced into ZnO nanostructures through doping with transition metal elements. This creates magnetic semiconductors, which are a focus of spintronics. Application ZnO nanostructures can be used for many different applications. Here are a few examples. Dye Sensitised Solar Cells Dye sensitised solar cells (DSSCs) are a type of thin film solar cell that uses a liquid dye to absorb sunlight. Currently, TiO2 (titanium dioxide) is mostly used as the photoanode material in DSSCs. However, ZnO is found to be a good candidate for the photoanode material, because its nanostructure synthesis is easy to control, it has higher electron transport properties, and it is possible to use organic material as the hole transporter, unlike when TiO2 is the photoanode material. Researchers have found that the structure of the ZnO nanostructure affects the solar cell performance. There are also disadvantages to using ZnO nanostructures, like a so-called voltage leakage, which needs more investigation. Batteries and supercapacitors Rechargeable lithium-ion batteries (LIBs) are currently the most common power source since they produce high power and have a high energy density. The use of metal oxides as anodes has largely improved upon the limitations of these batteries, and ZnO is seen as a particularly promising anode. This is due to its low toxicity and cost, and its high theoretical capacity (978 mAh g−1). ZnO experiences volume expansion during cycling, resulting in a loss of electrical connection and decreasing capacity. A solution may be to dope with different materials and to develop nanoscale structures, such as porous surfaces, that allow for volume changes during the chemical process. Alternatively, lithium storage components can be mixed in with the ZnO nanostructures to create a more stable capacity. Research has been successful in synthesising such composite ZnO nanostructures with carbon, graphite, and other metal oxides. Another commonly used energy storage device is the supercapacitor (SC). SCs are mostly used in electric vehicles and as backup power systems. They are known for being environmentally friendly and may replace currently used energy storage devices, due to their greater stability, power density and overall performance. Because of its remarkable energy density of 650 Ah g−1 and electrical conductivity of 230 S cm−1, ZnO is recognized as an electrode material with great potential. Nonetheless, it has poor electrical conductivity, and its small surface area makes for a restricted capacity. Just as for the batteries, multiple combinations of carbon structures, graphene, and metal oxides with ZnO nanostructures have improved the capacitance of these materials.
A ZnO-based composite has not only a better power density and energy density, but is also more cost-effective and eco-friendly. Biosensors and biomedical It has already been discovered that ZnO nanostructures are able to bind biological substances. Recent research shows that, because of this trait and because of its surface selectivity, ZnO is a good candidate for biosensors. It can naturally form anisotropic nanostructures that are used to deliver drugs. ZnO-based biosensors can also help in diagnosing the early stages of cancer. There is ongoing research to see if ZnO nanostructures can be used for bioimaging; so far this has only been tested on mice, with positive results. In addition, ZnO nanomaterials are already used in cosmetic products, like face creams and sun cream. It is, however, not yet clear what the effect of ZnO nanostructures is on human cells and the environment. Since used ZnO biosensors will eventually dissolve and release Zn ions, these may be absorbed by cells, and the local effect of this is not yet known. Nanomaterials in cosmetics will eventually be washed off and released into the environment. Due to these unknown risks, much more research is needed before ZnO can be safely applied in the biomedical field. References Zinc oxide Nanomaterials
Zinc oxide nanostructure
[ "Materials_science" ]
2,490
[ "Nanotechnology", "Nanomaterials" ]
62,817,424
https://en.wikipedia.org/wiki/Hemispherical%20electron%20energy%20analyzer
A hemispherical electron energy analyzer or hemispherical deflection analyzer is a type of electron energy spectrometer generally used for applications where high energy resolution is needed—different varieties of electron spectroscopy such as angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES) or in imaging applications such as photoemission electron microscopy (PEEM) and low-energy electron microscopy (LEEM). It consists of two concentric conductive hemispheres that serve as electrodes that bend the trajectories of the electrons entering a narrow slit at one end so that their final radii depend on their kinetic energy. The analyzer, therefore, provides a mapping from kinetic energies to positions on a detector. Function An ideal hemispherical analyzer consists of two concentric hemispherical electrodes (inner and outer hemispheres) of radii $R_1$ and $R_2$ held at proper voltages. In such a system, the electrons are linearly dispersed, depending on their kinetic energy, along the direction connecting the entrance and the exit slit, while the electrons with the same energy are first-order focused. When two voltages, $V_1$ and $V_2$, are applied to the inner and outer hemispheres, respectively, the electric potential in the region between the two electrodes follows from the Laplace equation: $V(r) = \frac{R_1 R_2 (V_1 - V_2)}{R_2 - R_1}\,\frac{1}{r} + \frac{V_2 R_2 - V_1 R_1}{R_2 - R_1}$. The electric field, pointing radially from the center of the hemispheres out, has the familiar planetary motion form $E(r) = \frac{R_1 R_2 (V_1 - V_2)}{R_2 - R_1}\,\frac{1}{r^2}$. The voltages are set in such a way that the electrons with kinetic energy equal to the so-called pass energy $E_\mathrm{p}$ follow a circular trajectory of radius $R_0 = \tfrac{1}{2}(R_1 + R_2)$. The centripetal force along the path is imposed by the electric field $-eE(r)$. With this in mind, $e E(R_0) = 2 E_\mathrm{p} / R_0$. The potential difference between the two hemispheres needs to be $V_1 - V_2 = \frac{E_\mathrm{p}}{e}\left(\frac{R_2}{R_1} - \frac{R_1}{R_2}\right)$. A single pointlike detector at radius $R_0$ on the other side of the hemispheres will register only the electrons of a single kinetic energy. The detection can, however, be parallelized because of nearly linear dependence of the final radii on the kinetic energy. In the past, several discrete electron detectors (channeltrons) were used, but now microchannel plates with phosphorescent screens and camera detection prevail. In general, these trajectories are described in polar coordinates $r(\varphi)$ for the plane of the great circle, for electrons impinging at an angle $\alpha$ with respect to the normal to the entrance, and for initial radii $r_0$ that account for the finite aperture and slit widths (typically 0.1 to 5 mm): $r(\varphi) = \left[\left(\tfrac{1}{r_0} - c\right)\cos\varphi - \tfrac{\tan\alpha}{r_0}\sin\varphi + c\right]^{-1}$, where $c = \frac{R_0}{r_0^{2}}\,\frac{E_\mathrm{p}}{E\cos^{2}\alpha}$ and $E$ is the electron's kinetic energy at the entrance radius $r_0$. As can be seen in the pictures of calculated electron trajectories, the finite slit width maps directly into energy detection channels (thus confusing the real energy spread with the beam width). The angular spread, while also worsening the energy resolution, shows some focusing as the equal negative and positive deviations map to the same final spot. When these deviations from the central trajectory are expressed in terms of the small parameters defined as $\varepsilon = (E_\mathrm{k} - E_\mathrm{p})/E_\mathrm{p}$ and $\sigma = (r_0 - R_0)/R_0$, and having in mind that $\alpha$ itself is small (of the order of 1°), the final radius of the electron's trajectory, $r(\pi)$, can be expressed as $r(\pi) \approx R_0\,(1 + 2\varepsilon - \sigma - 2\alpha^{2})$. If electrons of one fixed energy were entering the analyzer through a slit that is $w$ wide, they would be imaged on the other end of the analyzer as a spot $w$ wide. If their maximal angular spread at the entrance is $\alpha$, an additional width of $2R_0\alpha^{2}$ is acquired, and a single energy channel is smeared over $w + 2R_0\alpha^{2}$ at the detector side. But there, this additional width is interpreted as energy dispersion, which is, to the first order, $\Delta r = 2R_0\,\Delta E/E_\mathrm{p}$.
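The relations above can be tried out numerically; the sketch below plugs in illustrative values (the hemisphere radii and pass energy are assumptions, not the specification of any particular instrument) to get the hemisphere potential difference and the linear dispersion at the detector.

```python
# Numerical illustration of the hemispherical-analyzer relations above.
R1, R2 = 0.075, 0.125           # hemisphere radii in meters (illustrative)
R0 = (R1 + R2) / 2              # central radius: 0.1 m
E_PASS = 10.0                   # pass energy in eV (illustrative)

# Potential difference: V1 - V2 = (E_p/e) * (R2/R1 - R1/R2).
# With E_p in eV, the numerical value of E_p/e is simply E_PASS volts.
dV = E_PASS * (R2 / R1 - R1 / R2)
print(f"V1 - V2 = {dV:.2f} V")  # ~10.67 V

# Linear dispersion: dr = 2 * R0 * dE / E_p.
dE = 0.1  # eV
print(f"spot shift for 0.1 eV: {2 * R0 * dE / E_PASS * 1e3:.1f} mm")  # 2.0 mm
```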
It follows that the instrumental energy resolution, given as a function of the width of the slit, $w$, and the maximal incidence angle, $\alpha$, of the incoming photoelectrons (itself dependent on the width of the aperture and slit), is $\Delta E = E_\mathrm{p}\left(\frac{w}{2R_0} + \alpha^{2}\right)$. The analyzer resolution improves with increasing $R_0$. However, technical problems related to the size of the analyzer put a limit on its actual value, and most analyzers have it in the range of 100–200 mm. Lower pass energies also improve the resolution, but then the electron transmission probability is reduced, and the signal-to-noise ratio deteriorates accordingly. The electrostatic lenses in front of the analyzer have two main purposes: they collect and focus the incoming photoelectrons into the entrance slit of the analyzer, and they decelerate the electrons to the range of kinetic energies around $E_\mathrm{p}$, in order to increase the resolution. When acquiring spectra in swept (or scanning) mode, the voltages of the two hemispheres – and hence the pass energy – are held fixed; at the same time, the voltages applied to the electrostatic lenses are swept in such a way that each channel counts electrons with the selected kinetic energy for the selected amount of time. In order to reduce the acquisition time per spectrum, the so-called snapshot (or fixed) mode can be used. This mode exploits the relation between the kinetic energy of a photoelectron and its position inside the detector. If the detector energy range is wide enough, and if the photoemission signal collected from all the channels is sufficiently strong, the photoemission spectrum can be obtained in one single shot from the image of the detector. See also Mass spectrometry References Electron spectroscopy
Hemispherical electron energy analyzer
[ "Physics", "Chemistry" ]
1,079
[ "Electron spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
62,817,500
https://en.wikipedia.org/wiki/Leakage%20%28machine%20learning%29
In statistics and machine learning, leakage (also known as data leakage or target leakage) is the use of information in the model training process which would not be expected to be available at prediction time, causing the predictive scores (metrics) to overestimate the model's utility when run in a production environment. Leakage is often subtle and indirect, making it hard to detect and eliminate. Leakage can cause a statistician or modeler to select a suboptimal model, which could be outperformed by a leakage-free model. Leakage modes Leakage can occur in many steps in the machine learning process. The leakage causes can be sub-classified into two possible sources of leakage for a model: features and training examples. Feature leakage Feature or column-wise leakage is caused by the inclusion of columns which are one of the following: a duplicate label, a proxy for the label, or the label itself. These features, known as anachronisms, will not be available when the model is used for predictions, and result in leakage if included when the model is trained. For example, including a "MonthlySalary" column when predicting "YearlySalary", or "MinutesLate" when predicting "IsLate". Training example leakage Row-wise leakage is caused by improper sharing of information between rows of data. Types of row-wise leakage include: Premature featurization: leakage from featurization performed before the cross-validation/train/test split (preprocessing steps such as MinMax scaling or n-gram vocabularies must be fit on the training split only and then used to transform the test split). Duplicate rows between train/validation/test (e.g. oversampling a dataset to pad its size before splitting, different rotations/augmentations of a single image, bootstrap sampling before splitting, or duplicating rows to up-sample the minority class). Non-i.i.d. data. Time leakage (e.g. splitting a time-series dataset randomly, instead of placing the newer data in the test set using a train/test split or rolling-origin cross-validation). Group leakage—not splitting on a grouping column (e.g. Andrew Ng's group had 100k x-rays of 30k patients, meaning ~3 images per patient. The paper used random splitting instead of ensuring that all images of a patient were in the same split. Hence the model partially memorized the patients instead of learning to recognize pneumonia in chest x-rays.) A 2023 review found data leakage to be "a widespread failure mode in machine-learning (ML)-based science", having affected at least 294 academic publications across 17 disciplines, and causing a potential reproducibility crisis. Detection Data leakage in machine learning can be detected through various methods, focusing on performance analysis, feature examination, data auditing, and model behavior analysis. Performance-wise, unusually high accuracy or significant discrepancies between training and test results often indicate leakage. Inconsistent cross-validation outcomes may also signal issues. Feature examination involves scrutinizing feature importance rankings and ensuring temporal integrity in time series data. A thorough audit of the data pipeline is crucial, reviewing pre-processing steps, feature engineering, and data splitting processes. Detecting duplicate entries across dataset splits is also important. Analyzing model behavior can reveal leakage. Models relying heavily on counter-intuitive features or showing unexpected prediction patterns warrant investigation. Performance degradation over time when tested on new data may suggest earlier inflated metrics due to leakage.
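A minimal, self-contained sketch of the premature-featurization mode listed above, using scikit-learn (assumed to be available); the wrong pipeline fits the scaler on all rows before splitting, letting test-set statistics leak into training, while the right one fits it on the training split only.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

X = np.random.RandomState(0).normal(size=(1000, 5))

# Wrong: the scaler sees the test rows, so their min/max leak into
# the training data (premature featurization).
X_scaled = MinMaxScaler().fit_transform(X)
X_train_bad, X_test_bad = train_test_split(X_scaled, random_state=0)

# Right: split first, fit the scaler on the training split only, then
# apply the already-fitted transform to the test split.
X_train, X_test = train_test_split(X, random_state=0)
scaler = MinMaxScaler().fit(X_train)
X_train_ok, X_test_ok = scaler.transform(X_train), scaler.transform(X_test)
# Test values may now fall outside [0, 1] -- exactly as they would at
# prediction time on genuinely unseen data.
```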
Advanced techniques include backward feature elimination, where suspicious features are temporarily removed to observe performance changes. Using a separate hold-out dataset for final validation before deployment is advisable. See also AutoML Concept drift (where the structure of the system being studied evolves over time, invalidating the model) Overfitting Resampling (statistics) Supervised learning Training, validation, and test sets References Machine learning Statistical classification
Leakage (machine learning)
[ "Engineering" ]
817
[ "Artificial intelligence engineering", "Machine learning" ]
62,817,568
https://en.wikipedia.org/wiki/Demolition%20of%20Ile-Arugbo
The demolition of Ile-Arugbo followed the decision of the Committee on Review of Sales of Government Property of Kwara State, north-central Nigeria, released on 1 July 2019, to reclaim plots of land acquired by Dr. Olusola Saraki without proper documentation. Ile-Arugbo ("old people's home") is located directly opposite the Saraki family house in Ilorin, Kwara State. The building was constructed on plots of land held by Dr Saraki to cater for the aged people who visited him during his lifetime. In 1970, the state acquired the land for the construction of phase II of its secretariat, a project which was later abandoned. In 1980, the project was redesigned for the construction of a Civil Service Unit, State Secretariat, and their parking spaces. By 1982, only the State Clinic from the project design had been completed, while the remaining pieces of land were allocated to Asa Investment Company. The decision of the government was informed by the finding of the Committee on Review of Sales of Kwara State Government Property from 1999 that the said land had been acquired without proof of payment. The government announced its decision to reclaim the land on 27 December 2019, before the demolition exercise took place on 2 January 2020. The demolition led to mixed feelings between the faction loyal to the state government on one hand, and the Peoples Democratic Party and the Saraki dynasty in Ilorin, led by Senator Bukola Saraki, Senate President of the 8th Nigerian National Assembly, on the other. Ile-Arugbo (old people's home) Ile-Arugbo is a property owned by the late Olusola Saraki. The concept behind the structure was to serve as a platform for reaching out to aged people in Ilorin during the Second Republic, where food, money and health care services were provided for the concerned people in society. The said land was acquired by the state in 1970 for the construction of its Civil Service Unit, State Secretariat and parking spaces, which was later abandoned. In 1980, it was redesigned, but only the State Clinic was built, while the rest of the land was allocated in 1982 to Asa Investment Company, an entity owned by the late Dr. Saraki. Controversy and reactions The controversy behind the demolition of the said property started on 27 December, when the Committee on Review of Sales of Kwara State Government Property since 1999 announced that the property opposite the family house of the late Dr Olusola Saraki had been acquired without proof of payment. The committee announced its decision on 27 December 2019 and went ahead to carry it out at about 3 am (WAT) on 2 January. In July 2019, the committee had identified several properties of the state government, allocated without proper documentation, for reclamation. It further said the land in question was originally allocated for the construction of the state secretariat. Senator Gbemisola Saraki disagreed with the position of the state government and tagged the demolition exercise as a political vendetta against her family. According to Channels TV, Gbemisola said "Again, as a loyal and dedicated daughter of my father, Dr Abubakar Olusola Saraki, whom I hold in very high esteem, I did not want to express my opinion on the propriety of the Governor's recent political actions as it would be seen as biased because the late Waziri is my father. "However, given the turn of events and the violent nature of the Governor's position, it is only right for me to speak now."
"There might have been some elements within my party, APC, who wanted to change the OTOGe narrative of the 2019 elections to be about the Sarakis and not about what it was – the removal of a failing PDP Administration." “But clearly by some recent steps taken, especially with Thursday’s actions, Kwara State APC must be careful to not allow a few elements with their own agenda, other than governance, to turn their personal vendetta into the official position of APC in the State. They must not be allowed to hijack the narrative of what our party stands for". The People Democratic Party in the state also petitioned the National Human Rights Commission (NHRC) for the alleged use of live ammunition and tear-gas by the state government to harass the aged people who protested against the exercise. Litigation References Ilorin Saraki family Demolition
Demolition of Ile-Arugbo
[ "Engineering" ]
875
[ "Construction", "Demolition" ]
62,818,196
https://en.wikipedia.org/wiki/OpenAirPhilosophy
OpenAirPhilosophy is a project presenting a selection of the work in environmental philosophy of the Norwegian philosophers Arne Naess, Sigmund Kvaløy Setreng, and Peter Wessel Zapffe. The project promotes the inherent worth of living beings regardless of their instrumental utility to human needs, as well as looking at restructuring modern human societies in accordance with such ideas. The project's website holds biographies, selected works and interviews of the three philosophers. The name of the project comes from a practice in Norway called friluftsliv, which translates as "open-air life." The term evokes a sense of belonging to the land, of making friends with free nature. Arne Naess, the father of "deep ecology", was always searching for the existence of what he called "greatness other than human." Far from moralizing about how other people ought to live, he would invite them to "act beautifully," and to experience how natural it feels to act in ecologically responsible ways. When Naess was asked about his expectations for the future, he would sometimes answer, to the surprise of his interviewers, "I am a pessimist for the 21st century, but an optimist for the 22nd century." This response exemplifies the kind of against-the-grain thinking of the three Norwegian ecophilosophers whose work is presented at OpenAirPhilosophy: Peter Wessel Zapffe, Sigmund Kvaløy Setreng, and Arne Naess. All demonstrated a surprising ability both to identify and to face directly the vastness of the ecological crisis as it was starting to unfold in their times. Their analysis, however, did not stop at making a dire diagnosis; they also chose to develop and embrace a deeper and more long-term view in which we humans are not automatically assigned centre stage in the pageant of life. Despite writing forthrightly about the grave challenges facing the Earth, each retained a parallel sense of living life to the full, of enjoying the conviviality of being among friends and the fulfilment that comes from working for change, all fueled by experiences in nature. The aim of the project is to engage readers, provoke additional scholarship, and expand the community of activists around the globe who embrace ecocentrism, a worldview in which all life is acknowledged to have intrinsic value. Content editors for this project are Jan van Boeckel and Ceciel Verheij. Ceciel Verheij translated several Norwegian texts which are published for the first time in English on this website. Andrés Stubelt and Cara Nelson/Swift Trek Media provided web design and development. PDF design and typesetting are by Kevin Cross. Tom Butler of Tompkins Conservation served as overall project director. References External links Rewilding website, link to podcast episode with Jan Van Boeckel Environmental philosophy
OpenAirPhilosophy
[ "Environmental_science" ]
598
[ "Environmental philosophy", "Environmental social science" ]
62,819,282
https://en.wikipedia.org/wiki/Hilpda
Hypoxia inducible lipid droplet-associated (Hilpda, also known as C7orf68 and HIG-2) is a protein that in humans is encoded by the HILPDA gene. Discovery HILPDA was originally discovered in a screen to identify new genes that are activated by low oxygen pressure (hypoxia) in human cervical cancer cells. The protein consists of 63 amino acids in humans and 64 amino acids in mice. Expression HILPDA is produced by numerous cells and tissues, including cancer cells, immune cells, fat cells, and liver cells. Low oxygen pressure (hypoxia), fatty acids, and beta-adrenergic agonists stimulate HILPDA expression. Function Nearly all cells have the ability to store excess energy as fat in special structures in the cell called lipid droplets. The formation and breakdown of lipid droplets is controlled by various enzymes and lipid droplet-associated proteins. One of the lipid droplet-associated proteins is HILPDA. HILPDA acts as a regulatory signal that blocks the breakdown of fat stores in cells when the external fat supply is high or the availability of oxygen is low. In cells, HILPDA is located in the endoplasmic reticulum and around lipid droplets. Gain- and loss-of-function studies have shown that HILPDA promotes fat storage in cancer cells, macrophages and liver cells. This effect is at least partly achieved by inhibiting the enzyme adipose triglyceride lipase, thereby suppressing triglyceride breakdown. The binding of HILPDA to adipose triglyceride lipase occurs via the conserved N-terminal portion of HILPDA, which is similar to a region in the G0S2 protein. Clinical significance The deficiency of HILPDA in mice that are prone to develop atherosclerosis led to a reduction in atherosclerotic plaques, suggesting that HILPDA may be a potential therapeutic target for atherosclerosis. In addition, HILPDA may be targeted for the treatment of non-alcoholic fatty liver disease. References Proteins Genetics
Hilpda
[ "Chemistry" ]
441
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
62,819,770
https://en.wikipedia.org/wiki/Gift-exchange%20game
The gift-exchange game, also commonly known as the gift exchange dilemma, is a common economic game introduced by George Akerlof and Janet Yellen to model reciprocity in labor relations. The gift-exchange game simulates the execution of a labor-management relationship, an instance of the principal-agent problem in labor economics. The simplest form of the game involves two players – an employee and an employer. The employer first decides whether to award a higher salary to the employee. The employee then decides whether to reciprocate the salary increase with a higher level of effort (working harder). Like trust games, gift-exchange games are used to study reciprocity for human subject research in social psychology and economics. If the employer pays extra salary and the employee puts in extra effort, then both players are better off than otherwise. The relationship between an investor and an investee has been investigated as the same type of game. The gift-exchange game serves as a valuable lens through which to understand economic theory, as it demonstrates that self-interest maximization is not the sole determinant of economic decision-making. Rather, reciprocity is a fundamental factor that shapes individuals' behaviour in economic contexts. By simulating labor relations between an employer and employee, the game shows that when employers offer a higher salary, employees are more inclined to reciprocate with greater effort, leading to mutually beneficial outcomes. Gift-exchange games have been used to study economic and social phenomena such as labor contracts, market transactions, strikes and the decline of unionization. The gift-exchange theory also incorporates a social component: homogeneous agents who are employed at an equivalent wage level will exert greater effort, which in turn results in higher market efficiency and higher rent than when agents receive different wages. The first examination of this component is referred to as the fair uniform-wage hypothesis, where experiments establish the significant efficiency premium of uniform wages. This premium, however, is not the consequence of a stronger level of reciprocity by the agents; rather, principals' endorsement of uniform wages shows how placing boundaries on freedom of contract can lead to efficiency-enhancing results. Equilibrium analysis In game theory, equilibrium analysis is used to determine and examine strategic decisions between the players in a game. A Nash equilibrium is a situation in which no player has an incentive to change their strategy, given the strategies chosen by the other players. It is used to evaluate such situations and the decisions made by the players that affect each player's outcome. The extra effort in gift-exchange games is modelled as a negative payoff if not compensated by salary. The IKEA effect of one's own extra work is not considered in the payoff structure of this game; the model therefore best fits labor conditions that are less intrinsically meaningful for the employees. As in trust games, the game-theoretic solution for rational players predicts that employees' effort will be minimal in one-shot and finitely repeated interactions. The difference lies in the sequentiality of the gift-exchange game: the employer pays a high or low salary first, and then the employee makes the decision.
Anticipating the employee's choice, employers therefore have no incentive to pay high salaries. If the employer pays a higher salary, it is irrational for the employee to put in extra effort, since effort will reduce his or her payoff. It is likewise irrational for the employee to put in extra effort while receiving a lower salary. Therefore, the minimum salary and the minimum effort constitute the equilibrium of this game. As this game is a perfect information game, in which all players are aware of previous actions, backward induction can also be used to determine the equilibrium. Both players are rational and will act to maximise their utility. In the game tree, the last decision is made by the employee. Under both the high- and low-salary options, the employee benefits more from choosing low effort: on the high-salary path, high effort yields the employee a utility of 2, whereas low effort yields 3; on the low-salary path, high effort yields the employee a utility of 0, compared to 1 for low effort. Thus the employee will choose low effort regardless of the salary choice made by the employer. Moving one decision back, it is the employer's choice between high and low salary. Knowing the employee will choose low effort, the employer will also choose the option that maximises their own utility: with high salary and low effort their utility is 0, whereas with low salary and low effort it is 1. The employer will therefore choose low salary, leading to the equilibrium of low salary and low effort. Contrast with Prisoner's dilemma The payoff matrix of the gift-exchange game has the same structure as the payoff matrix of the prisoner's dilemma. However, there are key differences between the two games. The gift-exchange game is sequential and is based on social norms of reciprocity, whereby participants are incentivized to act in ways that other players deem fair; the goal is to maximize the amount of money each player receives while following the expectations of the group. In the prisoner's dilemma, by contrast, two participants are faced with either cooperation or betrayal without knowing what the other player will decide, and the payoff of each possible outcome is determined by the choices of both players. The prisoner's dilemma shows how rational decisions can lead to sub-optimal outcomes for both parties, even when cooperation is in the best interest of both. In the gift-exchange game, the choices of the players are interdependent, and the social norms of reciprocity incentivize participants to act in ways that benefit the group as a whole. Follow-up experiments with the gift-exchange game have found that only a few people in the real world actually choose the minimum wage and minimum effort of the Nash equilibrium.
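The backward-induction argument above can be made concrete in a short script. The following Python sketch (an illustration added here, not from the article) encodes the employee utilities from the game tree described above; the employer payoffs on the two high-effort branches are not stated in the text, so prisoner's-dilemma-style values of 2 (high salary) and 3 (low salary) are assumed for illustration.

```python
# Payoffs as (employer, employee). Employee payoffs follow the game tree in
# the text; the two employer payoffs marked "assumed" are illustrative
# prisoner's-dilemma-style values, not taken from the article.
PAYOFFS = {
    ("high", "high"): (2, 2),  # employer payoff assumed
    ("high", "low"):  (0, 3),
    ("low",  "high"): (3, 0),  # employer payoff assumed
    ("low",  "low"):  (1, 1),
}

def employee_best_reply(salary):
    # The employee moves last and picks the effort maximising their own payoff.
    return max(("high", "low"), key=lambda effort: PAYOFFS[(salary, effort)][1])

def backward_induction():
    # The employer anticipates the employee's best reply on each branch.
    salary = max(("high", "low"),
                 key=lambda s: PAYOFFS[(s, employee_best_reply(s))][0])
    return salary, employee_best_reply(salary)

print(backward_induction())  # -> ('low', 'low'): the minimum-salary, minimum-effort equilibrium
```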
Experimental Methods and Results A positive relationship between salary and effort has been observed in a large number of gift-exchange experiments performed in laboratory settings. This behaviour clearly deviates from the equilibrium prediction. A study of 84 undergraduate students at the University of Amsterdam was conducted to observe the differences predicted to occur when the gift-exchange game played between one employer and one employee was compared to a game played between one employer and four employees. The results indicated that the number of employees did not have a significant impact on the chosen level of effort, with mean effort levels increasing with wage at similar rates in both treatments. Another experiment with students from Tilburg University showed that only 33% of games ended up in the Nash equilibrium of minimal salary and minimal effort. Data from another experiment on 123 students from the University of Nottingham showed a rate of 69% for a high salary being paid by the employer in advance. Fehr, Kirchsteiger and Riedl (1993, QJE) designed a market in which "employers" and "employees" do not meet: the two sides are placed in different rooms across many experimental rounds, are told that the counterparties of each transaction are different, and remain "anonymous" to each other from beginning to end. In this way, the influence of expectations about the "long-term future" is excluded, and the "employees'" choice of effort level is entirely their own. According to the traditional economic view, employees should be willing to accept any wage greater than 0 and provide the minimum level of effort after receiving the salary. However, the experimental results show that employers consistently offer wages much higher than the minimum level, while employees almost always provide effort much higher than the minimum level. This suggests that, even in the absence of supervision and punishment mechanisms, wage levels in the labor market are often higher than the market-clearing price, reflecting "fair" and "goodwill" motives in exchange for the labor provider's initiative and loyalty. It is also an example of how social norms and reciprocity can affect human behavior even in the absence of regulation. The experiment of Charness (2000, JEBO) explored what happens if the benefit of high wages is not given by the employer but results from chance or a third party. The results were as follows: (1) if wages are generated randomly, employees usually give extra effort to show "fairness" or compensation, considering that it is, after all, the employer's money; (2) if, at almost the same income level, employees are told that the wage level was determined by the experimenter, they feel less responsible for the "loss" to the employer, and so relatively reduce their effort. Gneezy and List (2006, Econometrica) were two of the first economists to investigate whether similar results found in the previous laboratory experiments could be replicated in a field setting. They did so by conducting two separate experiments, each involving a number of volunteer participants who were required to perform a specific task for six hours in total. Each participant would receive an hourly wage that had been previously advertised to them.
The participants in each experiment were divided into two groups, one of which was informed that the actual wage they would receive was higher than the advertised wage. The two experiments yielded similar results: in both, the group receiving the increased wage performed the task more efficiently only in the initial stage; towards the end of the six hours, the groups yielded similar outputs and were performing at the same productivity level. While these results are inconsistent with previous laboratory findings, they endorse one feature of "reciprocal behaviour": as time goes on, preferential treatment is taken for granted, thus reducing the willingness of employees to supply labour. Other observations about whether players follow the expected Nash equilibrium or are more likely to deviate by providing the gift and extra effort concern how often the game was repeated and whether the players were familiar with each other. Players who were strangers or had not played the game as many times before were far more likely to follow the Nash equilibrium, failing to achieve the higher possible pay-offs. However, if the game was repeated more times or the players were 'partners' (worked together and knew each other better), they had far greater success at maintaining higher pay-offs in the game. This study, published by the Tinbergen Institute, also concluded that 'simpler' or smaller gifts were far more likely to be reciprocated and maintained appropriately than larger gifts, which could be more appealing to exploit. Van den Akker, van Assen et al. found that subjects in the gift-exchange game exhibit selfish behaviour in specific labour markets and other principal-agent environments. Chaudhuri and Sbai found, in a study of sex differences in trust and reciprocity in repeated gift-exchange games, that there were no significant gender differences in trust, and that women performed better in reciprocity. However, when the experimental context was placed in a specific labor market, female reciprocity performance decreased. Ch'ng Kean Siang's experiment explores the role of relative information and reciprocity in the gift-exchange game. He found that lack of enforcement was not the only reason for employees' reluctance to work hard, and so the concept of the 'relative reciprocal' was creatively introduced. There is a problem of information asymmetry between employees and employers: when employees have access to market information about average wages, they decide whether to work hard by comparing their current wage with the average wage, deriving a relative reciprocal from their employer. But this increases the pressure on employers and the competition between them, as wages are determined by the market offer. In particular, in organizations where control is separated from ownership, the relationship between wage increases and effort cannot be observed through the relative reciprocal. Moreover, Tagiew and Ignatov (2014) conducted an experiment at the University of Nottingham using one-shot games, in which participants did not take part more than once. The study included participants of both genders, with an average age of 20. Each game involved three players, an originator and two followers, who had the option to award or not award each other gifts. The originator received an initial amount of £8.3, and each follower received £11.1.
The originator could offer a fixed amount of £1.6 to a follower, while a follower could give £1, £2, or £3, with a corresponding reduction in their own payoff. The experiment aimed to explore the effects of gift-giving on the players' payoffs and the dynamics of reciprocity. The study found that the frequency of non-gift giving was lower than an egoistic payoff-maximization assumption would predict, with an average non-gift frequency of 69% in the studied one-shot games; however, it was still over 50% in almost all cases. The study also notes that it is difficult to create models of human behavior without access to the hidden variables that determine the players' choices. This game can help in understanding strikes, coordination, and dismissal in uniform-wage settings. In a gift-exchange game in a multi-employee environment with collective action mechanisms, the employer offers a uniform wage. The breakdown of trust and reciprocity between employers and employees due to free-riding at work can lead to employee strikes or the intervention of unions and other labor organizations to coordinate. As the employer pays a flat wage, such collective action may prompt the employer to resort to dismissal mechanisms, i.e. firing the free-riders. If unions and other labor organizations step in to coordinate but fail to reduce free-riding, employees may face increased employment risks. These effects may help explain the decline in unionization in developed economies. For example, in the US, companies adopt enterprise resource planning systems and applications to simplify the adjustment of wage differentiation. A 2023 study showed that in the gift-exchange game of labor relations, employees' costless and non-binding voice leads firms to reduce the actual workload in agreed contracts rather than increase wages. Critique Dirk Engelmann and Andreas Ortmann's study, 'The Robustness of Laboratory Gift Exchange: A Reconsideration', took a subject pool of students from economics and business courses at the University of Berlin and the Institute for Empirical Research in Economics at the University of Zurich, with participants randomly assigned to an employer or employee category. The managers would offer the workers a wage and a required effort level, and the workers would choose to accept or decline the offer. The acceptance rate for the groups according to effort and wage was measured. The study suggested that there was little evidence for positive reciprocity and that laboratory gift exchange is highly sensitive to the parametrization of the model and the way the model is implemented. Engelmann also found that workers responded to unfavorable wages with negative reciprocity. Engelmann suggested that gift exchange is highly sensitive to changes in the parameters of the game (parametrization), the framing effect and anonymity, which has important consequences for empirical implementation. Gary Charness, Guillaume R. Frechette, and John H. Kagel's experiment, 'How Robust is Laboratory Gift Exchange?', studied the effect of gift exchange in the US. While they found positive reciprocity attributable to the gift-exchange effect, they also found that the effect is sensitive to seemingly innocuous changes. Groups consisting of an employer and an employee were formed, whereby the employer chose the wage for their employee. Employees were paid for their work at a self-chosen effort level, bearing the corresponding cost of effort for that level.
One group was presented with a payoff table detailing employee and employer payoffs, while the other was ignorant of the potential payoffs. Charness found that when the experiment included a payoff table demarcating the relationship between wages, effort and payoff, gift exchange was sharply reduced. Charness suggests that this reduction could be due to the framing effect: the framing effect would reduce positive reciprocation by removing the positive feeling caused by an unexpected bonus and replacing it with a mutual understanding of the firm's expectations. Reciprocity & Social Norms Reciprocity is a fundamental concept within game theory, capturing the idea that agents are more likely to cooperate if they believe that the cooperation will be reciprocated – i.e., "you do something for me, I'll do something for you", a mutual gain. Within the gift-exchange game, numerous large-scale studies have identified that the higher the gift, the higher the quality levels or effort put in. An example of reciprocity due to social norms was a field study conducted by the University of Bonn to investigate the gift-exchange theory in a natural setting. Out of roughly 10,000 solicitation letters sent to potential donors, one third contained no gift to accompany the call for donations, one third a small gift and one third a large gift, with random assignment. The data confirmed that potential donors were much more likely to donate when they received a gift: a donation was 75% more likely from a recipient of a large gift. These results are backed by a number of similar studies from the University of Amsterdam, Tilburg University and the University of Nottingham, with the data showing a contrast to what is considered the Nash equilibrium. The results demonstrate how common it is for individuals to experience a feeling of duty or to reciprocate actions with commensurate worth or significance. In other words, if the employer expects the employee to put in a higher effort when offered a higher salary, it may be in the employee's best interest to put in a greater effort; and if the employee puts in a higher effort, it may result in an increase in wage down the track. This indicates that the gift-exchange game may have multiple equilibria, dependent on expectations, beliefs and whether social preferences are two-sided. Reciprocity within the gift-exchange game shows how social interactions play a large role in what are considered economic decisions and is the factor that balances and maintains the stability between give and take. By recognising the importance of reciprocity, employees and employers can foster advantageous relationships that contribute to more harmonious and productive relations. Work field usage The gift-exchange model is used to explain workers' effort and the wages provided by firms in the real world, especially involuntary unemployment. George A. Akerlof described labor contracts as "partial gift exchange". Unlike what is depicted in the simple model above, in real life employees may exceed the minimum work required and firms may pay more than the market-clearing wage. According to Akerlof's model, this is because the worker's effort depends not only on the effort itself, the wage rate if employed, and the unemployment benefit if unemployed, but also on the norm for effort. Thus, to affect these norms, firms may pay more.
Akerlof's model has become the topic of several experiments aimed at understanding employee motivation and behaviour, as well as the effect of fairness from employers. The results of these experiments have been mixed and are highly dependent on the experimental setting. Several laboratory studies, such as Fehr, Kirchsteiger, and Riedl (1993), have presented strong evidence of a relationship between increased fixed wages and positive reciprocity from employees in the form of increased effort. However, these results have not been reflected in field studies, which have largely found little or no evidence of the relationship. Kube, Marechal, and Puppe (2012) found that in the field setting there was no significant increase in effort after increased fixed wages; however, they did find that gifts of equivalent value in forms other than fixed wages significantly increased effort. One particular non-monetary gift believed to incentivise employees is the attention they receive from their employer. In a model developed by Robert Dur, altruistic managers who signal a level of attention towards their employees can achieve the same level of output as an egoistic manager (who provides no attention to their employees) paying a higher wage to retain their employees. Eventually, the altruistic manager's marginal cost of attention exceeds the point at which increasing the employee's wage becomes the better alternative. This outcome depends on the employee exhibiting "neutral" or "warm" feelings towards their employer, such that their expected utility increases with the attention they receive. Rather than contradicting traditional gift-exchange theory, this model supplements it by demonstrating that, in real-world settings, managers have socioemotional tools at their disposal that may be preferred to a monetary gift. Many laboratory experiments support the theory of using gift exchange as an incentive mechanism, but field evidence on its effectiveness has been conflicting. A study conducted by Esteves-Sorenson and Macera (2013) aimed to remove any theoretical factors that could be "dampening gift exchange in the field". The study identified a few factors that could be affecting gift-exchange effectiveness, such as "habituation to the gift, fatigue, and small gift size". Accounting for these factors and implementing a field experiment to remove them, the study found no evidence to support gift exchange in the workplace. The reasons for this discrepancy between laboratory and field settings are the topic of much subsequent research, but they are not entirely clear. While conflicting results do degrade the reliability of applications of the gift-exchange game, it still provides valuable insight into employee and employer behavior. Usage in other fields The gift-exchange game is not only used in the workplace but can also be applied in other areas. For example, in the field of charitable giving, when a charity makes a gift to a potential donor as part of a donation solicitation, more generous gifts are associated with more frequent donations, resulting in more donations to the charity. Some user-interaction systems use the gift-exchange game as a gamification model. Usage with modified game conditions The experiment of Franke et al. is based on a modified gift-exchange game in which workers can participate in wage setting.
The results of this experiment show that when workers have the right to take part in wage decisions, they show a positive incentive to work harder. However, if firms want employees to exert high effort, they need to make sufficiently high offers or delegate substantial decisions to employees. In practical applications, different mechanisms of co-determination might lead to very different incentive structures and performance outcomes. Most laboratory and field studies of the gift-exchange game focus on a bilateral relationship (one employee and one employer). An experiment conducted by Maximiano, Sloof and Sonnemans (2013) created a more complex laboratory environment to allow further extrapolation of the results to real-world relationships found in the labour market. The paper explored multi-level hierarchies and focused on the complex structure in which "ownership and control are separated". The classical gift-exchange game was modified to mimic a trilateral relationship where the firm is controlled by a manager but owned by a shareholder. This experiment found that employees rewarded higher wages with higher effort regardless of whether the manager shared in the firm's profits. See also Trust game Prisoner's dilemma Ultimatum game Efficiency wage References Non-cooperative games Game theory game classes
Gift-exchange game
[ "Mathematics" ]
4,943
[ "Game theory game classes", "Game theory", "Non-cooperative games" ]
62,819,863
https://en.wikipedia.org/wiki/Apparatgeist
Apparatgeist theory is defined as “the spirit of the machine that influences both the designs of the technology as well as the initial and subsequent significance accorded to them by users, non-users and anti-users.” The theory was developed by James E. Katz and Mark Aakhus to explore the social, cultural and material aspects of mobile and personal communication technologies (PCTs). “Regardless of culture, when people interact with PCTs, they tend to standardise infrastructure and gravitate towards consistent tastes and universal features,” Katz states. The two scholars proposed this term to bring the primary focus upon the human use and consequences of PCTs. In an effort to explain the patterns associated with PCTs, Katz and Aakhus advanced the concept of Apparatgeist by identifying several cross-cultural trends in the adoption, use and conceptualization of mobile phones. These trends have emerged in many social contexts, including participation in social networks, changes in traditional communication habits to accommodate mobile communication, competent mobile communication and unanticipated behaviors arising from mobile communication. Background This theory examines one's relationship with one's technology, as well as the relationship that the two have with society. The term refers to “the common set of strategies or principles of reasoning about technology evident in the identifiable, consistent and generalized patterns of technological advancement throughout history.” Apparatgeist is a neologism in the field of new media and communication, and in some ways it can “lead to New Age kinds of spiritualism represented in attempts to suggest a new kind of community technospirit which emerges within a particular medium." Katz and Aakhus argue that individuals tend to "standardize infrastructure and gravitate towards consistent tastes and universal features." Users thus engage in mobile telephone use in largely similar ways. The essence of the Apparatgeist theory is that technology use is socially constructed and not technologically deterministic. These norms are established as a shared understanding of how one's technology should be used. This shared understanding is derived from the social construction theory and is commonly referred to as "social constructionism." Theoretical elements Pertaining to how PCTs bring about the Apparatgeist: PCTs have a Geist which can be likened to the expansion of freedom. PCTs have their own logic that informs the judgements people make about the utility or value of the technologies in their environment. PCTs inform the predictions that scientists and technology producers might make about personal technologies. PCTs have a socio-logic that results from ‘communities of people “thinking and acting together over time.”’ In making the Apparatgeist possible through PCTs, ‘the compelling image of perpetual contact is the image of pure communication ... which is an idealization of communication committed to the prospect of sharing one’s mind with another, like the talk of angels that occurs without the constraints of the body.' Application of Apparatgeist theory in other research Yuan focuses on the effects of Chinese culture on mobile communication usage behavior and patterns. Through a snowball sampling technique, the research gathered in-depth interviews with Chinese people living in metropolitan areas. The results showed a clear distinction between the way Chinese people communicated on mobile phones and the way people in the West did.
Contrary to western cultures’ emphasis on keeping a small and tight-knit circle of contacts in their mobile phones, the findings of this research showed that Chinese mobile users have a large and open network of mobile contacts. “Contextualized mobility” was found to be more significant than the theoretical constructions of the Apparatgeist and perpetual contact. Kneidinger-Müller extends the Apparatgeist theory to understand the social factors underlying parallel communication habits in the usage of mobile phones. The research surveyed 339 smartphone users in Germany and found that social factors were just as important as usage and technological factors in understanding communication practices. Tojib et al. apply both the Apparatgeist and domestication theory as theoretical groundwork to show how the symbolic use of smartphones brings about a positive effect on user attachment to mobile phones. Subsequently, this leads to the experiential value of using value-added mobile services, defined as “any services beyond voice calls and short messaging services offered by mobile telecommunication service providers.” Apparatgeist helps to support the idea that the add-on activities via value-added mobile services bring “purposeful engagement” and ultimately bring experiential value and value-expressiveness to users. Axelsson examines culture and life stages as factors to see which is the primary determiner of mobile-phone usage and attitudinal patterns. Drawing data from a Swedish national survey of 18–24 year olds, Axelsson finds that “young adults (compared with older people) seem to be in perpetual contact with family, friends and colleagues.” This finding shows that life stage is a greater determining factor than culture in the use of and attitudes towards mobile phones. The Apparatgeist theory supports the hypothesis in this research: “the mobile phone is used in rather similar ways, regardless of cultural context.” Vanden Abeele explores the variations in lifestyles within mobile youth culture by constructing a user typology of Flemish adolescents and measuring the gratifications received from the use of mobile phones. Apparatgeist is used as a theoretical basis to emphasize shared commonalities in the developmental challenges that adolescents face, particularly the similarities in mobile phone gratifications regardless of cultural context. The research concludes that a complex relationship is visible among the “structural and social-psychological backgrounds of youths, developmental tasks, and the functionalities of mobile media technologies as they are recognized in a particular time and context.” Tan et al. conducted a multi-method study to understand whether email and SMS – two types of PCTs – were more or less suitable for different environments. Consistent with the Apparatgeist and social construction theory, this research shows that PCTs carry a common set of meanings about their nature and purpose that are general across social settings. Consumers overwhelmingly “perceived SMS as more intimate and also more intrusive than email”. Nonetheless, the study also validates context-cultural differences, such as different preferences for the dissemination of commercial messages between Chinese and Swiss consumers. Campbell drew on the Apparatgeist theory to explore the extent to which the use of mobile telephones by individuals in different cultures shows similarities or variations.
By sampling college students from Hawaii, Japan, Sweden, Taiwan and the US, Campbell concluded that although there are apparent varieties in communication practices across cultures, there is also an inherent universality in the way people interact through mobile phones that stems from a basic human need. This idea of communication as a universal aspect of humanity comes from the basis of Apparatgeist. Shuter et al. explore the impact of cultural values and observe the contextual norms of mobile phone activity among Americans and Danes. Apparatgeist and SCOT (the social construction of technology) theory are used as an initial starting point for this research. Extending beyond the contextual and innate human values factors found in the Apparatgeist and SCOT theories, the findings of this research identify a universal logic and indigenous cultural factors as a foundation for the study of cross-national attitudes and usage of mobile phones. Contesting views Mizuko Ito, an anthropologist at the University of California, Irvine, believes technologies both construct and are constructed by historical, social, and cultural contexts. Rather than conducting a comparative and global survey of mobile phone use, Ito looked at the multifaceted and sustained engagement with mobile phones in one national context – Japan. Through this approach, Ito finds significance in the social and cultural diversity of mobile phone use across different cultures. Scott Campbell of the University of Michigan, author of several papers on mobile-phone usage, expects some persistence of cultural variations. Campbell's study “suggest[s] that cultural values may influence the norms of mobile communicators, with individualists and collectivists." Campbell believes that people behave differently in public settings, stemming from different cultural and social norms. He extends this idea by establishing terms such as horizontal and vertical individualists to describe different mobile phone norms. References Communication theory Social constructionism New media Neologisms
Apparatgeist
[ "Technology" ]
1,687
[ "Multimedia", "New media" ]
62,820,005
https://en.wikipedia.org/wiki/Ethics%20of%20philanthropy
Philanthropy poses a number of ethical issues: How donors should choose beneficiaries and ensure that their donations are effective. Acceptable marketing practices for grant seekers. A recipient may violate the donor's intent in spirit or in law. A donor's activities may be considered incompatible with those of the institution's mission. Specifically, a recipient may be perceived as complicit with or oblivious to a donor's unethical practices, thus tainting its own good name, especially when an institution grants naming rights. A donor may receive a quid pro quo for all or part of a donation. Giving effectively Choosing suitable recipients of philanthropy, and ensuring that the aid is effective, is a difficult ethical problem, first addressed by Aristotle. Marketing practices Ethical questions include: how to compensate fund-raising agents; how to compete with other causes; how much deception, if any, is acceptable; whether some images ("pornography of poverty") should not be used, even if they are effective. Donor intent Many gifts are accompanied by a statement of intent, which may be a formal, legal agreement, or a less formal understanding. To what extent the recipient must respect that intent is an ethical and legal issue, especially as circumstances and social norms change. Incompatible missions When a person's activities are incompatible with an institution's mission, associating with them or accepting donations from them may be considered inappropriate or dishonest marketing (cf. greenwashing), a form of conflict of interest. For example, children's museums generally refuse sponsorship from manufacturers of junk food. Protests against David Koch's support for climate change denial led to his resignation from the board of the American Museum of Natural History. Tainted donors Funds derived from, and donors engaged in, unethical, immoral, or criminal activities pose a problem for the recipient, as accepting a donation or continuing to benefit from it may be interpreted as benefiting from or ignoring the disreputable activity. Such donations have been characterized as "toxic philanthropy". This is an issue for the donor's behavior both before and after the donation. Institutions may react by returning the money, removing the acknowledgement, or by keeping the money. The Sackler family has been a major donor to many cultural and educational institutions, and has had many buildings and programs named for it. Their association with the opioid epidemic has caused many activists to urge the recipients to remove the Sackler name from their buildings and programs, and some institutions have announced that they will remove the name or accept no further donations from the family. Harvard has said that it will not remove the name from the Arthur M. Sackler Museum because "Dr. Arthur Sackler died before Oxycontin was developed. His family sold their interest in the company before the drug was developed.... he had absolutely no relationship to it". Similarly, the sex offender Jeffrey Epstein was a major donor to many university programs, even after his conviction for sex crimes. After it emerged that the director of the MIT Media Lab, Joi Ito, was aware of Epstein's misdeeds and took steps to solicit donations while hiding their source, Ito resigned. MIT and Harvard have both initiated reviews of donations by Epstein. 
The MIT review concluded that: Since MIT had no policy or processes for handling controversial donors in place at the time, the decision to accept Epstein's post-conviction donations cannot be judged to be a policy violation. But it is clear that the decision was the result of collective and significant errors in judgment that resulted in serious damage to the MIT community. Quid pro quo Donors are generally acknowledged publicly for their donations, which benefits their reputation. It has been argued that this should be treated as a business transaction. Many philosophers have argued that donations should be anonymous for this reason. Receiving something of value in return for a donation is also considered both legally and ethically a quid pro quo. Further reading Peter Singer, "Dirty money and tainted philanthropy", Project Syndicate, February 6, 2019 Ernie Smith, "Amid Epstein Scandal, Fundraising Group puts focus on Ethics in Philanthropy", Associations Now September 19, 2019 Jim Rendon, "How to Protect Your Nonprofit From Controversial Donors", The Chronicle of Philanthropy, September 19, 2019 See also Charity fraud Charity scandals List of Philanthropists Philanthropy in the United States Effective altruism References Donation Applied ethics Philanthropy
Ethics of philanthropy
[ "Biology" ]
890
[ "Behavior", "Altruism", "Philanthropy", "Human behavior", "Applied ethics" ]
62,821,224
https://en.wikipedia.org/wiki/Phenotype%20modification
Phenotype modification is the process of experimentally altering an organism's phenotype to investigate the impact of phenotype on fitness. Phenotype modification has been used to assess the impact of the mechanical presence of parasites on fish host behaviour. References Genetics
Phenotype modification
[ "Biology" ]
54
[ "Genetics" ]
62,821,393
https://en.wikipedia.org/wiki/2-Bromoethyl%20ether
2-Bromoethyl ether (or Bis(2-bromoethyl) ether) is an organobromine compound that is also an ether. It is used in the manufacture of pharmaceuticals and crown ethers. References Ethers Organobromides
2-Bromoethyl ether
[ "Chemistry" ]
55
[ "Organic compounds", "Functional groups", "Ethers" ]
62,822,080
https://en.wikipedia.org/wiki/Dibutoxy%20ethyl%20phthalate
Dibutoxy ethyl phthalate is an organic compound and phthalate ester, bearing 2-butoxyethanol groups. It is used as a plasticizer in polyvinyl chloride, polyvinyl acetate and cellulose acetate. Like most phthalates it is non-volatile and remains liquid over a wide range of temperatures. Although its water solubility is low, it is one of the most water-soluble of the common phthalates. References Phthalate esters Ethers
Dibutoxy ethyl phthalate
[ "Chemistry" ]
111
[ "Organic compounds", "Functional groups", "Ethers" ]
62,822,188
https://en.wikipedia.org/wiki/Dibutoxymethane
Dibutoxymethane is an oligoether (containing more than one -O- grouping), or acetal, with two butyl groups and a methylene grouping. It is used in cosmetics, as a cleansing agent, or as a solvent. It reduces the formation of soot and nitrogen oxides when added to diesel fuel. It can be classed as a green solvent, as it contains no halogens and is not very toxic. References Formals
Dibutoxymethane
[ "Chemistry" ]
93
[ "Functional groups", "Formals" ]
62,823,244
https://en.wikipedia.org/wiki/Improved%20Layer%202%20Protocol
IL2P (Improved Layer 2 Protocol) is a data link layer protocol originally derived from layer 2 of the X.25 protocol suite and designed for use by amateur radio operators. It is used exclusively on amateur packet radio networks. IL2P establishes link layer connections, transferring data encapsulated in frames between nodes, and detecting errors introduced by the communications channel. The Improved Layer 2 Protocol (IL2P) was created by Nino Carrillo, KK4HEJ, based on AX.25 version 2.0, and implements Reed-Solomon forward error correction for greater accuracy and throughput than either AX.25 or FX.25, specifically in order to achieve greater stability at link speeds higher than 1200 baud. IL2P can be used with a variety of modulation methods including AFSK and GFSK. The direwolf software TNC contains the first open source implementation of the protocol. IL2P Specification The IL2P draft specification v0.6 was published via the Terrestrial Amateur Radio Packet Network (TARPN) on March 16, 2024. Version 0.6 added the trailing CRC description, removed the weak signal extensions, corrected the description of block scrambling, removed the reference to the baseline FEC level, added BPSK and QPSK symbol maps, updated the example encoded packets, and made minor edits for readability. Implementations IL2P was first implemented in the closed source and proprietary ninoTNC to address lossy network links caused by low signal-to-noise ratio or weak signal strength. The specification itself outlines several design goals, including: forward error correction; eliminating bit-stuffing; streamlining the AX.25 header format; improved packet detection in the absence of data carrier detect (DCD) and for open-squelch receive; producing a bitstream suitable for modulation on various physical layers; avoiding bit-error-amplifying methods (differential encoding and free-running LFSRs); and increasing efficiency and simplicity over FX.25 forward error correction. See also NCpacket group References Packet radio Link protocols
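As a rough illustration of the Reed-Solomon forward error correction on which IL2P relies, the sketch below uses the third-party Python package reedsolo; the 16 parity bytes and the sample frame are arbitrary choices for the example and do not reproduce IL2P's actual block sizes or code parameters.

```python
from reedsolo import RSCodec  # pip install reedsolo

rsc = RSCodec(16)  # 16 parity bytes per block: corrects up to 8 corrupted bytes

frame = b"example payload: packet radio test"  # placeholder frame, not real IL2P framing
encoded = rsc.encode(frame)

corrupted = bytearray(encoded)
corrupted[0] ^= 0xFF  # simulate channel errors by flipping bits in two bytes
corrupted[5] ^= 0x55

# reedsolo >= 1.0: decode returns (payload, payload+ecc, errata positions)
decoded = rsc.decode(bytes(corrupted))[0]
assert bytes(decoded) == frame  # the corrupted frame is fully recovered
```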
Improved Layer 2 Protocol
[ "Technology" ]
410
[ "Wireless networking", "Packet radio" ]
62,823,306
https://en.wikipedia.org/wiki/Automorphism%20of%20a%20Lie%20algebra
In abstract algebra, an automorphism of a Lie algebra $\mathfrak{g}$ is an isomorphism from $\mathfrak{g}$ to itself, that is, a bijective linear map preserving the Lie bracket. The set of automorphisms of $\mathfrak{g}$ is denoted $\operatorname{Aut}(\mathfrak{g})$, the automorphism group of $\mathfrak{g}$. Inner and outer automorphisms The subgroup of $\operatorname{Aut}(\mathfrak{g})$ generated using the adjoint action $e^{\operatorname{ad}(x)}$, $x \in \mathfrak{g}$, is called the inner automorphism group of $\mathfrak{g}$. The group is denoted $\operatorname{Inn}(\mathfrak{g})$. These form a normal subgroup in the group of automorphisms, and the quotient $\operatorname{Aut}(\mathfrak{g})/\operatorname{Inn}(\mathfrak{g})$ is known as the outer automorphism group. Diagram automorphisms It is known that the outer automorphism group for a simple Lie algebra is isomorphic to the group of diagram automorphisms for the corresponding Dynkin diagram in the classification of Lie algebras. The only algebras with non-trivial outer automorphism group are therefore $A_n$ ($n \geq 2$), $D_n$ and $E_6$. There are ways to concretely realize these automorphisms in the matrix representations of these groups. For $A_n \cong \mathfrak{sl}(n+1)$, the automorphism can be realized as the negative transpose $X \mapsto -X^{\mathsf T}$. For $D_n \cong \mathfrak{so}(2n)$, the automorphism is obtained by conjugating by an orthogonal matrix in $O(2n)$ with determinant $-1$. Derivations A derivation on a Lie algebra $\mathfrak{g}$ is a linear map $\delta \colon \mathfrak{g} \to \mathfrak{g}$ satisfying the Leibniz rule $\delta([x,y]) = [\delta(x), y] + [x, \delta(y)]$. The set of derivations on a Lie algebra $\mathfrak{g}$ is denoted $\operatorname{der}(\mathfrak{g})$, and is a subalgebra of the endomorphisms on $\mathfrak{g}$, that is $\operatorname{der}(\mathfrak{g}) \subset \operatorname{End}(\mathfrak{g})$. They inherit a Lie algebra structure from the Lie algebra structure on the endomorphism algebra, and closure of the bracket follows from the Leibniz rule. Due to the Jacobi identity, it can be shown that the image of the adjoint representation $\operatorname{ad} \colon \mathfrak{g} \to \operatorname{End}(\mathfrak{g})$ lies in $\operatorname{der}(\mathfrak{g})$. Through the Lie group–Lie algebra correspondence, the Lie group of automorphisms $\operatorname{Aut}(\mathfrak{g})$ corresponds to the Lie algebra of derivations $\operatorname{der}(\mathfrak{g})$. For $\mathfrak{g}$ finite-dimensional and semisimple, all derivations are inner. Examples For each $g$ in a Lie group $G$, let $\operatorname{Ad}_g$ denote the differential at the identity of the conjugation by $g$. Then $\operatorname{Ad}_g$ is an automorphism of $\mathfrak{g}$, the adjoint action by $g$. Theorems The Borel–Morozov theorem states that every solvable subalgebra of a complex semisimple Lie algebra $\mathfrak{g}$ can be mapped to a subalgebra of a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ by an inner automorphism of $\mathfrak{g}$. In particular, it says that $\mathfrak{h} \oplus \bigoplus_{\alpha > 0} \mathfrak{g}_\alpha$, where $\mathfrak{g}_\alpha$ are root spaces, is a maximal solvable subalgebra (that is, a Borel subalgebra). References E. Cartan, Le principe de dualité et la théorie des groupes simples et semi-simples. Bull. Sc. math. 49, 1925, pp. 361–374. Morphisms Lie algebras
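Two of the concrete facts above – that the negative transpose preserves the bracket, and that $\operatorname{ad}(x)$ is a derivation – can be checked numerically. The following Python sketch (an illustration added here, not part of the article) verifies both identities for random 3×3 matrices, where the bracket is the matrix commutator.

```python
import numpy as np

def bracket(a, b):
    # Matrix commutator: the Lie bracket on a matrix Lie algebra
    return a @ b - b @ a

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

# The negative transpose X -> -X^T preserves the bracket (an automorphism).
phi = lambda m: -m.T
assert np.allclose(phi(bracket(x, y)), bracket(phi(x), phi(y)))

# ad(x) = [x, -] satisfies the Leibniz rule, i.e. it is a derivation;
# this identity is a rearrangement of the Jacobi identity.
ad_x = lambda m: bracket(x, m)
assert np.allclose(ad_x(bracket(y, z)),
                   bracket(ad_x(y), z) + bracket(y, ad_x(z)))
```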
Automorphism of a Lie algebra
[ "Mathematics" ]
537
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Category theory", "Mathematical relations", "Morphisms" ]
62,824,320
https://en.wikipedia.org/wiki/Research%20in%20Number%20Theory
Research in Number Theory is a peer-reviewed mathematics journal covering number theory and arithmetic geometry. The editors-in-chief are Jennifer Balakrishnan (Boston University), Florian Luca (University of Witwatersrand), Ken Ono (University of Virginia), and Andrew Sutherland (Massachusetts Institute of Technology). It was established in 2015 as a full open access journal, but is now a hybrid open access journal, published by Springer Science+Business Media. Abstracting and indexing The journal is abstracted and indexed in EBSCO databases, Emerging Sources Citation Index, MathSciNet, Scopus, and Zentralblatt MATH. References External links English-language journals Hybrid open access journals Number theory journals Quarterly journals Springer Science+Business Media academic journals Algebraic geometry journals
Research in Number Theory
[ "Mathematics" ]
158
[ "Number theory", "Number theory journals" ]
62,824,376
https://en.wikipedia.org/wiki/Persistent%20Chat
Persistent Chat is a messaging concept for group chat software that consists of standing, topic-based chatrooms with an emphasis on real-time messaging, preserving conversation history over time so that it is visible to both current and future participants. This form of messaging was adopted by many organizations in regulated industries to ensure compliance, and was popular with the major financial centers around the world. History The Persistent Chat feature was first introduced in Microsoft Lync 2013 as a group chat offering that allows teams to create topic-focused discussions. In the past, however, it was labeled Group Chat. MindAlign, a group chat product by Parlano, was created in 2000; Parlano was later acquired by Microsoft in 2007. MindAlign provides an IRC-like chatroom experience in which topical chatrooms can be looked back through. Since then the Group Chat feature has surfaced in Microsoft's Office Communications Server 2007 R2, which later became Lync 2010, before the term Persistent Chat was finally introduced in Microsoft Lync 2013 (now known as Skype for Business). Use cases According to Microsoft, persistent chat rooms should be considered for the following use cases: “Coordinate events, create ask-the-expert and Q&A forums, brainstorm, create a bulletin-board environment for evolving topics, collect feedback from colleagues and test new features, and share information among employees across different working hours and locations.” Before the introduction of the Persistent Chat feature in Microsoft's Unified Communications products, it was used by the financial services sector to “keep client banks up-to-the-minute on changes” in the foreign exchange market. Persistent Chat today Most chat applications' conversation history persists today and is usually not referred to as ‘Persistent Chat’ but rather as ‘Chat History’. From consumer applications, like WhatsApp, to enterprise applications, like Slack or Microsoft Teams, each of these applications features chat history. As such, persisting chat history has become somewhat of a norm in messaging applications. Chat that persists versus Persistent Chat Although the term ‘Persistent Chat’ is not commonly used in most applications, Persistent Chat differs from applications that merely support conversation or chat history with regard to the original use case the technology was built for. Along with conversation history that persists over time, historically MindAlign (Persistent Chat) was and is still used as an always-on communication channel "for conducting ongoing business-critical conversations" that are often transactional, specifically in the financial services sector. References Business chat software
Persistent Chat
[ "Technology" ]
499
[ "Instant messaging", "Business chat software" ]
62,826,558
https://en.wikipedia.org/wiki/Protein%20structure%20reconstruction
Protein structure reconstruction refers to constructing an atomic-resolution model of a protein structure from an incomplete coarse-grained representation such as, for example, a protein contact map, the positions of the alpha carbon atoms only, or the backbone chain atoms only. There are many computational tools for protein structure reconstruction, usually focused on specific reconstruction tasks, which include: backbone reconstruction from alpha carbons, side-chain reconstruction from backbone chain atoms, hydrogen atom reconstruction from heavy atom positions, and recovery of protein structure from contact maps. Software Backbone reconstruction Pulchra BBQ PD2 Side chain reconstruction Pulchra SCWRL References Proteins
Protein structure reconstruction
[ "Chemistry" ]
118
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
62,827,955
https://en.wikipedia.org/wiki/V752%20Centauri
V752 Centauri (HD 101799) is a multiple star system and variable star in the constellation of Centaurus. An eclipsing binary, its apparent magnitude has a maximum of 9.10, dimming to 9.66 during primary eclipse and 9.61 during secondary eclipse. Its variability was discovered by Howard Bond in 1970. From parallax measurements by the Gaia spacecraft, the system is located at a distance of from Earth. V752 Centauri is a contact binary of the W Ursae Majoris type, composed of two F-type stars with a combined spectral type of F7/G0(V). Individually, the components have been classified as F8 + F5, and F8 + F7.5. With effective temperatures of 5,955 and 6,221 K, the system is classified as a W Ursae Majoris variable of subtype W, in which the secondary star is hotter than the primary; for this reason, the primary eclipses are caused by the occultation of the secondary star. The system has an orbital period of only 0.3702 days and a separation of 2.59 solar radii. The orbit is inclined by 82° in relation to the plane of the sky. The combination of photometric and spectroscopic data has allowed the direct determination of the parameters of the stars. The primary component has a mass of 1.31 times the solar mass, a radius of 1.30 times the solar radius, and a luminosity double that of the Sun. The secondary has only 0.39 times the solar mass, 0.77 times the solar radius, and 0.75 times the solar luminosity. Since the stars are in contact, there is considerable mass transfer from the secondary to the primary. It is estimated that the secondary star was initially the more massive star, with 1.76 times the solar mass, while the primary had an initial mass of 0.84 times the solar mass. The system's age is estimated at 3.8 billion years. All contact binary stars are expected to eventually merge into a single, fast-rotating star. The system's spectrum shows the spectral lines of a third star, which seems to be a K-type main sequence star. This third star is itself a spectroscopic binary with a period of 5.147 days, with a small companion that is probably an M-type red dwarf. The V752 Centauri system is thus composed of four stars, with two binary pairs that orbit each other. Most contact binary stars have one or more distant companions, and were possibly formed by angular momentum loss due to gravitational interactions with these companion stars. The light curve analysis of V752 Centauri reveals that between 1970 and 2000, the orbital period of the eclipsing binary remained approximately constant, indicating there was no significant mass transfer. Around the year 2000, the period abruptly increased, possibly accompanied by a slightly dimmer primary eclipse. Since then, the period has been increasing at a rate of 0.044 seconds per year, which is caused by mass transfer from the less massive star to the more massive one at a rate of 2.52 per year. This period change and the beginning of the mass transfer phase were possibly caused by interactions with the companion binary star. References W Ursae Majoris variables Centaurus F-type main-sequence stars K-type main-sequence stars 4 Spectroscopic binaries Durchmusterung objects 101799 057129 Centauri, V752
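The quoted radii, temperatures and luminosities can be cross-checked with the Stefan-Boltzmann relation L/L☉ = (R/R☉)² (T/T☉)⁴. The Python snippet below does this; the solar effective temperature of 5772 K (the IAU nominal value) is an assumed input, not given in the article.

```python
# Stefan-Boltzmann consistency check: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
T_SUN = 5772.0  # K, IAU nominal solar effective temperature (assumed here)

def luminosity_solar(radius_solar, t_eff):
    return radius_solar**2 * (t_eff / T_SUN)**4

print(round(luminosity_solar(1.30, 5955), 2))  # ~1.91, vs. "double that of the Sun"
print(round(luminosity_solar(0.77, 6221), 2))  # ~0.80, vs. the quoted 0.75 solar luminosities
```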
V752 Centauri
[ "Astronomy" ]
729
[ "Centaurus", "Constellations" ]
62,828,055
https://en.wikipedia.org/wiki/List%20of%20space%20technology%20awards
This list of space technology awards is an index to articles about notable awards related to space technology. This includes awards for development of spacecraft, satellites, space stations, and support infrastructure, equipment, and procedures. The list shows the country of the sponsoring organization, but awards are not necessarily limited to people or organizations based in that country. Awards See also Lists of awards Lists of science and technology awards List of aviation awards List of astronomy awards List of challenge awards References Space-related
List of space technology awards
[ "Technology" ]
96
[ "Science and technology awards", "Space-related awards", "Lists of science and technology awards" ]
62,828,326
https://en.wikipedia.org/wiki/Blue%E2%80%93green%20deployment
In software engineering, blue–green deployment is a method of installing changes to a web, app, or database server by swapping alternating production and staging servers. Overview In blue–green deployments, two servers are maintained: a "blue" server and a "green" server. At any given time, only one server is handling requests (e.g., being pointed to by the DNS). For example, public requests may be routed to the blue server, making it the production server and the green server the staging server, which can only be accessed on a private network. Changes are installed on the non-live server, which is then tested through the private network to verify the changes work as expected. Once verified, the non-live server is swapped with the live server, effectively making the deployed changes live. Using this method of software deployment offers the ability to quickly roll back to a previous state if anything goes wrong. This rollback is achieved by simply routing traffic back to the previous live server, which still does not have the deployed changes. An additional benefit to the blue–green method of deployment is the reduced downtime for the server. Because requests are routed instantly from one server to the other, there is ideally no period where requests will be unfulfilled. The blue–green deployment technique is often contrasted with the canary release deployment technique and it has similarities with A/B testing. History Dan North and Jez Humble encountered differences between their test environments and the production environment while running Oracle WebLogic Server for a client sometime around 2005. To ensure safe deployment, they introduced a method where the new application version was deployed alongside the live system. This approach allowed for thorough testing and easy rollback in case of issues. The team initially considered naming these environments A and B but decided against it to avoid the perception that one was primary and the other secondary. They instead chose color-based names like blue, green, orange, and yellow, eventually using only blue and green since "having two was sufficient". This naming convention was adopted while working on the original Continuous delivery book published in 2010 and became a common term in the industry afterwards. Benefits and challenges Blue–green deployment is widely recognized for its ability to reduce downtime during application updates and minimize the risk of introducing defects into production environments. By maintaining two separate environments—blue (the current live environment) and green (the environment with the updated version)—traffic can easily be switched between the two, ensuring that updates are rolled out without disrupting users. This method enables quick rollback in case of deployment failure, thus improving overall system resilience and user experience. While blue–green deployment reduces risks during updates, it also requires additional resources since two environments need to be maintained simultaneously. The cost of running duplicate infrastructure, even temporarily, can be prohibitive for smaller organizations. Furthermore, complex database migrations may pose challenges, as the system must ensure that both the blue and green environments have consistent data. Solutions to these issues often involve using database migration tools that allow for backward compatibility between environments. 
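In practice, the switch amounts to repointing a router, DNS record, or service selector from the blue environment to the green one. As a minimal sketch of this step, assuming a Kubernetes-style Service (all names here are hypothetical illustrations; concrete platform mechanisms are covered under Implementation below):

  # service.yaml: a Service that routes traffic to whichever Deployment
  # carries the matching "slot" label
  apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    selector:
      app: my-app
      slot: blue        # change to "green" to switch production traffic
    ports:
      - port: 80
        targetPort: 8080

With two Deployments labeled slot: blue and slot: green, re-applying this manifest with the selector changed to green (e.g., kubectl apply -f service.yaml) repoints the Service at the green pods; changing it back rolls traffic over to blue again.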
Implementation
There are several approaches to implementing blue–green deployments, each offering varying levels of automation and ease of use depending on the platform and tools available.

AWS CodeDeploy
AWS CodeDeploy facilitates blue–green deployments by automating the entire process across services such as Amazon EC2 and AWS Lambda. The service shifts traffic between the old (blue) environment and the new (green) environment, minimizing downtime and ensuring a smooth transition. AWS CodeDeploy also allows the use of lifecycle event hooks, enabling developers to run tests and verification steps before routing traffic to the green environment.

Sample CodeDeploy configuration:

  version: 0.0
  os: linux
  files:
    - source: /
      destination: /var/www/html
  hooks:
    AfterInstall:
      - location: scripts/start_server.sh
        timeout: 300
        runas: root

Deployment command:

  aws deploy create-deployment --application-name MyApp --deployment-group-name MyDeploymentGroup --s3-location bucket=my-bucket,key=my-app.zip,bundleType=zip

Kubernetes
Kubernetes supports blue–green deployments through its native service capabilities. Using multiple deployments and services, Kubernetes allows operators to manage traffic routing between blue and green environments with minimal risk of service interruptions. Tools like ArgoCD or Spinnaker further enhance automation by integrating deployment pipelines directly with Kubernetes clusters.

Google Cloud Deployment Manager
Google Cloud offers blue–green deployment capabilities through Deployment Manager. By defining resources in a declarative format, Deployment Manager allows users to create, update, and delete resources as part of a blue–green deployment process. Like AWS CodeDeploy, it minimizes downtime by shifting traffic from the old to the new environment after performing necessary tests.

Setting up the environment: install the gcloud CLI or use Google Cloud Shell, then set your default project:

  gcloud config set project <YOUR_PROJECT_ID>

Cloning the sample repository:

  gcloud source repos clone copy-of-gcp-mig-simple
  cd copy-of-gcp-mig-simple

To modify the configuration, navigate to the configuration file (e.g., infra/main.tfvars) and update the environment from blue to green:

  sed -i'' -e 's/blue/green/g' infra/main.tfvars

Adding, committing, and pushing your changes to trigger the deployment:

  git add .
  git commit -m "Promote green"
  git push

Example of how the main.tfvars file might look:

  # main.tfvars
  project_id = "<YOUR_PROJECT_ID>"
  region = "us-central1"
  zone = "us-central1-a"

  # Load balancer settings
  blue_instance_group = "blue-instance-group"
  green_instance_group = "green-instance-group"

  # Health check settings
  health_check = {
    name = "http-health-check"
    request_path = "/"
    check_interval_sec = 10
    timeout_sec = 5
    healthy_threshold = 2
    unhealthy_threshold = 2
  }

Azure Container Apps
Azure Container Apps provides blue–green deployment capabilities by using container app revisions, traffic weights, and revision labels. In this deployment model, two identical environments—referred to as "blue" and "green"—are used. The blue environment hosts the current stable version of the application, while the green environment holds the new version. Once the green environment is fully tested, production traffic is routed to it, and the blue environment is deprecated until the next deployment cycle. To implement blue–green deployment, you create revisions of the container apps and assign traffic weights.
The blue revision is assigned 100% of the traffic initially, while the green revision is deployed with no production traffic. After successful testing of the green revision, the traffic is switched over smoothly without downtime. If any issues arise in the green environment, a rollback is easily executed, routing traffic back to the blue revision.

Create a container app with a new revision:

  az containerapp create --name $APP_NAME \
    --environment $APP_ENVIRONMENT_NAME \
    --resource-group $RESOURCE_GROUP \
    --image mcr.microsoft.com/k8se/samples/test-app:$BLUE_COMMIT_ID \
    --revision-suffix $BLUE_COMMIT_ID \
    --ingress external \
    --target-port 80 \
    --revisions-mode multiple

Deploy a new green revision:

  az containerapp update --name $APP_NAME \
    --resource-group $RESOURCE_GROUP \
    --image mcr.microsoft.com/k8se/samples/test-app:$GREEN_COMMIT_ID \
    --revision-suffix $GREEN_COMMIT_ID \
    --set-env-vars REVISION_COMMIT_ID=$GREEN_COMMIT_ID

Switch 100% of production traffic to the green revision:

  az containerapp ingress traffic set \
    --name $APP_NAME \
    --resource-group $RESOURCE_GROUP \
    --label-weight blue=0 green=100

References Software distribution System administration Software release
Blue–green deployment
[ "Technology" ]
1,736
[ "Information systems", "System administration" ]
54,285,532
https://en.wikipedia.org/wiki/Ibn%20al-Samh
Abū al‐Qāsim Aṣbagh ibn Muḥammad ibn al‐Samḥ al‐Gharnāṭī al-Mahri () (born 979, Córdoba; died 1035, Granada), also known as Ibn al‐Samḥ, was an Arab mathematician and astronomer from Al-Andalus. He worked at the school founded by Al-Majriti in Córdoba, until political unrest forced him to move to Granada, where he was employed by Ḥabbūs ibn Māksan. He is known for treatises on the construction and use of the astrolabe, as well as the first known work on the planetary equatorium. Furthermore, in mathematics he is remembered for a commentary on Euclid and for contributions to early algebra, among other works. He is one of several writers referred to in Latin texts as "Abulcasim." The exoplanet Samh, also known as Upsilon Andromedae c, is named in his honor as part of the IAU's NameExoWorlds project. References 979 births 1035 deaths 10th-century Arab people 11th-century Arab people 11th-century people from al-Andalus Astronomers from al-Andalus 11th-century astronomers Mathematicians from al-Andalus Inventors of the medieval Islamic world
Ibn al-Samh
[ "Astronomy" ]
271
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
54,286,399
https://en.wikipedia.org/wiki/Curie%27s%20principle
Curie's principle, or Curie's symmetry principle, is a maxim about cause and effect formulated by Pierre Curie in 1894: when certain causes produce certain effects, the elements of symmetry of the causes must be found in the produced effects. The idea was based on the ideas of Franz Ernst Neumann and Bernhard Minnigerode. Thus, it is sometimes known as the Neumann–Minnigerode–Curie principle. References Group theory Concepts in physics Symmetry
Curie's principle
[ "Physics", "Mathematics" ]
76
[ "Group theory", "Fields of abstract algebra", "nan", "Geometry", "Symmetry" ]
54,286,982
https://en.wikipedia.org/wiki/Aspergillus%20pisci
Aspergillus pisci is a species of fungus in the genus Aspergillus. References pisci Fungi described in 2014 Fungus species
Aspergillus pisci
[ "Biology" ]
32
[ "Fungi", "Fungus species" ]
54,287,671
https://en.wikipedia.org/wiki/Plasmon%20coupling
Plasmon coupling is a phenomenon that occurs when two or more plasmonic particles approach each other to a distance below approximately one diameter's length. Upon the occurrence of plasmon coupling, the resonances of the individual particles start to hybridize, and the peak wavelength of their resonance spectrum will shift (either blueshift or redshift), depending on how the surface charge density distributes over the coupled particles. At a single particle's resonance wavelength, the surface charge densities of nearby particles can be either out of phase or in phase, causing repulsion or attraction and thus leading to an increase (blueshift) or decrease (redshift) of the hybridized mode energy. The magnitude of the shift, which can serve as a measure of plasmon coupling, depends on the interparticle gap as well as on the particle geometry and the plasmonic resonances supported by the individual particles. A larger redshift is usually associated with a smaller interparticle gap and a larger cluster size. Plasmon coupling can also cause the electric field in the interparticle gap to be boosted by several orders of magnitude, far exceeding the field enhancement for a single plasmonic nanoparticle. Many sensing applications such as surface enhanced Raman spectroscopy (SERS) utilize the plasmon coupling between nanoparticles to achieve ultralow detection limits. Plasmon ruler Plasmon ruler refers to a dimer of two identical plasmonic nanospheres linked together through a polymer, typically DNA or RNA. Based on the universal scaling law between the spectral shift and the interparticle separation, nanometer-scale distances can be monitored through the color shifts of the dimer resonance peak. Plasmon rulers are typically used to monitor distance fluctuations below the diffraction limit, between tens of nanometers and a few nanometers. Plasmon coupling microscopy Plasmon coupling microscopy is a ratiometric widefield imaging approach that allows monitoring of multiple plasmon rulers with high temporal resolution. The entire field of view is imaged simultaneously on two wavelength channels, which correspond to the red and blue flanks of the plasmon ruler resonance. The spectral information of an individual plasmon ruler is expressed in the intensity distribution on the two monitored channels, quantified as R = (I1 − I2)/(I1 + I2). Each R value corresponds to a certain nanometer-scale distance, which can be calculated using computer simulation or generated from experiments.
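As a simple numerical illustration of this ratio (the channel intensities here are hypothetical): with I1 = 120 counts and I2 = 80 counts,

\( R = \frac{I_1 - I_2}{I_1 + I_2} = \frac{120 - 80}{120 + 80} = 0.20 \)

A spectral shift of the ruler resonance changes the balance between the two channels and hence the value of R, which is then mapped to an interparticle distance via simulation or experimental calibration.

References Plasmonics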
Plasmon coupling
[ "Physics", "Chemistry", "Materials_science" ]
517
[ "Plasmonics", "Surface science", "Condensed matter physics", "Nanotechnology", "Solid state engineering" ]
54,287,754
https://en.wikipedia.org/wiki/Mott%E2%80%93Schottky%20plot
In semiconductor electrochemistry, a Mott–Schottky plot describes the reciprocal of the square of capacitance versus the potential difference between bulk semiconductor and bulk electrolyte. In many theories, and in many experimental measurements, the plot is linear. The use of Mott–Schottky plots to determine system properties (such as flatband potential, doping density or Helmholtz capacitance) is termed Mott–Schottky analysis. Consider the semiconductor/electrolyte junction shown in Figure 1. Under an applied bias voltage \(V\) the size of the depletion layer is

  \( w = \sqrt{\frac{2\varepsilon (V_{bi} - V)}{q N}} \)  (1)

Here \(\varepsilon\) is the permittivity, \(q\) is the elementary charge, \(N\) is the doping density, and \(V_{bi}\) is the built-in potential. The depletion region contains positive charge compensated by ionic negative charge at the semiconductor surface (in the liquid electrolyte side). Charge separation forms a dielectric capacitor at the interface of the metal/semiconductor contact. We calculate the capacitance for an electrode area \(A\) as

  \( C = \frac{\varepsilon A}{w} \)  (2)

an equation describing the capacitance of a capacitor constructed of two parallel plates, both of area \(A\), separated by a distance \(w\). Replacing \(w\) as obtained from equation (1), the result for the capacitance per unit area is

  \( \frac{C}{A} = \sqrt{\frac{q \varepsilon N}{2 (V_{bi} - V)}} \)  (3)

Squaring and inverting equation (3), we obtain the result

  \( \frac{1}{C^{2}} = \frac{2 (V_{bi} - V)}{A^{2} q \varepsilon N} \)  (4)

Therefore, a representation of the reciprocal square capacitance, \(1/C^{2}\), is a linear function of the voltage, which constitutes the Mott–Schottky plot as shown in Fig. 1c. The measurement of the Mott–Schottky plot brings us two important pieces of information. The slope gives the doping (semiconductor) density (provided that the dielectric constant is known). The intercept with the x axis provides the built-in potential, or the flatband potential (as here the surface barrier has been flattened), and allows establishing the semiconductor conduction band level with respect to the reference of potential. In a liquid junction the reference of potential is normally a standard reference electrode. In solid junctions, we can take as a reference the metal Fermi level, if the work function is known, which provides a full energy diagram in the physical scale. The Mott–Schottky plot is sensitive to the electrode surface in contact with solution, see Figure 2. A more accurate analysis considering the statistics of electrons provides the following result for the size of the depletion region

  \( w = \sqrt{\frac{2\varepsilon}{q N}\left(V_{bi} - V - \frac{k_{B} T}{q}\right)} \)  (5)

in this case the Mott–Schottky equation is

  \( \frac{1}{C^{2}} = \frac{2}{A^{2} q \varepsilon N}\left(V_{bi} - V - \frac{k_{B} T}{q}\right) \)  (6)

When the interfacial barrier is of the order of the thermal voltage \(k_{B}T/q\), special care has to be taken to interpret the capacitance measurement. In fact at these small voltages the capacitance makes a peak that can be used for the determination of the built-in voltage. The Mott–Schottky analysis can more generally resolve a variable doping profile in the semiconductor as follows

  \( N(w) = \frac{2}{q \varepsilon A^{2}} \left( -\frac{\mathrm{d}(1/C^{2})}{\mathrm{d}V} \right)^{-1} \)  (7)

The derivative gives the doping at the edge of the depletion region, \(N(w)\). This method only provides a spatial resolution of the order of a Debye length. In systems where more than one process gives a substantial kinetic response, it is necessary to adopt electrochemical impedance spectroscopy, which resolves the different capacitances in the system. For example, in the presence of a surface state at the semiconductor/electrolyte interface, the spectra show two arcs, one at low frequency and another one at high frequency. The depletion capacitance leading to the Mott–Schottky plot is situated in the high frequency arc, as the depletion capacitance is a dielectric capacitance. On the other hand, the low frequency feature corresponds to the chemical capacitance of the surface states.
The surface state charging produces a plateau as indicated in Fig. 1d. Similarly, defect levels in the gap affect the changes of capacitance and conductance. Another widely used method to scan deep levels in Schottky barriers is termed admittance spectroscopy and consists of measuring the capacitance at a fixed frequency while varying the temperature. The surface photovoltage technique or potentiostatically induced Burstein–Moss shifts can be used to determine the position of the band edges.
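As a worked illustration of Mott–Schottky analysis, consider hypothetical (not measured) values: an electrode area \(A = 1\ \mathrm{cm^2} = 10^{-4}\ \mathrm{m^2}\), a permittivity \(\varepsilon = 10\,\varepsilon_0 \approx 8.85\times10^{-11}\ \mathrm{F/m}\), and a measured slope \(|\mathrm{d}(1/C^2)/\mathrm{d}V| = 10^{15}\ \mathrm{F^{-2}\,V^{-1}}\). Equation (7) then gives

\( N = \frac{2}{q \varepsilon A^{2} \left|\mathrm{d}(1/C^{2})/\mathrm{d}V\right|} = \frac{2}{(1.602\times10^{-19})(8.85\times10^{-11})(10^{-8})(10^{15})} \approx 1.4\times10^{22}\ \mathrm{m^{-3}} = 1.4\times10^{16}\ \mathrm{cm^{-3}} \)

a doping density typical of a lightly doped semiconductor electrode.

References Semiconductors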
Mott–Schottky plot
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
872
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
54,288,469
https://en.wikipedia.org/wiki/Template-guided%20self-assembly
Template-guided self-assembly is a versatile fabrication process that can arrange various micrometer- to nanometer-sized particles into a lithographically created template with defined patterns. The process consists of the following four steps. Create Template The "template" can be created by either photolithography or e-beam lithography to define binding sites for various building blocks. The binding sites should reflect the footprint of the building blocks or clusters to be bound. Surface Treatment After film development, the created pattern is treated with charged polymers in order to “stick” the particles. Taking poly-lysine as an example: poly-lysine covers the negatively charged glass surface and reverses the surface charge to positive; it can thus non-specifically bind negatively charged metallic nanoparticles. Particle Assembly To carry out particle assembly, the treated pattern is submerged in a small amount of an aqueous solution of particles. A few approaches can be used to improve the binding efficiency. One of them is to use the capillary force at the edge of the aqueous droplet to “push” the particles into the binding sites. If assembling multiple types of particles, the particles should be assembled in order of decreasing size. For example, if assembling both 60 nm gold nanoparticles and 40 nm silver nanoparticles, the 60 nm gold nanoparticles should be applied first, because they are too big to enter binding sites tailored for the 40 nm particles. Rational design of the binding sequence as well as the binding-site sizes can minimize binding errors. Remove Template After binding of all building blocks, the template can be removed either by dissolving it in an organic solvent or by stripping it off with Scotch tape. References Microtechnology
Template-guided self-assembly
[ "Materials_science", "Engineering" ]
360
[ "Materials science", "Microtechnology" ]
54,288,480
https://en.wikipedia.org/wiki/STANAG%204427%20on%20CM
STANAG 4427 on Configuration Management in System Life Cycle Management is the Standardization Agreement (STANAG) of NATO nations on how to do configuration management (CM) on defense systems. The STANAG, and its supporting NATO publications, provides guidance on managing the configuration of products and services. It is unique in its full life cycle perspective, requiring a Life Cycle CM Plan, and in its approach to contracting for CM, using an ISO standard as the base and building up additional requirements (as opposed to the classical tailoring-down). History STANAG 4427 is NATO's agreement on how to do configuration management on defense systems. Edition 1 was originally promulgated in 1997 and updated with Edition 2 in 2007. The first iteration of the Standardization Agreement was entitled Introduction of Allied Configuration Management Publications (ACMPs), and it called on ratifying nations to use seven NATO publications (ACMP 1-7) as the agreed-upon contractual clauses for configuration management. In 2010, NATO undertook to review and revise the STANAGs and ACMPs with two major assignments: make the NATO guidance useful and extend the guidance through the full project life cycle. This work resulted in the promulgation of STANAG 4427 Edition 3, Configuration Management in System Life Cycle Management, in 2014. As of 2017, it has been ratified by 19 nations. Overview With Edition 3, NATO published three new ACMPs: ACMP-2000, Policy on Configuration Management; ACMP-2009, Guidance on Configuration Management; and ACMP-2100, Configuration Management Contractual Requirements. This trio of publications uses a civil standard as the platform (ISO 10007), requires the acquirer to prepare and maintain a Life Cycle CM Plan for the system, to use a combination of governance and insight that is required to achieve the specific system objectives, and to build up contractual requirements based on defined needs, rather than boilerplates. NATO publications covered by STANAG 4427 Edition 3:
ACMP-2000 Ed. A Ver. 2 – Policy on Configuration Management (promulgated)
ACMP-2009 Ed. A Ver. 2 – Guidance on Configuration Management (promulgated)
ACMP-2100 Ed. A Ver. 2 – Configuration Management Contractual Requirements
ACMP-2009-SRD-10 Ed. A Ver. 1 – NATO CM Training Package (promulgated)
ACMP-2009-SRD-40 Ed. A Ver. 1 – Predefined Levels of CM Requirement Build-Up
ACMP-2009-SRD-41 Ed. A Ver. 2 – Examples of CM Plan Requirements
ACMP-2009-SRD-51 Ed. A Ver. 1 – NCI Agency CM Tools (promulgated)
SRD-2009-49 Ed. A Ver. 1 – NATO-UU Configuration Management Contract Scoping Tool
References Copies of NATO Configuration Management publications are available, for free, at the NATO Standardization Office web sites below, or at this site: NATO STANDARDIZATION OFFICE http://nso.nato.int/nso/nsdd/stanagdetails.html?idCover=8517&LA=EN Configuration management Defense Standardization Program Journal, October/December 2011, NATO Revises Configuration Management Guidelines http://www.dsp.dla.mil/Portals/26/Documents/Publications/Journal/111001-DSPJ.pdf Military standardization
STANAG 4427 on CM
[ "Engineering" ]
731
[ "Systems engineering", "Configuration management" ]
54,288,689
https://en.wikipedia.org/wiki/NGC%207098
NGC 7098 is a double-barred spiral galaxy located about 95 million light-years away from Earth in the constellation of Octans. NGC 7098 has an estimated diameter of 152,400 light-years. NGC 7098 was discovered by astronomer John Herschel on September 22, 1835. NGC 7098 has a very prominent bar that is shaped like a broad oval with very prominent, nearly straight ansae. Surrounding the bar, an inner ring made of four tightly wrapped spiral arms is found. Located outside of the inner ring, a well-defined outer ring surrounding the inner region appears to have formed due to the wrapping of two spiral arms. Both rings appear to be affected by new star formation. However, there is no star formation in the core of NGC 7098, as shown by the absence of dust lanes. See also NGC 7013 NGC 7020 References External links SIMBAD NGC 70-- Project Ring galaxies Barred spiral galaxies Octans 7098 67266 Astronomical objects discovered in 1835
NGC 7098
[ "Astronomy" ]
208
[ "Octans", "Constellations" ]
54,290,289
https://en.wikipedia.org/wiki/Aporpium
Aporpium is a genus of fungi in the order Auriculariales. Basidiocarps (fruit bodies) are formed on dead wood and have a poroid hymenium. Species were often formerly referred to the genera Elmerina or Protomerulius, but molecular research, based on cladistic analysis of DNA sequences, has shown that Aporpium is a distinct, mainly north temperate genus. References External links Auriculariales Agaricomycetes genera
Aporpium
[ "Biology" ]
102
[ "Fungus stubs", "Fungi" ]
54,290,686
https://en.wikipedia.org/wiki/ARC%20Centre%20of%20Excellence%20in%20Future%20Low-Energy%20Electronics%20Technologies
The ARC Centre of Excellence in Future Low-Energy Electronics Technologies (or FLEET) is a collaboration of physicists, electrical engineers, chemists and material scientists from seven Australian universities developing ultra-low energy electronics aimed at reducing energy use in information technology (IT). The Centre was funded in the 2017 ARC funding round. Aims FLEET aims to develop a new generation of ultra-low resistance electronic devices, capitalising on Australian research in atomically thin materials, topological materials, exciton superfluids and nanofabrication. Programmes FLEET is pursuing three broad research themes to develop devices in which electrical current can flow without resistance: Topological insulators: a relatively new class of materials, recognised by the 2016 Nobel Prize in Physics, topological insulators conduct electricity only along their edges, and strictly in one direction. This one-way path conducts electricity without loss of energy due to resistance. Approaches being used within FLEET to study topological materials include magnetic topological insulators and the quantum anomalous Hall effect (QAHE), topological Dirac semimetals (including oxide ‘antiperovskites’) and artificial topological systems (artificial graphene and 2D topological insulators). Exciton superfluids: a quantum state known to achieve electrical current flow with minimal wasted dissipation of energy. FLEET aims to develop superfluid devices that operate at room temperature, without the need for expensive, energy-intensive cooling. Approaches being used within FLEET include exciton–polariton bosonic condensation in atomically thin materials, topologically-protected exciton–polariton flow, and exciton superfluids in twin-layer materials. Light-transformed materials: a material can be temporarily forced into a new state by applying an intense light beam. FLEET aims to study the fundamental physics behind this temporary state change. Approaches being pursued in FLEET include optically-induced Floquet topological states (topological states that change with time), nonequilibrium superfluidity and creation of topological states in multi-dimensional extensions of the kicked quantum rotor. These approaches are enabled by the following two technologies: Atomically thin materials: FLEET seeks to find new ways of controlling the properties of two-dimensional materials via synthesis, substrates, and tuning electric and magnetic ordering. Nanodevice fabrication: FLEET aims to work on new techniques to integrate novel atomically thin materials into high-quality device structures with suitable performance. Participants FLEET is an Australian initiative, headquartered at Monash University, and in conjunction with the Australian National University, the University of New South Wales, the University of Queensland, RMIT University, the University of Wollongong and Swinburne University of Technology, complemented by a group of Australian and international partners. It is funded by the Australian Research Council and by the member universities. FLEET's Director is Michael Fuhrer, who is an ARC Laureate Fellow in the School of Physics and Astronomy at Monash University studying two-dimensional materials (of which graphene is the most well known example), and topological insulators. Deputy Director is Alexander Hamilton at the University of New South Wales.
FLEET partners include Australian Nuclear Science and Technology Organisation, the Australian Synchrotron, California Institute of Technology, Columbia University in the City of New York, Johannes Gutenberg University at Mainz, University of Maryland Joint Quantum Institute & National Institute of Standards and Technology, Max Planck Institute of Quantum Optics, the National University of Singapore, the University of Colorado Boulder, University of Maryland Center for Nanophysics and Advanced Materials, the University of Texas at Austin, Tsinghua University at Beijing, and the University of Würzburg in Germany. References External links FLEET Official Website Research organisations in Australia Physics organizations Electrical engineering organizations Chemistry organizations Materials science organizations
ARC Centre of Excellence in Future Low-Energy Electronics Technologies
[ "Chemistry", "Materials_science", "Engineering" ]
768
[ "Materials science", "Materials science organizations", "nan", "Electrical engineering organizations", "Electrical engineering" ]
54,291,216
https://en.wikipedia.org/wiki/Micropound
The micropound (abbreviation μlb) is a small unit of avoirdupois weight and mass in the US and imperial systems of measurement, equal to one-millionth (10⁻⁶) of a pound. It is equal to exactly 4.5359237×10⁻⁷ kg, or about 453.6 μg. See also English, US, & imperial units of measurement Avoirdupois pound
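Since the avoirdupois pound is defined as exactly 0.45359237 kg, the conversion is direct:

\( 1\ \mu\mathrm{lb} = 10^{-6}\ \mathrm{lb} \times 0.45359237\ \mathrm{kg/lb} = 4.5359237\times10^{-7}\ \mathrm{kg} \approx 453.6\ \mu\mathrm{g} \)

References Citations Bibliography Units of mass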
Micropound
[ "Physics", "Mathematics" ]
82
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
54,291,810
https://en.wikipedia.org/wiki/Associated%20Signature%20Containers
Associated Signature Containers (ASiC) specifies the use of container structures to bind together one or more signed objects with either advanced electronic signatures or timestamp tokens into one single digital container. Regulatory context Under the eIDAS-regulation, an associated signature container (ASiC) for eIDAS is a data container that is used to hold a group of file objects and digital signatures and/or time assertions that are associated with those objects. This data is stored in the ASiC in a ZIP format. European Commission Implementing Decision 2015/1506 of 8 September 2015 laid down specifications relating to formats of advanced electronic signatures and advanced seals to be recognised by public sector bodies pursuant to Articles 27 and 37 of the eIDAS-regulation. EU Member States requiring an advanced electronic signature or an advanced electronic signature based on a qualified certificate shall recognise XML, CMS or PDF advanced electronic signatures at conformance level B, T or LT, or signatures using an associated signature container, where those signatures comply with the following technical specifications: XAdES Baseline Profile - ETSI TS 103171 v.2.1.1. CAdES Baseline Profile - ETSI TS 103173 v.2.2.1. PAdES Baseline Profile - ETSI TS 103172 v.2.2.2. Associated Signature Container Baseline Profile - ETSI TS 103174 v.2.2.1 The technical specification of ASiCs has been updated and standardized since April 2016 by the European Telecommunications Standards Institute in the standard Associated Signature Containers (ASiC) (ETSI EN 319 162-1 V1.1.1 (2016-04)), but this updated standard is not required by the European Commission Implementing Decision. Structure The internal structure of an ASiC includes two folders: A root folder that stores all the container's content, which might include folders that reflect the structure of that content. A “META-INF” folder that resides in the root folder and contains files that hold metadata about the content, including its associated signature and/or time assertion files. Such an electronic signature file would contain a single CAdES object or one or more XAdES signatures. A time assertion file would contain either a single timestamp token conforming to IETF RFC 3161 or a single evidence record conforming to IETF RFC 4998 or IETF RFC 6283. How ASiC is used One of the purposes of an electronic signature is to secure the data attached to it from modification. This can be done by creating a dataset that combines the signature with its signed data, or by storing the detached signature in a separate resource and then utilizing an external process to re-associate the signature with its data. It can be advantageous to use detached signatures because this prevents unauthorized modifications to the original data objects. However, by doing this, there is the risk that the detached signature will become separated from its associated data. If this were to happen, the association would be lost and the data would therefore become inaccessible. One of the most widespread deployments of the ASiC standard is the Estonian digital signature system, which uses multiplatform (Windows, Linux, macOS) software called DigiDoc. Types of ASiC containers Using the correct type of ASiC container for the job at hand is important: ASiC Simple (ASiC-S) With this container, a single file object is associated with a signature or time assertion file.
A “mimetype” file that specifies the media type might also be included in this container. When a mimetype file is included, it is required to be the first file in the ASiC container. This container type allows additional signatures to be added in the future to sign the stored file objects. When long-term time-stamp tokens are used, ASiC Archive Manifest files are used to protect the long-term time-stamp tokens from tampering. ASiC Extended (ASiC-E) This type of container can hold one or more signature or time assertion files. ASiC-E with XAdES deals with signature files, while ASiC-E with CAdES deals with time assertions. The files within these ASiC containers apply to their own file object sets. Each file object might have additional metadata or information associated with it that can also be protected by the signature. An ASiC-E container could be designed to prevent this modification or to allow its inclusion without causing damage to previous signatures. Both of these ASiC containers are capable of maintaining long-term availability and integrity when storing XAdES or CAdES signatures through the use of time-stamp tokens or evidence record manifest files contained within the containers. ASiC containers must comply with the ZIP specification and the limitations that apply to ZIP. ASiC-S time assertion additional container This container operates under the baseline requirements of the ASiC Simple (ASiC-S) container but adds further time assertion requirements. Additional elements may be present within its META-INF folder, and it requires the use of the “SignedData” field to include certificate and revocation information. ASiC-E CAdES additional container This container has the same baselines as an ASiC-E container, but with additional restrictions. ASiC-E time assertion additional container This container complies with the ASiC-E baseline requirements along with additional requirements and restrictions. Reduced risk of loss of electronic signature The use of ASiC reduces the risk of an electronic signature becoming separated from its data by combining the signature and its signed data in a container. With both elements secured within an ASiC, it is easier to distribute a signature and guarantee that the correct signature and its metadata are being used during validation. This process can also be used when associating time assertions, including evidence records or time-stamp tokens, with their associated data.
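As an illustration, the internal layout of a simple ASiC-S container could look as follows (the file names are hypothetical examples, not mandated by the description above, apart from mimetype and the META-INF folder):

  example.asics (ZIP archive)
  ├── mimetype           (optional; when present, must be the first entry in the archive)
  ├── document.pdf       (the signed file object, stored in the root folder)
  └── META-INF/
      └── signature.p7s  (a CAdES signature, or alternatively an RFC 3161 timestamp token)

References External links DSS : A free and open-source Java library for creating/manipulating PAdES/CAdES/XAdES/ASiC Signatures DSS : GitHub repository The AdES toolset Authentication methods Computer law Cryptography standards ETSI Regulation Signature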
Associated Signature Containers
[ "Technology" ]
1,264
[ "Computer law", "Computing and society" ]
54,292,224
https://en.wikipedia.org/wiki/Sustainable%20return%20on%20investment
Sustainable return on investment (S-ROI) is a methodology for identifying and quantifying environmental, societal, and economic impacts of investment in projects and initiatives (e.g., factories, new product development, civil infrastructure, efficiency and recycling programs, etc.). The goal of S-ROI is to make risk-opportunity assessments more robust by providing new visibility into intangible internal costs and benefits, and externalities - social, economic, and environmental effects that are typically not considered in traditional cash-oriented project planning. Because it includes environmental impacts, S-ROI is distinct from the similarly named methodology of Social Return on Investment (SROI). Overview and Cost Types A fundamental principle of S-ROI is the creation of monetized models of non-cash benefits and costs. Benefits might include emissions avoided, resources saved, or improvements in health and productivity, while costs could include adverse effects on public health, risk associated with rising costs for resources or disposal, or impacts of a project on nearby farms, fisheries, or tourism sites. Quantifying these factors documents intangible values of an investment, and allows them to be incorporated into the decision-making process alongside traditional financial ROI metrics, providing additional insight, confidence, and transparency. S-ROI findings can also be used in support of requests for public or private funding of projects. Like its predecessor methodology, Total Cost Assessment (TCA), S-ROI considers five different cost types. The first two, Direct and Indirect Costs, are the same as in traditional ROI, and include benefits, such as revenue increases. The third cost type, Contingent Liabilities, includes risks (such as fines, penalties, clean-up, etc.) which are not certain, but are easy to see in a financial statement should they occur. The last two cost types, Internal and External Intangibles, are not easy to see in the financial statement, but represent real costs nonetheless. Internal costs are costs to the company, such as loss of brand value, or poor productivity stemming from low morale. External costs, also known as negative externalities, are costs to society, such as environmental degradation and effects on housing prices. In all categories, S-ROI also considers benefits, a category that was ignored in TCA. History and Evolution from Total Cost Assessment (TCA) Sustainability Return on Investment (S-ROI) grew out of the Total Cost Assessment (TCA) methodology, codified by the American Institute of Chemical Engineers (AIChE). TCA was first considered by General Electric in the late 1980s for better selection and justification of waste-management investment decisions. The US and New Jersey Environmental Protection Agencies then commissioned the Tellus Institute to investigate and apply the methodology to several projects in the early 1990s. While this work showed promise, members of the Center for Waste Reduction Technologies at the AIChE felt the method needed a better-defined protocol. A team of 13 industry experts worked with consultants from Arthur D. Little to develop a process for conducting a Total Cost Assessment and published a workbook describing the method in 2000. The initial methodology was designed to include direct and indirect environmental and safety costs into a corporate assessment of a decision.
The methodology was devised by industry collaborators for use in industry and had a vetting period, during which the Chief Financial Officers (CFOs) of Fortune 500 companies in the chemical industry were brought in to ensure the financial calculations met their stringent requirements. Although the initial methodology had a narrow scope and focus, practitioners have found that the basic method can be applied beyond environmental and safety costs to include health risks, societal costs, and benefits in all categories. Several practitioners and government and industry partners continued the development of the methodology to include the benefits and the multi-stakeholder perspective that are included in the S-ROI concept. Process and Applications An initial data-gathering phase typically involves concurrent dialog with stakeholders inside and outside the project-planning organization, or proxies for these groups, to identify types of impacts from the project under consideration. Examples of stakeholders for a factory project could include employees, suppliers, area residents, emergency responders, and local government. It is important in this process that stakeholders hear what other stakeholders and the decision-maker are saying, to foster mutual understanding and create otherwise-impossible arrangements that satisfy the needs of the most-critical groups (i.e., to optimize the decision). Stakeholder inputs are used to quantify uncertainties and evaluate benefits and costs under different scenarios. These findings can then be incorporated into a probabilistic modeling process to systematically identify possible events that could affect an investment's payback, assess the consequences, and identify opportunities for optimizing overall outcomes. A Net Present Value (NPV) assessment can be made for each stakeholder, using Monte Carlo analysis to generate best-case, worst-case, and most-likely assessments of an investment's profitability. One example would be the possible replacement of a dangerous chemical with a more benign alternative. The S-ROI process can evaluate possible effects of industrial accidents, including the risk of fines, lawsuits, and damage to brand reputation and employee relations. These types of analyses can show whether preventive measures like extra training or redundant safety systems create an unnecessary burden or provide payback through risk reduction. Other examples are S-ROI of waste-to-energy facilities and implementation of a system for recycling waste generated during the production of concrete for construction projects. In the latter case, the S-ROI analysis assessed payback by analyzing up-front and operational costs, cost savings from reduced water usage and waste disposal, and potential scalability of the program. The S-ROI method can also be used to explore broader issues. Dow Chemical has used S-ROI, and its precursor TCA, to assess its 10-year sustainability goals over the last three cycles. The assessment helps the company justify what might seem to be low-return policies and select and optimize goals for the best return to all stakeholders.
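For reference, the quantity at the core of such an assessment is the standard discounted sum of cash flows; as a worked illustration with hypothetical figures (an up-front cost of 100, net benefits of 50 per year for three years, and a 10% discount rate):

\( \mathrm{NPV} = \sum_{t=0}^{T}\frac{CF_t}{(1+r)^t} = -100 + \frac{50}{1.1} + \frac{50}{1.1^2} + \frac{50}{1.1^3} \approx 24.3 \)

In an S-ROI analysis, the cash flows for each stakeholder are drawn from probability distributions rather than fixed, and the Monte Carlo procedure repeats this calculation many times to produce the best-case, worst-case, and most-likely NPV estimates.

References Financial ratios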
Sustainable return on investment
[ "Mathematics" ]
1,212
[ "Financial ratios", "Quantity", "Metrics" ]
54,292,633
https://en.wikipedia.org/wiki/Formamide-based%20prebiotic%20chemistry
Formamide-based prebiotic chemistry is a reconstruction of the beginnings of life on Earth, assuming that formamide could accumulate in sufficiently high amounts to serve as the building block and reaction medium for the synthesis of the first biogenic molecules. Formamide (NH2CHO), the simplest naturally occurring amide, contains all the elements (hydrogen, carbon, oxygen, and nitrogen) which are required for the synthesis of biomolecules, and is a ubiquitous molecule in the universe. Formamide has been detected in galactic centers, star-forming regions of dense molecular clouds, high-mass young stellar objects, the interstellar medium, comets, and satellites. In particular, dense clouds containing formamide, with sizes on the order of kiloparsecs, have been observed in the vicinity of the Solar System. Formamide forms under a variety of conditions, corresponding to both terrestrial environments and interstellar media: e.g., on high-energy particle irradiation of binary mixtures of ammonia (NH3) and carbon monoxide (CO), or from the reaction between formic acid (HCOOH) and NH3. It has been suggested that in hydrothermal pores formamide may accumulate in sufficiently high concentrations to enable synthesis of biogenic molecules. Ab initio molecular dynamics simulations suggest that formamide could be a key intermediate of the Miller–Urey experiment as well. The combinatorial power of carbon is manifested in the composition of the molecular populations detected in circum- and interstellar media (see the Astrochemistry.net web site). The number and the complexity of carbon-containing molecules are significantly higher than those of inorganic compounds, presumably all over the universe. One of the most abundant carbon-containing three-atom molecules observed in space is hydrogen cyanide (HCN). The chemistry of HCN has thus attracted attention in origin of life studies since the earliest times, and the laboratory synthesis of adenine from HCN under presumptive prebiotic conditions was reported as early as 1961. The intrinsic limit of HCN stems from its high reactivity, which leads, in turn, to instability and the difficulty associated with its concentration and accumulation in unreacted form. The “Warm Little Pond” in which life is supposed to have started, as imagined by Charles Darwin and re-elaborated by Alexander Oparin, most likely had to reach sufficiently high concentrations to start creating the next levels of complexity. Hence the necessity of a derivative of HCN that is sufficiently stable to survive for time periods extended enough to allow its concentration in the actual physico-chemical settings, but that is sufficiently reactive to originate new compounds in prebiotically plausible environments. Ideally, this derivative should be able to undergo reactions in various directions, without prohibitively high energy barriers, thus allowing the production of different classes of potentially prebiotic compounds. Formamide fulfils all these requirements and, due to its significantly higher boiling point (210 °C), enables chemical synthesis in a much broader temperature range than water. Prebiotic chemistry Current living forms on Earth are essentially composed of four types of molecular entities: (i) nucleic acids, (ii) proteins, (iii) carbohydrates, and (iv) lipids. Nucleic acids (DNA and RNA) embody and express the genetic information and, together, constitute the genome and the apparatus for its expression (the genotype).
Proteins, carbohydrates, and lipids form the structures, which harness and handle energy from the environment for organizing matter according to the instructions specified by the genotype, aiming at its conservation and transmission. The ensemble of proteins, carbohydrates, lipids and nucleic acids constitutes the phenotype. Life is thus made of the interaction of metabolism and genetics, of the genotype with the phenotype. Both are built around the chemistry of the most common elements of the universe (hydrogen, oxygen, nitrogen, and carbon), important although ancillary roles being played by phosphorus and sulphur, and by other elements. Given the overwhelming variety of chemically conceivable molecules, the fact that in biological systems we observe only a small subset of organic molecules has raised questions about how, and through which reaction pathways, the synthesis of pre-biological molecules could plausibly have occurred on the primordial Earth. These are the main questions addressed by prebiotic chemistry research. Precursor of biogenic molecules Figure 1 summarizes the basic chemistry of formamide and its chemical connection with HCN and ammonium formate (NH4+HCOO−), considering selected examples of preparative and degradative reactions. The synthesis of purine from formamide was first reported in 1980. A series of studies building on this observation was started 20 years later: the synthesis of a large panel of prebiotically relevant compounds (including purine, adenine, cytosine, and 4(3H)pyrimidinone) in good yields was reported in 2001. These products were obtained by heating formamide in the presence of simple catalysts such as calcium carbonate (CaCO3), silica (SiO2), or alumina (Al2O3). In addition to nucleobases, sugars, carboxylic acids, amino acids, as well as heterogeneous compounds of various classes (including urea and carbodiimide) were also synthesized. The catalysts studied include, in addition to those mentioned, titanium oxides, clays, cosmic dust analogues, phosphates, iron sulphide minerals, zirconium minerals, borate minerals, and numerous materials of meteoritic origin encompassing iron, stony-iron, chondrite, and achondrite meteorites. Various energy sources, including thermal energy, UV radiation, irradiation with high-energy (terawatt) laser pulses, or slow protons were tested. Mimics of different formamide-based prebiotic scenarios have been reconstructed and analyzed, including space-wise solar wind irradiation of meteorites, dynamic chemical gardens, and meteorites in aqueous environments. It has been suggested that the stepwise decrease of the temperature of the prebiotic environment could induce a sequence of strongly non-equilibrium chemical events that led to the emergence of more and more complex species from formamide on the early Earth. For each studied combination of catalyst/energy source/environment, formamide condensed into a variety of different prebiotically relevant compounds, each combination giving rise to a specific set of relatively complex molecules, usually encompassing several nucleobases, amino acids, and carboxylic acids. The highest level of complexity was attained for the formamide/meteorite system, using proton irradiation as the energy source, where the one-pot synthesis of four nucleosides (uridine, cytidine, adenosine, thymidine) was observed. So far, no other one-carbon-atom compound has shown the versatility of products that can be formed from formamide under plausible prebiotic conditions in a one-pot chemistry (see Figure 2).
In addition to its dual function of substrate and solvent in one-pot syntheses affording prebiotic compounds as complex as nucleosides and long aliphatic chains, it has been observed that formamide plays a role in the generation of molecules which are closer to the biological domain. In the presence of a phosphate source (e.g., phosphate minerals), formamide promotes the phosphorylation of nucleosides, leading to the formation of nucleotides, and strongly stimulates the non-enzymatic polymerization of 3’,5’ cyclic nucleotides, leading to the abiotic synthesis of RNA oligomers. This is the reason why formamide is considered a plausible medium for prebiotic phosphorylation reactions also in the “discontinuous synthesis” scenario of the origin of life. As well as phosphorylation, formamide has been shown to be a competent medium for the production of amino acid derivatives from their simple aldehyde and nitrile precursors, demonstrating that water is not the only solvent in which this process can occur. Most notably, formamide provides a medium for the prebiotic synthesis of cysteine derivatives, not previously considered plausible in strictly aqueous prebiotic environments. References Prebiotic chemistry
Formamide-based prebiotic chemistry
[ "Chemistry", "Biology" ]
1,753
[ "Biological hypotheses", "Origin of life", "Prebiotic chemistry" ]
54,292,928
https://en.wikipedia.org/wiki/Sentinus
Sentinus is an educational charity based in Lisburn, Northern Ireland, that provides educational programs for young people interested in science, technology, engineering and mathematics (STEM). History Northern Ireland produces around 2,000 qualified IT workers each year; there are around 16,000 IT jobs in the Northern Ireland economy. Function It works with EngineeringUK and the Council for the Curriculum, Examinations & Assessment (CCEA), and with primary and secondary schools in Northern Ireland. It runs summer placements and IT workshops for those of sixth form age (16–18), and offers Robotics Roadshows for primary school children. Sentinus Young Innovators Sentinus hosts the annual Big Bang Northern Ireland Fair, which incorporates Sentinus Young Innovators. This is a one-day science and engineering project exhibition for post-primary students. It is one of the largest such events in the United Kingdom. In 2019 over 3,000 students participated from 130 schools across both Northern Ireland and the Republic of Ireland. The competition is affiliated with the International Science and Engineering Fair (ISEF) and the Broadcom MASTERS program. The overall winner represents Northern Ireland at the following year's ISEF. Past Overall Winners See also Discover Science & Engineering, equivalent in the Republic of Ireland Science Week Ireland The Big Bang Fair Young Scientist and Technology Exhibition References External links Sentinus Computer science education in the United Kingdom Educational charities based in the United Kingdom Educational organisations based in Northern Ireland Engineering education in the United Kingdom Engineering organizations Learning programs in Europe Mathematics education in the United Kingdom Science and technology in Northern Ireland Science events in the United Kingdom
Sentinus
[ "Engineering" ]
321
[ "nan" ]
54,294,051
https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20de%20Chimie%20Industrielle%20%28American%20Section%29
The Société de Chimie Industrielle (American Section) is an independent learned society inspired by the creation of the Société de Chimie Industrielle in Paris in 1917. The American Section was formed on January 18, 1918, and held its first meeting on April 4, 1918. The Société de Chimie Industrielle (American Section) hosts speakers, grants scholarships, and gives awards. It has given the International Palladium Medal roughly every second year since 1961, and helps to award the Othmer Gold Medal and the Winthrop-Sears Medal every year. The Société also hosts monthly talks, and presents scholarships to writers, educators, and historians of science. History One of the first societies for chemists was the Society of Chemical Industry, founded in London in 1881. This inspired a number of other groups, including the Société de Chimie Industrielle in Paris, France. The French Société was modeled on the British organization in 1917. A number of those active in forming the French Société were elected to its first set of officers, which included industrialist Paul Kestner as president, vice-presidents Albin Haller and Henry Louis Le Châtelier, and Jean Gérard as general secretary. Creation of the French Société in turn inspired creation of a related American association in New York in 1918. This was part of an effort to rebuild international connections between individuals and institutions that had been disrupted during the First World War. René Laurent Engel encouraged the re-establishment of ties between chemists in the two countries in his position as the scientific representative in a French Mission to the United States. Victor Grignard of the University of Nancy also encouraged the creation of an American organization. A circular appealed to the Chemists and Manufacturers of America to "extend to our French fellow chemists and manufacturers our moral and financial support and the right hand of good fellowship." The American section of the Société de Chimie Industrielle was formed on January 18, 1918, following the presentation of the Perkin Medal by the Society of Chemical Industry (American Section) at The Chemists' Club in New York. Engel, as secretary of the parent organization, addressed the meeting. Officers of the newly created American section of the Société de Chimie Industrielle included Leo Baekeland as president, Jerome Alexander as vice-president, Charles Avery Doremus as secretary, and George Frederick Kunz as treasurer. The first official meeting of the American section of the Société de Chimie Industrielle was held on April 4, 1918 at The Chemists' Club in New York. William H. Nichols, president of the American Chemical Society, welcomed the new organization. Frederick J. LeMaistre reported on "Conditions in the French chemical industries during 1916". Governance The Société de Chimie Industrielle (American Section) is now an independent organization. It has been registered as a 501(c)(3) nonprofit organization since 1952. The American Section is directed by a board of officers including a president; the president of the Société de Chimie Industrielle (American Section) is James M. Weatherall. Activities Awards The International Palladium Medal was instituted in 1958 and first awarded in 1961. The first recipient was Ernest-John Solvay. The medal has generally been given every two years.
The Société has also been involved in nominating and choosing the recipients of the Othmer Gold Medal and the Winthrop-Sears Medal, which are given yearly. Events The Société supports a program of monthly speakers featuring CEOs, government leaders, and scientists. Scholarships The Société funds scholarships for writers, educators, and historians who place chemistry in historical perspective and explore the influence of chemistry on everyday life. References External links 1918 establishments in the United States Scientific societies based in the United States Chemical engineering organizations
Société de Chimie Industrielle (American Section)
[ "Chemistry", "Engineering" ]
784
[ "Chemical engineering", "Chemical engineering organizations" ]
54,294,875
https://en.wikipedia.org/wiki/FaceApp
FaceApp is a photo and video editing application for iOS and Android developed by FaceApp Technology Limited, a company based in Cyprus. The app generates highly realistic transformations of human faces in photographs by using neural networks based on artificial intelligence. The app can transform a face to make it smile, look younger, look older, or change gender. Features FaceApp was launched on iOS in January 2017 and on Android in February 2017. The app offers multiple options for manipulating an uploaded photo, such as adding impressions, make-up, smiles, hair colors, hairstyles, glasses, age changes or beards. Filters, lens blur and backgrounds, along with overlays, tattoos, and vignettes, are also part of the app. The gender change transformations of FaceApp have attracted particular interest from the LGBT and transgender communities, due to their ability to realistically simulate the appearance of a person as the opposite gender. Criticism In 2019, FaceApp attracted criticism in both the press and on social media over the privacy of user data. Among the concerns raised were allegations that FaceApp stored users' photos on its servers, and that its terms of use allowed it to use users' likenesses and photos for commercial purposes. In response to questions, the company's founder, Yaroslav Goncharov, stated that user data and uploaded images were not being transferred to Russia but were instead processed on servers running in the Google Cloud Platform and Amazon Web Services. According to Goncharov, user photos were only stored on servers to save bandwidth when applying multiple filters, and were deleted shortly after being uploaded. US senator Chuck Schumer expressed "serious concerns regarding both the protection of the data that is being aggregated as well as whether users are aware of who may have access to it" and called for an FBI investigation into the app. A "hot" transformation, available in the app in 2017 and intended to make users appear more physically attractive, was accused of racism for lightening the skin color of black people and making them look more European. The feature was briefly renamed "spark" before being removed. Founder and chief executive Yaroslav Goncharov apologised, describing the situation as "an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour" and announcing that a "complete fix" was being worked on. In August of the same year, FaceApp again faced criticism when it featured "ethnicity filters" depicting "White", "Black", "Asian", and "Indian" appearances. The filters were immediately removed from the app. See also Face of the Future Deepfake References External links Official website Android (operating system) software 2017 software IOS software Photo software Proprietary cross-platform software Social media Deep learning software applications Deepfakes
FaceApp
[ "Technology" ]
566
[ "Computing and society", "Social media" ]
54,295,335
https://en.wikipedia.org/wiki/Magnadur
Magnadur is a sintered barium ferrite, specifically BaFe12O19 in an anisotropic form. It is used for making permanent magnets. The material was invented by Mullard and was used initially particularly for focussing rings on cathode-ray tubes. Magnadur magnets retain their magnetism well, and are often used in education. Magnadur can also be used in DC motors. Physical characteristics Remanence: 0.9 T Coercivity: 110 kA/m Maximum energy product: 20 kJ/m³ at 86 kA/m References Ferromagnetic materials
Magnadur
[ "Physics", "Chemistry" ]
123
[ "Inorganic compounds", "Ferromagnetic materials", "Inorganic compound stubs", "Materials", "Matter" ]
54,295,731
https://en.wikipedia.org/wiki/Acetalated%20dextran
Acetalated dextran is a biodegradable polymer based on dextran that has acetal-modified hydroxyl groups. After synthesis, the hydrophilic polysaccharide dextran is rendered insoluble in water, but soluble in organic solvents. This allows it to be processed in the same manner as many polyesters, like poly(lactic-co-glycolic acid), through processes like solvent evaporation and emulsion. Acetalated dextran is structurally different from acetylated dextran. History Acetalated dextran was first reported in 2008 out of the lab of Jean Fréchet at the University of California, Berkeley in the College of Chemistry. This version of acetalated dextran, often abbreviated Ac-DEX, has dextran and exceedingly low levels of acetone and methanol as degradation products. In 2012, in the laboratory of Kristy Ainslie at Ohio State University in the College of Pharmacy, the polymer synthesis was modified to release ethanol in place of methanol upon degradation. The ethanol-producing version of acetalated dextran is often abbreviated Ace-DEX. Properties During the synthesis of acetalated dextran both acyclic and cyclic acetals are formed. The acyclic acetals degrade into acetone and an alcohol, whereas cyclic acetals degrade into acetone only. The ratio of cyclic to acyclic acetals varies with reaction time, since acyclic acetals are kinetically favored and cyclic acetals are thermodynamically favored. This formation of both cyclic and acyclic acetals leads to varying degradation times, because the two acetal groups hydrolyze at different rates. Acetalated dextran's degradation time can vary from hours to a month or more at pH 7.2. Acetalated dextran is also acid-sensitive: at lower pH it degrades more rapidly, resulting in a polymer that degrades approximately two logs faster at pH 5 than at pH 7. When formulated into nanoparticles encapsulating a protein antigen, the acid sensitivity of Ac-DEX has been shown to give more efficient presentation of antigen to both MHC class I and MHC class II than non-acid-sensitive polymers like PLGA and non-degradable materials like gold nanoparticles. Applications Because of the ability of acetalated dextran to degrade more rapidly in low-pH environments like the phagolysosome of a macrophage or dendritic cell, it has been used in polymeric micro/nanoparticles. Acetalated dextran was originally developed as a vaccine carrier, but has since been used for drug delivery, tissue engineering and infectious disease vaccine delivery. Its tunable degradation rates have allowed finely tuned release of therapeutic proteins and vaccine elements. Ac-DEX has also been shown to allow proteins to be stored outside the cold chain. Nanoparticles of Ac-DEX can be made through standard methods like emulsion, spray drying and electrospray. Using sonication, inorganic nanoparticles have been embedded into Ac-DEX particles to form a composite material for cancer therapy. They have also been used as a core material for cell membrane coating. References Organic polymers
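To make the pH sensitivity described above concrete, here is a minimal first-order hydrolysis sketch in Python; the rate constants are invented for illustration, and only their roughly hundredfold ratio reflects the "two logs faster at pH 5 than pH 7" behaviour reported for this polymer:

import math

# Assumed, illustrative first-order rate constants (per hour); only their
# ~100x ratio mirrors the reported pH 5 vs pH 7 difference.
K_PH7 = 0.001
K_PH5 = 0.1

def fraction_remaining(k, t_hours):
    # First-order decay: N(t)/N0 = exp(-k * t)
    return math.exp(-k * t_hours)

print(fraction_remaining(K_PH7, 24))  # ~0.98 left after a day at neutral pH
print(fraction_remaining(K_PH5, 24))  # ~0.09 left after a day at lysosome-like pH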
Acetalated dextran
[ "Chemistry" ]
707
[ "Organic compounds", "Organic polymers" ]
70,017,886
https://en.wikipedia.org/wiki/Amuay%20tragedy
The Amuay tragedy was an explosion at the Paraguaná Refinery Complex in Punto Fijo, Venezuela. The explosion resulted in the death of 48 people and injured 151 others. Explosion On 25 August 2012 at 01:11 (05:41 GMT), an explosion caused by the ignition of leaking gas at the Amuay refinery killed 48 people, primarily National Guard troops stationed at the plant, and injured 151 others. A 10-year-old boy was among the dead. In addition to the refinery, more than 1,600 homes were damaged by the shockwave. Reactions Three days of national mourning were declared by President Hugo Chávez. He also ordered a probe into the cause of the fire. Chávez said he was creating a US$23 million fund for clean-up operations and the replacement of destroyed homes. He said that "60 new homes were ready for affected families to move into, 60 more would be finished soon, and a further 137 houses would be handed over next month." He also rejected claims that PDVSA might be responsible for the disaster. The fires were fully extinguished by 28 August 2012. Venezuelan presidential candidate Henrique Capriles Radonski criticized PDVSA management for their poor safety record and cited lack of maintenance as a cause of the accident. President Chávez, who claimed that it was too early to identify the cause, as well as minister Ramírez, said that Capriles did not "know what he's talking about". Iván Freites, the Secretary-General of the United Federation of Oil Workers, held the government responsible for the "lack of maintenance and investment" in the industry, considering it the main cause of the explosion. Freites stated that since 2011 the union of oil workers had complained about problems with "damaged equipment, lack of spare parts and other unsafe conditions". See also 2020 El Palito oil spill 2023 El Palito oil spill References External links Explosions in 2012 2012 industrial disasters 2012 in Venezuela Fires in Venezuela Industrial fires and explosions Man-made disasters in Venezuela 2010s fires in South America 2012 fires 2012 disasters in Venezuela PDVSA August 2012 events in South America
Amuay tragedy
[ "Chemistry" ]
433
[ "Industrial fires and explosions", "Explosions" ]
70,019,004
https://en.wikipedia.org/wiki/Geophilus%20monoporus
Geophilus monoporus is a species of soil centipede in the family Geophilidae found in Tiba, Japan. This species can reach 45 mm in length and has 87 pairs of legs. The species name refers to the single pore at the base of each of the ultimate legs. References monoporus Zoology Arthropods of Asia Taxa named by Yosioki Takakuwa
Geophilus monoporus
[ "Biology" ]
84
[ "Zoology" ]
70,019,055
https://en.wikipedia.org/wiki/Cebuano%20numerals
The Cebuano numbers are the system of number names used in Cebuano to express quantities and other information related to numbers. Cebuano has two number systems: the native system and the Spanish-derived system. The native system is mostly used for counting small numbers, basic measurement, and other pre-existing native concepts that deal with numbers. Meanwhile, the Spanish-derived system is mainly used for concepts that only arose post-colonially, such as counting large numbers, currency, solar time, and advanced mathematics. History Unlike other Philippine languages, the native number system of Cebuano was derived solely from the non-human forms of Proto-Austronesian numerals, instead of a combination of both human and non-human numerals as in Tagalog and Hiligaynon. The numbers were first recorded by the chronicler Antonio Pigafetta during Magellan's expedition. Types The native numbers are categorized into four types: cardinal, ordinal, distributive, and multiplicative (also referred to as "viceral" or "adverbial"). The multiples of ten are formed by attaching the circumfix "ka-ø-an" (e.g. kawaloan). Those within the 20-60 range undergo metathesis and syncope (e.g. katloan, from katuloan). Cardinal Like other Visayan languages, cardinal numbers are linked to the noun with the ligature ka. usá ka tawo a/one person kaluhaan ug usá ka bulan twenty-one months Ordinal Ordinal numbers in Cebuano are formed using the ika- prefix, except una. Distributive Distributive numbers in Cebuano are formed by attaching the tag- prefix to the numerical root. Irregular words may be formed depending on the number being attached to. Multiplicative Multiplicative (or viceral) numbers in Cebuano are formed using the ka- prefix. The prefixes "naka-" and "maka-" may also be used to specify whether the number is used in the nasugdan or pagasugdan aspect, respectively. See also Cebuano language Cebuano grammar References Cebuano language Numerals
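The regular affixation rules described above can be expressed as a small sketch. The Python below is illustrative only: the hyphens merely mark morpheme boundaries, the root inventory is a sample, and irregular forms (such as the suppletive una and the metathesized/syncopated tens) are handled with explicit exception tables rather than derived:

# Illustrative roots; forms follow the examples given in the text.
CARDINALS = {1: "usá", 2: "duhá", 3: "tuló", 4: "upát", 8: "waló"}

def ordinal(n):
    # ika- prefix, except 'una' (first), which is suppletive.
    return "una" if n == 1 else "ika-" + CARDINALS[n]

def distributive(n):
    # tag- prefix; real irregular fusions are not modeled here.
    return "tag-" + CARDINALS[n]

def multiplicative(n):
    # ka- prefix (viceral/adverbial numbers).
    return "ka-" + CARDINALS[n]

print(ordinal(1), ordinal(3), distributive(2), multiplicative(4))
# una ika-tuló tag-duhá ka-upát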
Cebuano numerals
[ "Mathematics" ]
472
[ "Numeral systems", "Numerals" ]
70,019,903
https://en.wikipedia.org/wiki/Water%20Protection%20Zone
A Water Protection Zone is a statutory regulation imposed under Schedule 11 to the Water Resources Act 1991. The power was subsequently subsumed into The Water Resources Act (Amendment) (England and Wales) Regulations 2009. The only example in the UK was applied to the River Dee in 1999 as The Water Protection Zone (River Dee Catchment) Designation Order 1999, which covers the whole of the River Dee catchment from the headwaters down to the final potable water abstraction point at Chester. The creation of this protection zone gave powers to the then Environment Agency (now Natural Resources Wales) to monitor and control the use and storage of any potentially polluting substance brought into the catchment for any industrial or commercial operation, a controlled activity as defined by the order. All such controlled activities require a permit to be issued, and the conditions of the permit are determined by a risk-analysis mathematical model involving the nature of the substance, its quantity and the distance from any vulnerable drinking water intake. Applicants for consent are required to complete a formal application. Following a serious degradation of the quality of the River Wye, there have been calls for a new water protection zone to be established for that river. References Rivers Risk analysis Mathematical modeling
Water Protection Zone
[ "Mathematics" ]
236
[ "Applied mathematics", "Mathematical modeling" ]
70,021,007
https://en.wikipedia.org/wiki/Digital%20sequence%20information
Digital sequence information (DSI) is a placeholder term used in international policy fora, particularly the Convention on Biological Diversity (CBD), to refer to data derived from dematerialized genetic resources (GR). Definition The 2018 Ad Hoc Technical Expert Group on DSI reached consensus that the term was "not appropriate". Nevertheless, the term is generally agreed to include nucleic acid sequence data, and may be construed to include other data types derived from or linked to dematerialized genetic resources, including, for example, protein sequence data. The appropriateness and meaning of this term remain controversial, as evidenced by its continued placeholder status after the 15th Conference of the Parties to the CBD. DSI is crucial to research in a wide range of contexts, including public health, medicine, biodiversity, plant and animal breeding, and evolution research. Policy environment Convention on Biological Diversity and Nagoya Protocol The Nagoya Protocol, a component of the Convention on Biological Diversity, establishes a right for countries to regulate, and to share in benefits derived from, their nation's genetic resources by arranging Access and Benefit Sharing Agreements with users. Academic researchers, however, generally share DSI freely and openly online, following a set of principles that align with the open science movement. Open sharing of DSI is recognized to have broad benefits, and open science is a major and growing focus of international science policy. This creates a perceived conflict with benefit-sharing obligations, as individuals can access and use these open data without entering into benefit-sharing agreements. Parties to the Convention on Biological Diversity have considered a range of policy options that strike different balances between these two important international policy goals. At COP 15, parties agreed to establish a multilateral mechanism for benefit-sharing from the use of DSI. At COP 16, parties agreed to implement this by establishing a fund to which commercial users of DSI would be expected to contribute a percentage of their revenue (see COP 16 below). GRATK Treaty In May 2024, a WIPO Diplomatic Conference concluded the International Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge (GRATK), which mandates disclosure requirements for patents based on genetic resources (GR). Some observers to the negotiations claim that, under the final wording of the treaty, the disclosure requirements apply to patents based on DSI, as long as the DSI was necessary to the patented invention, and/or the invention depends on the specific properties of the DSI: The draft versions of the Treaty previously contemplated mentions of the qualifier "direct" in the trigger. Drafts also contemplated mentions of the qualifier "material," both in the trigger itself and in its definition. These qualifiers were deleted by the drafters, leaving only the criteria of necessity and dependence on specific properties. If a GR was necessary to create a claimed invention, and the invention depends on such GR, even if indirectly and/or immaterially, it falls under the scope of this Treaty. A claimed invention relying on DSI obtained from a GR will therefore have to disclose the GR from which the DSI derives. 
Convention on Biological Diversity COP16 At the 2024 United Nations Biodiversity Conference of the Parties to the CBD in Cali (COP16), a decision was made to establish the Cali Fund, to which industries that use DSI, like pharma, cosmetics, nutraceuticals, and biotech, will be expected to contribute a small portion of their profits to support global biodiversity conservation. A methodology for implementation, including thresholds and contribution rates, is scheduled to be agreed at the seventeenth conference of the parties to the CBD. The fund will support biodiversity protection and reward Indigenous and forest communities, with 50% allocated to local and Indigenous groups. Other treaties DSI is also an important concept in other international legally binding instruments with access and benefit-sharing obligations, including: FAO's International Treaty on Plant Genetic Resources for Food and Agriculture (Plant Treaty, or Seed Treaty), the Pandemic Influenza Preparedness Framework, the Antarctic Treaty System, and Biodiversity Beyond National Jurisdiction (BBNJ Treaty, or High Seas Treaty), a component of the United Nations Convention on the Law of the Sea. References Anti-biopiracy treaties Biopiracy Convention on Biological Diversity Genetics software Genetics studies
Digital sequence information
[ "Biology" ]
871
[ "Anti-biopiracy treaties", "Biopiracy", "Convention on Biological Diversity", "Biodiversity" ]
70,021,966
https://en.wikipedia.org/wiki/Abell%2063
Abell 63 is a planetary nebula with an eclipsing binary central star system in the northern constellation of Sagitta. Based on parallax measurements of the central star, it is located at a distance of approximately 8,810 light years from the Sun. The systemic radial velocity of the nebula is . The nuclear star system is the progenitor of the nebula and it has a combined apparent visual magnitude of 14.67. During mid eclipse the magnitude drops to 19.24. The star H.V. 5452 was found to be a candidate eclipsing binary system in 1932 by Dorrit Hoffleit, and it was given the variable star designation UU Sagittae (UU Sge). In 1955, George O. Abell discovered a nebula in the same region of the sky from photographic plates taken by the National Geographic Society – Palomar Observatory Sky Survey. The identifier 'Abell 63' comes from a follow-up publication by Abell in 1966, which identified the nebula as a homogeneous disk in diameter with a central star of magnitude 14.67. In 1976, Howard E. Bond noted that the positions of the variable star and the center of the nebula coincide. That same year, J. S. Miller and associates confirmed that UU Sge is an eclipsing binary, finding a period of 11h 09.6m with an eclipse duration of 70 minutes. The deep eclipse decreased the brightness of the pair by ~4.3 magnitudes. The general shape of this nebula appears to be a hollow tube with a prominent hyperbolic-shaped waist. The bright central rim has faint extensions leading to end caps; the primary axis of the tube being aligned along a position angle of 34°. The overall profile has a 7:1 aspect ratio spanning an angular size of , with the ends at an equal angular distance from the center. The nebula is expanding with a velocity of . Surrounding the bright central rim is a faint circular shell, which may be the remnant of the stellar wind produced as the central star passed through the asymptotic giant branch. The central system is a close detached binary with an orbital period of 11.2 hours. The length of the total eclipse of the primary component by the secondary is 13.4 minutes. They have a projected separation of at least 2.45 times the radius of the Sun. The primary is an O-type subdwarf star (sdO) that has passed through the asymptotic giant branch stage, during which it ejected the surrounding planetary nebula. It has 63% of the mass of the Sun and 35% of the Sun's radius, with an effective temperature of ~78,000 K. The secondary has the mass of an M-type main-sequence star, or 29% of the mass of the Sun. However, the effective temperature of 6,136 K is much higher than expected for an M dwarf, and the radius of 56% of the Sun is too large. This is because the point on the secondary facing the primary is being heated by its much hotter companion. The hot primary is also providing the illumination of the surrounding nebula. References Further reading Planetary nebulae Eclipsing binaries O-type subdwarfs M-type main-sequence stars Sagitta Sagittae, UU
Abell 63
[ "Astronomy" ]
680
[ "Sagitta", "Constellations" ]
70,022,238
https://en.wikipedia.org/wiki/Laser%20Interconnect%20and%20Networking%20Communication%20System
Laser Interconnect and Networking Communications System (LINCS) is a test of laser communication in space using two cubesats launched in June 2021. Background It was built by General Atomics for the US DOD's Space Development Agency. The two cubesats, LINCS A/B, were launched on SpaceX's Transporter-2 rideshare in June 2021, but communications were not established by January 2022. One theory is that helium exposure during the Falcon 9 launch affected MEMS devices in the cubesats. See also Laser communication in space Free-space optical communication References CubeSats Laser communication in space
Laser Interconnect and Networking Communication System
[ "Astronomy" ]
131
[ "Astronomy stubs", "Spacecraft stubs" ]
70,022,565
https://en.wikipedia.org/wiki/Satellite%20refuelling
Satellite refuelling is the operation of replenishing on-board propellants and other consumables in satellites in orbit, e.g. in geostationary orbit around Earth. This could initially be done with storable propellants, and later with cryogenic propellants. Examples Space Infrastructure Servicing, by the Canadian company MDA. Orbital Express, a 2007 U.S. government-sponsored mission to test in-space satellite servicing with two vehicles designed from the start for on-orbit refueling and subsystem replacement. Robotic Refueling Mission, a series of NASA projects, including cryogenic transfer tests at the ISS. Contracts In January 2022, Astroscale contracted to use Orbit Fab's in-orbit propellant depots. Standards Rapidly Attachable Fluid Transfer Interface, for non-cryogenic fluids and gases. ASSIST, for docking, ground-tested by a consortium of European companies. Alternatives Rather than refuel, another craft could attach itself to the customer satellite and provide any desired propulsion, e.g. Northrop Grumman's Mission Extension Vehicle, with MEV-1 in operation in 2021. References Satellites
Satellite refuelling
[ "Astronomy" ]
226
[ "Outer space stubs", "Satellites", "Outer space", "Astronomy stubs" ]
70,026,998
https://en.wikipedia.org/wiki/Bioinformatics%20Research%20Network
Bioinformatics Research Network (BRN) is a non-profit, open-science, research-based organization aiming to provide volunteer opportunities and bioinformatics research training that is free and open to everyone. It is a community-driven 501(c)(3) non-profit organization that aims to establish a worldwide network open to anyone interested in bioinformatics, irrespective of academic background, and to provide bioinformatics training, mentorship and the opportunity to collaborate on research projects. Training and projects BRN provides free training workshops through its partner group, the Bioinformatics Interest Group (BIG). BIG is a student club of The University of Texas Health Science Center at San Antonio established to promote the development of student bioinformaticians and encourage the growth of bioinformatics skills in the community. BRN is open to academic labs hosting projects for open collaboration. These projects are then available for anyone to contribute to. To work on a project, a volunteer has to complete the required skill assessments for the specific project and apply to the respective team. The decision to accept the volunteer rests with the team of the respective project. Publication BRN has published its projects on bioRxiv and in peer-reviewed journals. References External links Official website Bioinformatics organizations Scientific organizations established in 2021 Open science 501(c)(3) organizations
Bioinformatics Research Network
[ "Biology" ]
280
[ "Bioinformatics", "Bioinformatics organizations" ]
70,028,071
https://en.wikipedia.org/wiki/Galileo%20and%20Ulysses%20Dust%20Detectors
The Galileo and Ulysses Dust Detectors are almost identical dust instruments on the Galileo and Ulysses missions. The instruments are large-area (0.1 m² sensitive area), highly reliable impact ionization detectors of sub-micron and micron-sized dust particles. With these instruments the interplanetary dust cloud was characterized between Venus' and Jupiter's orbits and over the solar poles. A stream of interstellar dust passing through the planetary system was discovered. Close to and inside the Jupiter system, streams of nanometer-sized dust particles emitted from volcanoes on Jupiter's moon Io, as well as ejecta clouds around the Galilean moons, were discovered and characterized. Overview Following the first dust instruments from the Max Planck Institute for Nuclear Physics (MPIK), Heidelberg (Germany) on the HEOS 2 satellite and the Helios spacecraft, a new dust instrument was developed by a team of scientists and engineers led by Eberhard Grün to detect cosmic dust in the outer planetary system. This instrument had a 10 times larger sensitive area (0.1 m²) and employed a multiple coincidence of impact signals in order to cope with the low fluxes of cosmic dust and the hostile environment in the outer planets' magnetospheres. The Galileo and Ulysses dust detectors use impact ionization from hypervelocity impacts of cosmic dust particles onto the hemispherical target. Electrons and ions from the impact plasma are separated by the electric field between the target and the center ion collector. Ions are partly collected by the semi-transparent grid and the center channeltron multiplier. The amplitudes of the impact signals, their rise times, and the time relations of the charge signals are measured, stored and transmitted to ground. Using this information, noise events were separated from true impacts, and the properties (mass and speed) of the impacting dust particles were determined. The center grid of the three grids at the entrance of the detector picks up the electric charge of the dust particle. Unfortunately, no dust charges were reliably identified by these instruments during their space operation. The Galileo Dust Detector was developed by the team of scientists and engineers led by Eberhard Grün at the Max Planck Institute for Nuclear Physics (MPIK), Heidelberg (Germany) and was selected in 1977 by NASA to explore the dust environment of Jupiter on board the Galileo Jupiter Orbiter. The Galileo spacecraft was a dual-spin spacecraft with its antenna pointing to Earth. The dust detector was mounted on the spinning section at an angle of 60° with respect to the spin axis. Galileo was launched in 1989 and cruised for 6 years through interplanetary space between Venus' and Jupiter's orbits before beginning, in 1995, its 7-year path through the Jovian system with several fly-bys of all the Galilean moons. The Galileo dust detector operated during the whole mission. About a year after Galileo, the twin instrument was selected for the out-of-ecliptic Ulysses mission. Ulysses was a spinning spacecraft with the dust detector mounted at 85° to the spin axis. Ulysses was launched in 1990 and went on a direct trajectory to Jupiter, which it reached in 1992 for a swing-by maneuver that put the spacecraft on a heliocentric orbit of 80 degrees inclination. This orbit had a period of 6.2 years, a perihelion of 1.25 AU and an aphelion of 5.4 AU. Ulysses completed 2.5 orbits until the mission was ended. The Ulysses dust detector operated during the whole mission. 
The initial Principal Investigator for both instruments was Eberhard Grün. In 1996 the PI-ship was handed over to Harald Krüger from the Max Planck Institute for Solar System Research, Göttingen, Germany. Major discoveries and observations Interplanetary dust Galileo and Ulysses traversed interplanetary space from Venus' orbit (0.7 AU) to Jupiter's orbit (~5 AU) and about 2 AU above and below the solar poles. Throughout this time the dust experiments recorded cosmic dust particles that were an important input to a model of interplanetary dust. Interstellar dust After its Jupiter flyby, Ulysses identified a flow of interstellar dust sweeping through the Solar System. Dust in the Jupiter system After its Jupiter flyby, Ulysses also detected hyper-velocity streams of nano-dust emitted from Jupiter that then couple to the solar magnetic field. Dust streams from Jupiter, and their interactions with the Jovian satellite Io, were detected, as well as ejecta clouds around the Galilean moons. References Spacecraft instruments Scientific instruments Space science experiments
Galileo and Ulysses Dust Detectors
[ "Technology", "Engineering" ]
904
[ "Scientific instruments", "Measuring instruments" ]
70,028,351
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S22
The Samsung Galaxy S22 is a series of high-end Android-based smartphones developed, manufactured, and marketed by Samsung Electronics as part of its Galaxy S series. The phones were unveiled at Samsung's Galaxy Unpacked event on 9 February 2022 and collectively serve as the successor to the Samsung Galaxy S21 series. The S22 series consists of the base Galaxy S22 model, the plus-sized Galaxy S22+ model, and the camera-focused Galaxy S22 Ultra model. The S22 Ultra serves as the official successor to the Samsung Galaxy Note 20 and the Note lineup, housing an integrated S Pen. The phones feature numerous upgrades over the previous models, including improved specifications, an enhanced camera system supporting 8K video recording (7680×4320) at 24 frames per second, and a super-resolution zoom of up to 30x, or up to 100x on the Ultra model. The three phones were released in the United States and Europe on 25 February 2022. The Galaxy S22, S22+, and S22 Ultra launched at $799.99, $999.99, and $1,199.99, respectively. The series was succeeded by the Samsung Galaxy S23 series. History The Samsung Galaxy S22 series was unveiled on 9 February 2022 as the successor to the Galaxy S21 series. The S22 and S22+ share the design scheme of the 2021 models, retaining the individual camera modules, while the S22 Ultra drops the camera bump and adopts a rectangular body. The base model gains a glass back instead of the (plastic) polycarbonate back the S21 originally had. The S22 and S22+ contain comparable software, composition, and hardware to the previous year's models, with minor distinctions. The S22 Ultra underwent a major redesign, serving as the successor to the discontinued Galaxy Note series. The device features a rectangular body, a higher-resolution display, and an advanced camera system, and, most notably, houses an embedded S Pen, a signature feature of the Note series. The S22 Ultra serves as the high-end professional model of the lineup and successor to the S21 Ultra, while the base models succeed the S21 and S21+, respectively. Lineup The S22 line consists of three devices. The Galaxy S22 is the least expensive, with a screen. The Galaxy S22+ has similar hardware in a larger form factor, with a screen, faster charging and a larger battery capacity. The Galaxy S22 Ultra has a screen and the largest battery capacity in the lineup, with a more advanced camera setup and a higher-resolution display compared to the S22 and S22+, as well as an embedded S Pen, the first in the S series as a whole. Design The Galaxy S22 series has a design similar to preceding S series phones, with an Infinity-O display containing a circular cutout in the top center for the front selfie camera. All three models use Gorilla Glass Victus+ for the back panel, unlike the S21 series, which had plastic on the smaller S21. The rear camera array on the S22 and S22+ has a metallic surround, while the S22 Ultra has a separate lens protrusion for each camera element. Specifications Hardware Chipsets The S22 line comprises three models with various hardware specifications. In all European and some African countries, the phones use the Exynos 2200, which features a new GPU co-developed with AMD; all other regions, including Argentina, Australia, Canada, China, India, Mexico and the United States, use the Qualcomm Snapdragon 8 Gen 1. 
Display The S22 series features "Dynamic AMOLED 2X" displays with HDR10+ support and "dynamic tone mapping" technology. All models use a second-generation ultrasonic in-screen fingerprint sensor. Storage The S22 and S22+ offer 8 GB of RAM with 128 GB and 256 GB options for internal storage. The S22 Ultra has 8 GB of RAM with 128 GB, as well as a 12 GB option with 256 GB, 512 GB and 1 TB options for internal storage. Unlike the S21 Ultra, the S22 Ultra does not offer a 16 GB RAM variant. All three models lack a microSD card slot. Batteries The S22, S22+, and S22 Ultra contain non-removable 3,700 mAh, 4,500 mAh, and 5,000 mAh Li-Po batteries respectively. The S22 supports wired charging over USB-C at up to 25 W (using USB Power Delivery), while the S22+ and S22 Ultra have faster 45 W charging. Tests have found no significant difference between the 45 W and 25 W charging speeds. All three have Qi inductive charging at up to 15 W. The phones also have the ability to charge other Qi-compatible devices from the S22's own battery power, branded as "Wireless PowerShare," at up to 4.5 W. Connectivity All three phones support 5G SA/NSA networks. The Galaxy S22 supports Wi-Fi 6 and Bluetooth 5.2, while the Galaxy S22+ and S22 Ultra support Wi-Fi 6E and Bluetooth 5.2. The S22+ and S22 Ultra models also support Ultra Wideband (UWB) for short-range communications similar to Bluetooth (not to be confused with 5G mmWave, which is marketed as Ultra Wideband by Verizon). Samsung uses this technology for its "SmartThings Find" feature and the Samsung Galaxy SmartTag+. Cameras The S22 and S22+ have a 50 MP wide sensor, a 10 MP telephoto sensor with 3x optical zoom, and a 12 MP ultrawide sensor. The S22 Ultra retains its predecessor's 108 MP sensor with 12-bit HDR. It also has two 10 MP telephoto sensors with 3x and 10x optical zoom, as well as a 12 MP ultrawide sensor. The front-facing camera uses a 10 MP sensor on the S22 and S22+, and a 40 MP sensor on the S22 Ultra. The Galaxy S22 series can record HDR10+ video and supports HEIF. Supported video modes The Galaxy S22 series supports the following video modes: 8K@24fps 4K@30/60fps 1080p@30/60/120/240fps 720p@960fps (480fps is interpolated to 960fps on the S22 Ultra) Still frames extracted from high-resolution footage can act as standalone photographs. S Pen The S22 Ultra is the first S series phone to include a built-in S Pen, a hallmark feature of the Galaxy Note series. The S Pen has a latency of 2.8 ms, reduced from 26 ms on the Note 20 and 9 ms on the Note 20 Ultra and S21 Ultra (although the S21 Ultra had S Pen functionality, the pen was not included with the phone), and marked the introduction of an "AI-based coordinate prediction system". The S Pen also supports Air gestures and the Air Action system. Software The S22 phones were released with Android 12 and Samsung's One UI 4.1 software. Samsung Knox is included for enhanced device security, and a separate version exists for enterprise use. Samsung has promised four Android OS upgrades (until Android 16) and five years of security updates (until 2027). One UI 5.0 was released on 17 October 2022. 
Criticism Performance throttling controversy Testing performed by benchmarking utility Geekbench and media outlet Android Police reported that Samsung's Game Optimizing Service (GOS) was throttling the performance of the device significantly in a number of popular apps, but allowing it to run unthrottled for benchmarking utilities; one specific test on the S22+, using a copy of Geekbench 5 that was modified to look like Genshin Impact to the GOS, recorded a loss of 45% in single-core performance and 28% in multi-core performance versus an undisguised copy of the utility. In response, Geekbench has permanently delisted the entire S22, S21 and S10 lineup from its service. Samsung has since released an update allowing S22 users to disable GOS on their devices. Gallery See also List of longest smartphone telephoto lenses References External links Galaxy S22 5G – official website Galaxy S22 Ultra 5G – official website Galaxy S22 user manual – download Samsung Galaxy S22 user manual Android (operating system) devices Samsung Galaxy Samsung smartphones Discontinued Samsung Galaxy smartphones Discontinued flagship smartphones Mobile phones introduced in 2022 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Mobile phones with 8K video recording Mobile phones with stylus
Samsung Galaxy S22
[ "Technology" ]
1,839
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
70,030,815
https://en.wikipedia.org/wiki/List%20of%20software%20using%20Electron
This is a list of application software written using the Electron software framework to provide the graphical user interface. List References Free and open-source software GitHub Microsoft free software Software using the MIT license 2013 software Google Chrome Cross-platform software Cross-platform desktop-apps development
List of software using Electron
[ "Technology" ]
56
[ "GitHub", "Computing websites" ]
70,030,960
https://en.wikipedia.org/wiki/Paleo-inspiration
Paleo-inspiration is a paradigm shift that leads scientists and designers to draw inspiration from ancient materials (from art, archaeology, natural history or paleo-environments) to develop new systems or processes, particularly with a view to sustainability. Paleo-inspiration has already contributed to numerous applications in fields as varied as green chemistry, the development of new artist materials, composite materials, microelectronics, and construction materials. Semantics and definitions While this type of application has been known for a long time, the concept itself was coined by teams from the French National Centre for Scientific Research, the Massachusetts Institute of Technology and the Bern University of Applied Sciences, from the term bioinspiration. They published the concept in a seminal paper published online in 2017 by the journal Angewandte Chemie. Different names have been used to designate the corresponding systems, in particular: paleo-inspired, antiqua-inspired, antiquity-inspired or archaeomimetic. The use of these different names illustrates the extremely broad time range of the sources of inspiration, from millions of years ago when considering palaeontological systems and fossils, to much more recent archaeological or artistic material systems. Properties sought Distinct physico-chemical and mechanical properties are sought. They may concern intrinsic properties of the paleo-inspired materials: durability (materials found in certain contexts having resisted alteration in these environments) and resistance to corrosion or alteration; electronic or magnetic properties; and optical properties (especially from pigments or dyes, and materials used for ceramic manufacture). They can also concern processes: processes with low energy or resource consumption, in line with chemical processes favouring sustainable development, and soft chemistry processes. The paleo-inspired approach This approach combines several key stages. Observation: This phase concerns materials, their properties, or the manufacturing processes (in relation in particular to the study of chaînes opératoires in archaeology, or the history of techniques, in particular that of artistic techniques), and the processes of alteration (or even the work carried out in experimental taphonomy). This is therefore a first phase of reverse engineering. Some of these studies fall within the field of anthropology. As in the case of bioinspiration, this phase is fundamental and is based on an approach that favours creative exploration of objects, with few preconceived ideas (serendipity). Re-creation: A second phase follows, aimed at simplifying materials, systems and processes in order to identify the fundamental mechanisms at the origin of the observed properties. This stage requires a back-and-forth between the synthesis of simplified systems and the characterisation of the new objects of study. Design: Finally, there follows a conception or design phase, concerning materials, systems or processes, and aiming at their concrete implementation for applications. Practical applications Sustainable building materials Emblematic examples include the microscopic study of the mineral phases present in Roman concretes to reproduce their durability in aggressive environments, particularly in the marine environment. 
Durable colouring materials A notable discovery is the elucidation of the atomic structure of Maya blue, a composite pigment combining a clay with an organic dye, which has led teams to produce pigments of other colours by combining clays with distinct organic dyes, such as "Maya violet". References Materials science Archaeology Paleontology
Paleo-inspiration
[ "Physics", "Materials_science", "Engineering" ]
666
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
70,031,120
https://en.wikipedia.org/wiki/Green%20Water%20Treatment%20Plant
The Thomas C. Green Water Treatment Plant was Austin Water Utility's first water treatment plant, and the first to open in Austin, Texas. It closed in 2008 and was redeveloped into multiple skyscrapers by the Trammell Crow Company. History The Green Water Treatment Plant opened as the Austin Filtration Plant in 1924 on the north shore of the Colorado River in Downtown Austin, which is now part of Lady Bird Lake. The plant opened after the development of a chemical treatment for river water by Dr. E. P. Schoch of the University of Texas in 1923. It was the only water treatment plant in the Austin Water system until 1954, when the Albert R. Davis Water Treatment Plant opened on Lake Austin. From 1984 to 1986, the plant was modernized and its capacity was doubled while remaining in operation. As the Green Water Treatment Plant aged into the 2000s, then-Mayor Will Wynn proposed the relocation or closure of the plant along with the construction of Water Treatment Plant 4 in West Austin. In 2008, the plant was officially decommissioned by Austin City Council and city staff recommended developer Trammell Crow's proposal for redevelopment of the site. Redevelopment In 2014, the City of Austin sold the Green Water parcel to Trammell Crow for $42.2 million. The site was redivided into its original blocks, as laid out in the Waller Plan of 1839: the full Blocks 1 & 185, and the southern portions of Blocks 23 and 188. Nueces Street was extended south through the site to connect to Cesar Chavez Street, and 2nd Street was extended west to Shoal Creek. 2nd Street would later be extended west to cross the creek and connect to the street grid of the Seaholm Power Plant redevelopment. The four blocks were developed into office and mixed-use towers by Trammell Crow between 2015 and 2022. Block 1 (The Northshore) Block 1 was the first of the Green Water sites to be redeveloped, with construction beginning in 2015. The plot was developed as The Northshore, a mixed-use building with office and retail space in its podium and an apartment tower stepping back from Lady Bird Lake, due to setback requirements. The tower opened in 2016 as Austin's tallest apartment building, which it remains. Block 23 (500 West 2nd Street) 500 West 2nd Street was the first office tower built on the Green Water site. Construction began in 2015 and concluded in 2017, with Google as the building's only office tenant. The building was commonly referred to as "The Google Building" before the opening of Block 185. The building was designed as a pre-certified LEED Gold tower. Block 185 Block 185 was the final Green Water site to be developed, with construction beginning in 2019 and completing in 2022. The tower was the second in Austin to be leased by Google, which occupies the entire building. The tower is the tallest of the Green Water skyscrapers and the tallest office building in Austin, reaching tall. Block 185 has a unique design due to its setbacks on the southern and western facades, which face Lady Bird Lake and Shoal Creek, respectively. Block 188 (Austin Proper Hotel & Residences) Austin Proper is a mixed-use hotel and condo tower facing Shoal Creek. The tower's design was first proposed in 2015, but it did not start construction until 2017. The building was completed in 2020. The tower contains the only hotel space in the Green Water redevelopment, the Austin Proper Hotel, which is a member of Marriott's Design Hotels. References Water treatment facilities
Green Water Treatment Plant
[ "Chemistry" ]
711
[ "Water treatment", "Water treatment facilities" ]
70,032,192
https://en.wikipedia.org/wiki/Quercus%20%C3%97%20subconvexa
Quercus × subconvexa is a naturally-occurring hybrid oak resulting from a cross between Q. durata and Q. garryana, found in California. References Trees of Northern America Hybrid plants subconvexa
Quercus × subconvexa
[ "Biology" ]
46
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
70,032,359
https://en.wikipedia.org/wiki/Konda%20Hakuch%C5%8D%20Haniwa%20Production%20Site
The Konda Hakuchō Haniwa Production Site is an archaeological site with the ruins of a Kofun period factory for the production of haniwa clay funerary pottery, located in what is now the Hakucho neighborhood of the city of Habikino in Osaka Prefecture in the Kansai region of Japan. It received protection as a National Historic Site in 1973, with the area under protection expanded in 1975. Overview The Konda Hakuchō site is located between the Konda Mitoyama Kofun (tomb of Emperor Ōjin) and the Hakayama Kofun in the Furuichi Kofun Cluster, and was the location where the thousands of haniwa used in these and other burial mounds in the area were produced. The kilns are divided into two groups, with a total of eleven kilns thus far located. Each has a width of about 1.5 meters and a length of about 7 meters, and lies at an inclination of about 12 degrees on the slope of a hill. Only a part of each base, fire mouth, flue and ash field has survived. Most of the artifacts found are cylindrical haniwa pieces, but figurative haniwa pieces of various types have also been found. Nearby, the foundation pillars of several raised-floor buildings in orderly rows were found. It is possible that into the Nara period, when haniwa were no longer being produced, the site became the location of the district office for the ancient Furuichi District. The remains of a Haji ware workshop from the Nara period have also been found. The site is now an archaeological park with one of the kilns restored to its original appearance. It is about a 10-minute walk from Furuichi Station on the Kintetsu Railway Minami Osaka Line. See also List of Historic Sites of Japan (Osaka) References External links Ibaraki Prefectural Board of Education Habikino City official site Kofun period History of Osaka Prefecture Habikino Historic Sites of Japan Izumi Province Japanese pottery kiln sites
Konda Hakuchō Haniwa Production Site
[ "Chemistry", "Engineering" ]
417
[ "Kilns", "Japanese pottery kiln sites" ]
70,032,887
https://en.wikipedia.org/wiki/MNL1%20Data%20Center
The MNL1 Data Center is a proposed hyperscale green data center campus to be built in Cainta, Rizal. If built, MNL1 will become the largest data center in the Philippines. History The Singaporean firm SpaceDC announced in February 2022 that it plans to set up MNL1, a hyperscale data center, in the Philippines, citing the country as having the second-fastest data center growth in Southeast Asia and characterizing it as a "dramatically underserved market". It acquired the services of the property consulting firm JLL as MNL1's construction manager. At a cost of , it is planned to be built in Cainta, Rizal. SpaceDC assessed potential sites in Greater Manila for natural disaster risks such as earthquakes, flooding, and volcanic eruptions prior to settling on the Cainta site. It is projected to be operational within 2022. If completed, it will become the largest data center in the Philippines. Facilities The MNL1 campus in Cainta, Rizal will cover an area of . It will be a green data center with a capacity of 72 MW, mainly powered by wind and geothermal energy. It will host its own leading internet exchange as well as a switch/ramp to cloud providers such as AWS, Alibaba, and Azure. References Data centers Buildings and structures in Cainta Proposed buildings and structures in the Philippines
MNL1 Data Center
[ "Technology" ]
279
[ "Data centers", "Computers" ]
65,676,732
https://en.wikipedia.org/wiki/James%20Kirkcaldie
James Cullen Kirkcaldie (18 April 1875 – 16 August 1931) was a New Zealand cricketer. He played in one first-class match for Wellington in 1903/04. Kirkcaldie was an analytical chemist. References External links 1875 births 1931 deaths New Zealand cricketers Wellington cricketers Cricketers from the London Borough of Enfield People from Enfield, London Analytical chemists
James Kirkcaldie
[ "Chemistry" ]
75
[ "Analytical chemists" ]
65,677,074
https://en.wikipedia.org/wiki/Life-years%20lost
The life-years lost or years of life lost (YLL) is a unit measuring the number of expected years of human life lost following an unexpected event, such as death by illness, crime or war. Life-years lost is a flexible measure which has been used to measure the effects on overall mortality of non-communicable diseases, drug misuse and suicide, epidemics (for example the COVID-19 pandemic), wars, and natural disasters such as earthquakes. Life-years lost is based on both the number of deaths and the age of those who died. It estimates the number of years that those who died would have lived had they not met a premature death. Higher YLLs can be due to larger numbers of deaths, to deaths at sharply younger ages, or to some combination of the two. See also Quality-adjusted life year Years of potential life lost References Epidemiology Health economics Life expectancy
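The aggregation described above is easy to state concretely. The following Python sketch uses a single fixed reference life expectancy, which is an assumed simplification; real analyses typically use age-, sex- and country-specific life tables:

# Assumed reference life expectancy in years (a simplification).
REFERENCE_LIFE_EXPECTANCY = 80

def years_of_life_lost(ages_at_death):
    # Each death contributes the years remaining to the reference age;
    # deaths above the reference age contribute zero.
    return sum(max(REFERENCE_LIFE_EXPECTANCY - age, 0) for age in ages_at_death)

# Same number of deaths, very different YLL: younger deaths weigh more.
print(years_of_life_lost([75, 78, 79]))  # 8
print(years_of_life_lost([20, 35, 60]))  # 125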
Life-years lost
[ "Biology", "Environmental_science" ]
191
[ "Senescence", "Life expectancy", "Epidemiology", "Environmental social science" ]
65,677,561
https://en.wikipedia.org/wiki/Right%20group
In mathematics, a right group is an algebraic structure consisting of a set together with a binary operation that combines two elements into a third element while obeying the right group axioms. The right group axioms are similar to the group axioms, but while groups can have only one identity and any element can have only one inverse, right groups allow for multiple identity elements and multiple inverse elements. It can be proven (theorem 1.27 in ) that a right group is isomorphic to the direct product of a right zero semigroup and a group, while a right abelian group is the direct product of a right zero semigroup and an abelian group. Left group and left abelian group are defined in an analogous way, by substituting right for left in the definitions. The rest of this article is mostly concerned with right groups, but everything applies to left groups by making the appropriate right/left substitutions. Definition A right group, originally called multiple group, is a set R with a binary operation ⋅, satisfying the following axioms: Closure: for all a and b in R, there is an element c in R such that a ⋅ b = c. Associativity: for all a, b and c in R, (a ⋅ b) ⋅ c = a ⋅ (b ⋅ c). Left identity element: there is at least one left identity in R; that is, there exists an element e such that e ⋅ a = a for all a in R. Such an element does not need to be unique. Right inverse elements: for every a in R and every identity element e, also in R, there is at least one element b in R such that a ⋅ b = e. Such an element b is said to be the right inverse of a with respect to e. Examples Direct product of finite sets The following example is provided by Clifford. Take the group G = {e, a, b}, the right zero semigroup Z = {1, 2}, and construct a right group as the direct product of G and Z. G is simply the cyclic group of order 3, with e as its identity, and a and b as the inverses of each other. Its Cayley table (each row lists the products of the row element with e, a, b, in that order) is: row e: e, a, b; row a: a, b, e; row b: b, e, a. Z is the right zero semigroup of order 2, with table: row 1: 1, 2; row 2: 1, 2. Notice that each element repeats along its column, since by definition x ⋅ y = y, for any x and y in Z. The direct product G × Z of these two structures is defined as follows: the elements of G × Z are ordered pairs (g, z) such that g is in G and z is in Z, and the operation is defined element-wise: Formula 1: (g1, z1) ⋅ (g2, z2) = (g1 ⋅ g2, z2) The elements of G × Z will look like (e, 1), (a, 1), (b, 1), (e, 2) and so on. For brevity, let's rename these as e1, a1, b1, e2, and so on. The Cayley table of G × Z is as follows: row e1: e1, a1, b1, e2, a2, b2; row a1: a1, b1, e1, a2, b2, e2; row b1: b1, e1, a1, b2, e2, a2; the rows of e2, a2 and b2 repeat those of e1, a1 and b1, respectively, since the second coordinate of the left factor is discarded by the operation. Here are some facts about G × Z: G × Z has two left identities, e1 and e2. Some examples: e1 ⋅ a2 = a2, e2 ⋅ b1 = b1. Each element has two right inverses. For example, the right inverses of a2 with regard to e1 and e2 are b1 and b2, respectively: a2 ⋅ b1 = e1 and a2 ⋅ b2 = e2. Complex numbers in polar coordinates Clifford gives a second example involving complex numbers. Given two non-zero complex numbers a and b, the following operation forms a right group: a ⋅ b = |a| b, that is, b scaled by the modulus of a. All complex numbers with modulus equal to 1 are left identities, and all complex numbers have a right inverse with respect to any left identity. The inner structure of this right group becomes clear when we use polar coordinates: let a = A e^(iα) and b = B e^(iβ), where A and B are the magnitudes and α and β are the arguments (angles) of a and b, respectively. a ⋅ b (this is not the regular multiplication of complex numbers) then becomes A B e^(iβ). If we represent the magnitudes and arguments as ordered pairs, we can write this as: Formula 2: (A, α) ⋅ (B, β) = (A B, β) This right group is the direct product of a group (the positive real numbers under multiplication) and a right zero semigroup induced by the real numbers. Structurally, this is identical to formula 1 above. 
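The direct product construction of formula 1 can be checked mechanically. The following Python sketch encodes G as the integers mod 3 (e = 0, a = 1, b = 2) rather than as symbols, and verifies the two left identities and the per-identity right inverses claimed above:

from itertools import product

G = [0, 1, 2]                       # cyclic group of order 3: e=0, a=1, b=2
Z = [1, 2]                          # right zero semigroup: x . y = y

def op(p, q):
    # Formula 1: (g1, z1) . (g2, z2) = (g1 . g2, z2)
    return ((p[0] + q[0]) % 3, q[1])

R = list(product(G, Z))

# Left identities: i with i . r = r for every r in R.
left_ids = [i for i in R if all(op(i, r) == r for r in R)]
print(left_ids)  # [(0, 1), (0, 2)] -> e1 and e2

# Right inverses of a2 = (1, 2) with respect to each left identity.
a2 = (1, 2)
print({e: [x for x in R if op(a2, x) == e] for e in left_ids})
# {(0, 1): [(2, 1)], (0, 2): [(2, 2)]} -> b1 and b2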
In fact, this is what all right group operations look like when written as ordered pairs of elements of the direct product of their factors. Complex numbers in cartesian coordinates If we take the complex numbers and define an operation similar to example 2, but use cartesian instead of polar coordinates and addition instead of multiplication, we get another right group, with the operation defined as follows: a ⋅ b = Re(a) + b, that is, the real part of a added to b, or equivalently: Formula 3: (x1, y1) ⋅ (x2, y2) = (x1 + x2, y2) Here the purely imaginary numbers (those with real part 0) are the left identities. A practical example from computer science Consider the following example from computer science, where a set would be implemented as a programming language type. Let T be the set of date times in an arbitrary programming language. Let D be the set of transformations equivalent to adding a duration to an element of T. Let Z be the set of time zone transformations on elements of T. Both D and Z are subsets of T^T, the full transformation semigroup on T. D behaves like a group, where there is a zero duration and every duration has an inverse duration. If we treat these transformations as right semigroup actions, Z behaves like a right zero semigroup, such that a time zone transformation always cancels any previous time zone transformation on a given date time. Given any two arbitrary date times t1 and t2 (ignoring issues regarding representation boundaries), one can find a pair of a duration and a time zone that will transform t1 into t2. This composite transformation of time zone conversion and duration adding is isomorphic to the right group D × Z. Taking the java.time package as an example, the sets T, D and Z would correspond to the class ZonedDateTime, the function plus and the function withZoneSameInstant, respectively. More concretely, for any ZonedDateTime t1 and t2, there is a Duration d and a ZoneId z, such that: t2 = t1.plus(d).withZoneSameInstant(z) The expression above can be written more concisely using right action notation borrowed from group theory as: t2 = t1 ⋅ d ⋅ z It can also be verified that durations and time zones, when viewed as transformations on date/times, in addition to obeying the axioms of groups and right zero semigroups, respectively, commute with each other. That is, for any date/time t, any duration d and any timezone z: t ⋅ d ⋅ z = t ⋅ z ⋅ d This is the same as saying: t.plus(d).withZoneSameInstant(z) = t.withZoneSameInstant(z).plus(d) References Algebraic structures Semigroup theory
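A Python analogue of the java.time identities above can be sketched with the standard zoneinfo module (Python 3.9+); the instants and zones below are arbitrary choices, and a zone without daylight saving time is used so that wall-clock and instant arithmetic coincide:

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

t = datetime(2022, 1, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
d = timedelta(hours=5)          # a duration: invertible, group-like
z = ZoneInfo("Asia/Manila")     # a time zone conversion: right-zero-like

# Durations and zone conversions commute: t.d.z == t.z.d
assert (t + d).astimezone(z).isoformat() == (t.astimezone(z) + d).isoformat()

# A later zone conversion cancels an earlier one (right zero behaviour).
ny = ZoneInfo("America/New_York")
assert t.astimezone(ny).astimezone(z).isoformat() == t.astimezone(z).isoformat()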
Right group
[ "Mathematics" ]
1,232
[ "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Algebraic structures", "Semigroup theory" ]
65,677,758
https://en.wikipedia.org/wiki/Lactarius%20psammicola
Lactarius psammicola is a species of mushroom in the genus Lactarius, family Russulaceae, and order Russulales. Its cap is convex when young and becomes funnel-shaped as it ages. The cap has concentric rings of orangish brown. The taste is described as acrid. Further reading Hesler & Smith's monograph of North American Lactarius species References psammicola Fungus species
Lactarius psammicola
[ "Biology" ]
90
[ "Fungi", "Fungus species" ]
65,678,288
https://en.wikipedia.org/wiki/1st%20Engineer%20Brigade%20%28United%20States%29
The 1st Engineer Brigade is a military engineering training brigade of the United States Army subordinate to the United States Army Engineer School. It is headquartered at Fort Leonard Wood, Missouri. History World War II The 1st Engineer Amphibian Brigade was activated at Camp Edwards, Massachusetts on 15 June 1942. Some 2,269 men were transferred from existing units, the 37th Engineer Combat Regiment providing the nucleus of the boat regiment, and the 87th Engineer Heavy Ponton Battalion that of the shore regiment. Brigadier General Henry C. Wolfe was assigned as commanding general on 7 July 1942. The brigade trained until 15 July, when it was assigned to the Amphibious Training Command. The brigade was pulled from the Amphibious Training Center early and sent to England to participate in Operation Sledgehammer, departing from the New York Port of Embarkation on 5 August, and arriving on 17 August. Elements of the brigade participated in Operation Torch. The 531st Shore Regiment and 286th Signal Company acted as the shore party for the 1st Infantry Division, while the 2nd Battalion, 591st Engineer Boat Regiment was reorganized as a shore battalion, and operated in support of Combat Command B, 1st Armored Division. Brigade headquarters departed Glasgow on 24 November, and landed in North Africa on 6 December. Wolfe became chief engineer at the Services of Supply on 22 February and Colonel R. L. Brown of the 531st Engineer Shore Regiment acted as commander. Wolfe rejoined the brigade on 22 March 1943, but on 25 May he became S-3 at Allied Force Headquarters, and was replaced by Colonel Eugene M. Caffey. On 10 May 1943, the brigade was redesignated the 1st Engineer Special Brigade. The 591st Boat Regiment was detached, as was the 561st Boat Maintenance Company, which remained in England working on Navy landing craft, but the 36th and 540th Engineer Combat Regiments were attached for the 10 July Allied invasion of Sicily (Operation Husky), bringing the strength of the brigade to over 20,000. The brigade then participated in the Allied invasion of Italy at Salerno (Operation Avalanche) on 9 September. In November 1943, the headquarters of the 1st Engineer Special Brigade, along with the 531st Shore Regiment, 201st Medical Battalion, 286th Signal Company, 262nd Amphibian Truck Battalion and 3497th Ordnance Medium Automotive Maintenance Company, returned to England to participate in the invasion of Normandy (Operation Overlord). This nucleus of 3,346 men was built up to a strength of 15,000 men for Overlord. During Exercise Tiger, a rehearsal for the Normandy operation on 28 April, German E-boats attacked a convoy of landing ships, tank (LSTs) of the XI Amphibious Force carrying troops of the brigade. Two LSTs were sunk, and the brigade lost 413 men killed and 16 wounded. The exercise was observed by Lieutenant General Omar N. Bradley, who, unaware of the sinking of the LSTs, blamed the resulting poor performance of the brigade on Caffey, and had him temporarily replaced for the Normandy landings by Brigadier General James E. Wharton. The brigade participated in the D-Day landing on Utah Beach, and operated as Utah Beach Command until 23 October 1944, and then as the Utah District of the Normandy Base Section until 7 December 1944. Under the command of Colonel Benjamin B. Talley, the brigade headquarters returned to England, and embarked for the United States on 23 December. It arrived at Fort Dix, New Jersey, on 30 December. After four weeks' leave, it reassembled at Fort Lewis, Washington.
Part of the brigade headquarters went by air to Leyte to join the XXIV Corps for the invasion of Okinawa, while the rest traveled directly to Okinawa on the . The brigade was in charge of unloading on Okinawa from 9 April to 31 May. It then prepared for the invasion of Japan. This did not occur due to the end of the war, and the brigade landed in Korea on 12 September 1945. Its final commander was Colonel Robert J. Kasper, who assumed command on 1 November 1945. The brigade was inactivated in Korea on 18 February 1946. Organization for the landing in Normandy: Brigade Headquarters 531st Engineer Shore Regiment 24th Amphibian Truck Battalion 462nd Amphibian Truck Company 478th Amphibian Truck Company 479th Amphibian Truck Company 306th Quartermaster Battalion 556th Quartermaster Railhead Company 562nd Quartermaster Railhead Company 3939th Quartermaster Gas Supply Co 191st Ordnance Battalion 3497th Ordnance Medium Automotive Maintenance Company 625th Ordnance Ammunition Company 161st Ordnance Platoon 577th Quartermaster Battalion 363rd Quartermaster Service Company 3207th Quartermaster Service Company 4144th Quartermaster Service Company 261st Medical Battalion (Amphibious) 449th Military Police Company 286th Joint Assault Signal Company 33rd Chemical Decontamination Company Postwar On 30 September 1986, the brigade was reformed at Fort Leonard Wood, Missouri, as the 1st Engineer Brigade, and was assigned to the United States Army Engineer School within the Training and Doctrine Command. Current Structure 1st Engineer Brigade, Fort Leonard Wood, Missouri 31st Engineer Battalion, Fort Leonard Wood, Missouri 35th Engineer Battalion, Fort Leonard Wood, Missouri 169th Engineer Battalion, Fort Leonard Wood, Missouri 554th Engineer Battalion, Fort Leonard Wood, Missouri References Bibliography Engineer Brigades of the United States Army United States Army Corps of Engineers United States Army Engineer School
1st Engineer Brigade (United States)
[ "Engineering" ]
1,070
[ "Engineering units and formations", "United States Army Corps of Engineers" ]
65,678,292
https://en.wikipedia.org/wiki/Joule%20%28journal%29
Joule is a monthly peer-reviewed scientific journal published by Cell Press. It was established in 2017 as a sister journal to Cell. The editor-in-chief is Philip Earis. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2020 impact factor of 41.248. References External links Academic journals established in 2017 Cell Press academic journals Sustainability journals Delayed open access journals Monthly journals
Joule (journal)
[ "Environmental_science" ]
93
[ "Environmental science journals", "Sustainability journals", "Environmental science journal stubs" ]
65,681,303
https://en.wikipedia.org/wiki/HAT-P-30
HAT-P-30, also known as WASP-51, is the primary of a binary star system about 700 light-years away. It is a G-type main-sequence star. HAT-P-30 has a concentration of heavy elements similar to the Sun's. The faint stellar companion was detected in 2013 at a projected separation of 3.842″. Planetary system In 2011, the transiting hot Jupiter planet HAT-P-30b was independently detected by two teams. The planetary orbit is strongly misaligned with the equatorial plane of the star, the misalignment angle being equal to 73.5°. Since 2022, an additional planet in the system has been suspected based on transit timing variations. References Hydra (constellation) G-type main-sequence stars Binary stars Planetary systems with one confirmed planet Planetary transit variables J08154797+0550121 Durchmusterung objects
HAT-P-30
[ "Astronomy" ]
184
[ "Hydra (constellation)", "Constellations" ]
65,681,521
https://en.wikipedia.org/wiki/Neptunium%28III%29%20chloride
Neptunium(III) chloride or neptunium trichloride is an inorganic compound with the chemical formula NpCl3. This salt is strongly radioactive. Production Neptunium(III) chloride can be produced by reducing neptunium(IV) chloride with ammonia or hydrogen at 350~400 °C; with hydrogen, for example: 2 NpCl4 + H2 → 2 NpCl3 + 2 HCl Chemical properties Neptunium(III) chloride hydrolyzes at 450 °C and forms the oxychloride NpOCl: NpCl3 + H2O → NpOCl + 2 HCl References Neptunium(III) compounds Chlorides Actinide halides
Neptunium(III) chloride
[ "Chemistry" ]
112
[ "Chlorides", "Inorganic compounds", "Salts" ]
65,682,130
https://en.wikipedia.org/wiki/Data%20valuation
Data valuation is a discipline in the fields of accounting and information economics. It is concerned with methods to calculate the value of data collected, stored, analyzed and traded by organizations. This valuation depends on the type, reliability and field of data. History In the 21st century, exponential increases in computing power and data storage capabilities (in line with Moore's law) have led to a proliferation of big data, machine learning and other data analysis techniques. Businesses increasingly adapt these techniques and technologies to pursue data-driven strategies to create new business models. Traditional accounting techniques used to value organizations were developed in an era before high-volume data capture and analysis became widespread and focused on tangible assets (machinery, equipment, capital, property, materials etc.), ignoring data assets. As a result, accounting calculations often ignore data and leave its value off organizations' balance sheets. Notably, in the wake of the 9/11 attacks on the World Trade Center in 2001, a number of businesses lost significant amounts of data. They filed claims with their insurance companies for the value of information that was destroyed, but the insurance companies denied the claims, arguing that information did not count as property and therefore was not covered by their policies. A number of organizations and individuals began noticing this and then publishing on the topic of data valuation. Doug Laney, vice president and analyst at Gartner, conducted research on Wall Street valued companies, which found that companies that had become information-centric, treating data as an asset, often had market-to-book values two to three times higher than the norm. On the topic, Laney commented: "Even as we are in the midst of the Information Age, information simply is not valued by those in the valuation business. However, we believe that, over the next several years, those in the business of valuing corporate investments, including equity analysts, will be compelled to consider a company's wealth of information in properly valuing the company itself." In the latter part of the 2010s, the list of most valuable firms in the world (a list traditionally dominated by oil and energy companies) was dominated by data firms – Microsoft, Alphabet, Apple, Amazon and Facebook. Characteristics of data as an asset A 2020 study by the Nuffield Institute at Cambridge University, UK divided the characteristics of data into two categories, economic characteristics and informational characteristics. Economic characteristics Data is non-rival. Multiple people can use data without it being depleted or used up. Data varies in whether it is excludable. Data can be a public good or a club good, depending on what type of information it contains. Some data can reasonably be shared with anyone who desires to access it (e.g., weather data). Other data is limited to particular users and contexts (e.g., administrative data). Data involves externalities. In economics, an externality is the cost or benefit that affects a third party who did not choose to incur that cost or benefit. Data can create positive externalities because when new data is produced, it combines with already existing data to produce new insights, increasing the value of both, and negative externalities, when data may be leaked, breached or otherwise misused. Data may have increasing or decreasing returns. 
Sometimes collecting more data increases insight or value, though at other times it can simply lead to hoarding. Data has a large option value. Due to the perpetual development of new technologies and datasets, it is hard to predict how the value of a particular data asset might change. Organizations may store data, anticipating possible future value, rather than actual present value. Data collection often has high up-front cost and low marginal cost. Collecting data often requires significant investment in technologies and digitization. Once these are established, further data collection may cost much less. High entry barriers may prevent smaller organizations from collecting data. Data use requires complementary investment. Organizations may need to invest in software, hardware and personnel to realize value from data. Informational characteristics Subject matter. Encompasses what the data describes, and what it can help with. Generality. Some data is useful across a range of analyses; other data is useful only in particular cases. Temporal coverage. Data can be forecast, real-time, historic or back-cast. These are used differently, for planning, operational and historical analyses. Quality. Higher quality data is generally more valuable as it reduces uncertainty and risk, though the required quality varies from use to use. Greater automation in data collection tends to lead to higher quality. Sensitivity. Sensitive data is data that could be used in damaging ways (e.g., personal data, commercial data, national security data). Costs and risks are incurred keeping sensitive data safe. Interoperability and linkability. Interoperability relates to the use of data standards when representing data, which means that data relating to the same things can be easily brought together. Linkability relates to the use of standard identifiers within the data set that enables a record in one data set to be connected to additional data in another data set. Data value drivers A number of drivers affect the extent to which future economic benefits can be derived from data. Some drivers relate to data quality, while others may either render the data valueless or create unique and valuable competitive advantages for data owners. Exclusivity. Having exclusive access to a data asset makes it more valuable than if it is accessible to multiple license holders. Timeliness. For much data, the more closely it reflects the present, the more reliable the conclusions that can be drawn from it. Recently captured data is more valuable than historic data. Accuracy. The more closely data describes the truth, the more valuable it is. Completeness. The more variables about a particular event or object described by data, the more valuable the data is. Consistency. The more a data asset is consistent with other similar data assets, the more valuable it is (e.g., there are no inconsistencies as to where a customer resides). Usage Restrictions. Data collected without necessary approvals for usage (e.g., personal data for marketing purposes) is less valuable as it cannot be used legally. Interoperability/Accessibility. The more easily and effectively data can be combined with other organizational data to produce insights, the more valuable it is. Liabilities and Risk. Reputational consequences and financial penalties for breaching data regulations such as GDPR can be severe. The greater the risk associated with data use, the lower its value.
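As an illustration only, such drivers are sometimes combined into a single multiplier applied to a baseline valuation. In the following Python sketch the choice of drivers, weights, and scores is hypothetical, not a standard from the valuation literature:

# Combine per-driver scores in [0, 1] into a multiplier for a baseline
# (e.g., cost-based) valuation. Weights are illustrative assumptions.
DRIVER_WEIGHTS = {
    "exclusivity": 0.20, "timeliness": 0.15, "accuracy": 0.20,
    "completeness": 0.15, "consistency": 0.10,
    "usage_restrictions": 0.10, "interoperability": 0.05, "risk": 0.05,
}

def value_multiplier(scores):
    return sum(w * scores.get(d, 0.0) for d, w in DRIVER_WEIGHTS.items())

baseline_value = 1_000_000  # hypothetical replacement cost in dollars
scores = {"exclusivity": 1.0, "timeliness": 0.5, "accuracy": 0.9,
          "completeness": 0.7, "consistency": 0.8,
          "usage_restrictions": 1.0, "interoperability": 0.6, "risk": 0.4}
print(round(baseline_value * value_multiplier(scores)))  # 790000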
The process of realizing value from data can be subdivided into a number of key stages: data assessment, where the current states and uses of data are mapped; data valuation, where data value is measured; data investment, where capital is spent to improve processes, governance and technologies underlying data; data utilization, where data is used in business initiatives; and data reflection, where the previous stages are reviewed and new ideas and improvements are suggested. Methods for valuing data Due to the wide range of potential datasets and use cases, as well as the relative infancy of data valuation, there are no simple or universally agreed upon methods. High option value and externalities mean data value may fluctuate unpredictably, and seemingly worthless data may suddenly become extremely valuable at an unspecified future date. Nonetheless, a number of methods have been proposed for calculating or estimating data value. Information-theoretic characterization Information theory provides quantitative mechanisms for data valuation. For instance, secure data sharing requires careful protection of individual privacy or organization intellectual property. Information-theoretic approaches and data obfuscation can be applied to sanitize data prior to its dissemination. Information-theoretic measures, such as entropy, information gain, and information cost, are useful for anomaly and outlier detection. In data-driven analytics, a common problem is quantifying whether larger data sizes and/or more complex data elements actually enhance, degrade, or alter the data information content and utility. The data value metric (DVM) quantifies the useful information content of large and heterogeneous datasets in terms of the tradeoffs between the size, utility, value, and energy of the data. Such methods can be used to determine if appending, expanding, or augmenting an existing dataset may improve the modeling or understanding of the underlying phenomenon. Infonomics valuation models Doug Laney identifies six approaches for valuing data, dividing these into two categories: foundational models and financial models. Foundational models assign a relative, informational value to data, whereas financial models assign an absolute, economic value. Foundational models Intrinsic Value of Information (IVI) measures data value drivers including correctness, completeness and exclusivity of data and assigns a value accordingly. Business Value of Information (BVI) measures how fit the data is for specific business purposes (e.g., initiative X requires 80% accurate data that is updated weekly – how closely does the data match this requirement?). Performance Value of Information (PVI) measures how the usage of the data affects key business drivers and KPIs, often using a control group study. Financial models Cost Value of Information (CVI) measures the cost to produce and store the data, the cost to replace it, or the impact on cash flows if it was lost. Market Value of Information (MVI) measures the actual or estimated value the data would be traded for in the data marketplace. Economic Value of Information (EVI) measures the expected cash flows, returns or savings from the usage of the data. Bennett institute valuations Research by the Bennett Institute divides approaches for estimating the value of data into market-based valuations and non-market-based valuations. Market based valuations Stock market valuations measure the advantage gained by organizations that invest in data and data capability.
Income based valuations seek to measure the current and future income derived from data. This approach has limitations due to its inability to measure value realized in a wider business or societal ecosystem, or beyond financial transactions involving data. Where income from data is realized through trading data in a marketplace, there are further limitations, as markets fail to describe the full option value of data, and usually lack enough buyers and sellers for the market to settle on a price that truly reflects the economic value of the data. Cost based valuations measure the cost to create and maintain data. This can look at the actual cost incurred, or projected costs if the data needed to be replaced. Non-market based valuations Economic value of open data examines for whom open or free data creates value: organizations that host or steward the data; intermediary organizations or individuals that reuse the data to create products and services; organizations and individuals that use these products and services. Value of personal data can be estimated by asking consumers questions such as how much they would be willing to pay to access a data-privacy service or would charge for access to their personal data. Values can also be estimated by examining the profits of companies that rely on personal data (in 2018, Facebook generated $10 for every active user), and by examining fines handed out to organizations that breach data privacy or other regulations. Other approaches A modified cost value approach suggests refinements to a cost-based valuation approach. It proposes the following modifications: data collected redundantly should be considered to have zero value to avoid double counting; unused data should be considered to have zero value (this can be identified via data usage statistics); the number of users and number of accesses to the data should be used to multiply the value of the data, allowing the historical cost of the information to be modified in the light of its use in practice; the value should be depreciated based on a calculated "shelf life" of the information; the value should be modified by its accuracy relative to what is considered an acceptable degree of accuracy. A consumption-based approach builds on the principles in the modified cost value approach by assigning data users different weightings based on the relative value they contribute to the organization. These weightings are included in the modelling of data usage statistics and further modify the measured value of data. Data hub valuation uses a cost-based approach that measures the cost of data hubs where large repositories of data are stored, rather than measuring the cost of separate datasets. The data hub cost can then be modified, as in the consumption based and modified cost value approaches. Another hub valuation approach uses a modified market value approach, by measuring savings to users from accessing data via hubs versus individually accessing data from producers, and user willingness-to-pay for access to data hubs. A stakeholder approach engages key stakeholders to value data, examining how data supports activities which external stakeholders identify as creating value for them. It uses a model that combines the total value created by the organization, a weighted list of value creating initiatives (as defined by external stakeholders) and an inventory of data assets.
This approach was developed in a collaboration between Anmut, a consultancy firm, and Highways England, a public sector agency for which data valuations based on market value, income gains or economic performance are less meaningful. The approach can also be applied in the private sector. Companies performing Data Valuations Oyster Venture Partners performs Data Valuation as a Service for companies. They provide a defensible data valuation service to determine a monetary value for an organization's data assets. Their services are designed to maximize the value of companies' data assets so that they can be managed as monetary intangible assets. They have realized over $1.5 billion in data asset value. Data Valuation as a Service provides: A data valuation report drawing on 21 different data valuation methodologies and calculations to create a defensible valuation unique to the company and its data. An interrogation of data via data due diligence and for strategy, security, governance, monetization, substantiation, privacy and people. A review of data monetization strategies against each use case in order to glean as much current and future value from the data as possible. Analytic evidence of data value, as well as model forecasts of how data drivers, use cases, and monetization affect the valuation of the data. References Accounting Information economics Data
Data valuation
[ "Technology" ]
2,898
[ "Information technology", "Data" ]
65,682,412
https://en.wikipedia.org/wiki/Laila%20Ohlgren
Ragnhild Laila Lillemor Ohlgren, born Andersson (19 November 1937 – 6 January 2014) was a Swedish telecommunications engineer who is seen as the developer of mobile telephony together with Östen Mäkitalo, both engineers at Telia. In particular, she successfully introduced storage of the telephone number to be dialed in the phone's microprocessor so that connection could be achieved by pressing the call button. This avoided transmission breakages caused by obstacles such as trees during more lengthy traditional dialing. The approach was subsequently adopted worldwide. For her efforts, in 2009 she became the first woman to be awarded the Polhem Prize for technical innovation. Early life Born in Tingshammer just outside Stockholm on 19 November 1937, Ragnhild Laila Lillemor Andersson was the daughter of Johan Arvid Andersson and his wife Sally Elisabeth born Carlsson. The family name was subsequently changed to Tingshammar. She was brought up by her single mother in difficult conditions. She attended the public school in Kungsholmen. While still a teenager, she met the baritone Bo Viktor Ohlgren (1933–2015), who was active in the Mission Covenant Church of Sweden. They married in 1959. Together they had two children, Magnus and Håkan, who both became engineers. Thanks to her father-in-law, who worked for the Swedish telecoms authority Televerket, she began working there in 1956 while continuing her education at home in the evenings. In this way, she succeeded in passing not only the school matriculation examination, but was also able to graduate as an engineer. At Televerket, where she was the only woman in her department, she was promoted to project leader with involvement in the development of mobile telephone technology. From 1969, she was working with Östen Mäkitalo in connection with the Nordic Mobile Telephone (NMT) project.
References Further reading External links Laila Ohlgren utvecklade mobiltelefonin, illustrated biography of Laila Ohlgren in Swedish 1937 births 2014 deaths 20th-century Swedish inventors Women inventors Telecommunications engineers Swedish women engineers 20th-century Swedish women engineers 21st-century Swedish women engineers 20th-century Swedish engineers 21st-century Swedish engineers
Laila Ohlgren
[ "Engineering" ]
791
[ "Telecommunications engineering", "Telecommunications engineers" ]
65,682,506
https://en.wikipedia.org/wiki/Magway%20Ltd
Magway is a UK startup noted for its e-commerce and freight delivery system that aims to transport goods in pods that fit in new and existing -diameter pipes, underground and overground, reducing road congestion and air pollution. It uses linear magnetic motors to shuttle pods, designed to accommodate a standard delivery crate (or tote), at approximately . Founded in 2017 by Rupert Cruise, an engineer on Elon Musk's Hyperloop project, and Phill Davies, a business expert, Magway secured a £0.65 million grant in 2018, through Innovate UK’s 'Emerging and Enabling Technologies' competition, to develop an operational demonstrator. In 2019, £1.58 million was raised through crowdfunding to fund a pilot scheme, and in 2020, Magway was awarded £1.9 million from the UK Government's 'Driving the Electric Revolution Challenge', an initiative launched to coincide with the first meeting of a new Cabinet committee focused on climate change. In September 2020, Magway completed its first full loop of test track in a warehouse in Wembley. Primarily focused on two freight routes from large consolidation centres near London (Milton Keynes, Buckinghamshire and Hatfield, Hertfordshire) into Park Royal, a west London distribution centre, future plans involve installing of track in decommissioned London gas pipelines, to deliver e-commerce goods from distribution centres direct to consumers in the capital. The design of the pipes is similar to the current underground pipe system in small tunnels that distribute water, gas, and electricity in the city. The pods are powered by electromagnetic waves from magnetic motors similar to those used in roller coasters. A proposed route that runs from Milton Keynes to London will have the capacity to transport more than 600 million parcels annually. References Sustainable transport Transport systems Linear induction motors Vacuum systems
Magway Ltd
[ "Physics", "Technology", "Engineering" ]
382
[ "Transport systems", "Vacuum", "Sustainable transport", "Transport", "Physical systems", "Vacuum systems", "Matter" ]
65,682,510
https://en.wikipedia.org/wiki/Bourette
Bourette is a slubbed silk fabric, often blended with other yarns, made from bourette fibers; the name "Bourette" comes from its constituent fiber. It has a rough surface incorporating multicolored threads and knots of spun silk. The fabric is made with silk bourette and wool or cotton yarn. Bourette is a lightweight single cloth with a rough, knotty, and uneven surface. Silk waste Silk waste goes by many names, with floss serving as a general term. Other names are 'schappe' or 'echappe.' "Schapping" is a step in silk production in which fermentation at low temperature softens the gum. Schappe is one of the products made from silk waste (floss). Bourette and Florette Silk waste consists of two types, bourette and florette. Bourette fibers are short compared to florette fibers, which are long silk fibers suitable for products such as combed or worsted materials. Construction Bourette yarn Bourette yarn is a coarse, irregular, slubbed yarn made from the silk waste fiber created during silk processing. Weave The fabric is usually woven in a plain weave, though a twill weave is also possible. The warp is made with wool or other types of yarn, and the weft is bourette. The yarn slubs provide a distinctive texture, with small colored lumps scattered throughout. Uses Bourette was used for dresses and furnishing material. References Woven fabrics Silk
Bourette
[ "Physics" ]
327
[ "Materials stubs", "Materials", "Matter" ]
65,683,901
https://en.wikipedia.org/wiki/Invasive%20Species%20Act
Invasive Species Act may refer to: National Invasive Species Act, a 1996 United States federal law North Texas Invasive Species Barrier Act of 2014, a 2014 Texas state law in the United States Invasive Species Act (Ontario), a 2015 Ontario provincial law in Canada See also Alien Species Prevention and Enforcement Act of 1992 - a United States federal law British Columbia Weed Control Act Hazardous Substances and New Organisms Act 1996 of New Zealand Invasive species
Invasive Species Act
[ "Biology" ]
85
[ "Pests (organism)", "Invasive species" ]
65,684,169
https://en.wikipedia.org/wiki/TV%20accessory
A television accessory (TV accessory) is an accessory that is used in conjunction with a television (TV) or other compatible display devices and is intended to either improve the user experience or to offer new possibilities of using it. History The first TV accessory with which owners could actively influence the content displayed on the screen in real time was the Magnavox Odyssey, the first commercial home video game console, released in September 1972 by Magnavox for a list price of $99.95. One of the first TV accessories available for consumers that could record TV programs was the Clie Pega-VR100K by Sony, released on October 9, 2003, for a list price of $479.99. As of 2017, TV accessories are a rapidly growing market which is expected to grow even more rapidly in the near future. Some of the most popular manufacturers of TV accessories include Sony, Magnavox, Apple, Nvidia, Amazon, Samsung, and Google, as well as many independent third-party suppliers. Types Soundbars A soundbar (also called sound bar or media bar) is a type of loudspeaker that projects audio from a wide enclosure. Soundbars are one of the most popular TV accessories because they are affordable, very easy to install and a relatively large upgrade compared to other accessories, offering much better sound than most integrated TV loudspeakers. Universal remotes A universal remote is a remote control that can be programmed to operate various brands of one or more types of consumer electronics devices. On May 30, 1985, Philips introduced the first universal remote (U.S. Pat. #4774511) under the Magnavox brand name; it was developed by Robin Rumbolt, William "Russ" McIntyre, and Larry Goodson with North American Philips Consumer Electronics (Magnavox, Sylvania, and Philco). Streaming television Streaming television is the digital distribution of television content, such as TV shows, as streaming video delivered over the Internet. Most TVs today are smart TVs, meaning that they can connect to the Internet to use different functions. However, since there are many different TV manufacturers that use different interfaces for these functions, this may be confusing for some users. A dedicated streaming box like an Apple TV, Google Chromecast, Amazon Fire TV Stick or PlayStation TV offers a universal user experience across all TV brands. An Android TV box like the Nvidia Shield TV can also run all Android apps on the Play Store and stream PC gaming content to the TV. HDMI switches An HDMI switch (also known as HDMI switcher or HDMI switching box) is a device that accepts input from multiple HDMI sources and sends the signal you select to your HDTV via an HDMI cable. When they also support USB devices, they are KVM switches. Home video game consoles A home video game console is a type of video game console that is designed to be connected to a display device, such as a television, and an external power source in order to play video games. In contrast to many other TV accessories that improve the user experience, a home video game console offers new possibilities of using a TV, meaning that users can not only determine what should be shown on the television screen, but also actively influence it in real time. References External links What to buy for your new 4K TV on Engadget Best TV accessories: everything your TV needs on TechRadar Multimedia Television technology Consumer electronics Electronics industry
TV accessory
[ "Technology" ]
713
[ "Information and communications technology", "Multimedia", "Television technology", "Electronics industry" ]
65,684,518
https://en.wikipedia.org/wiki/Relationship%20science
Relationship science is an interdisciplinary field dedicated to the scientific study of interpersonal relationship processes. Due to its interdisciplinary nature, relationship science is made up of researchers of various professional backgrounds within psychology (e.g., clinical, social, and developmental psychologists) and outside of psychology (e.g., anthropologists, sociologists, economists, and biologists), but most researchers who identify with the field are psychologists by training. Additionally, the field's emphasis has historically been close and intimate relationships, which includes predominantly dating and married couples, parent-child relationships, and friendships and social networks, but some also study less salient social relationships such as colleagues and acquaintances. History Early 20th century Empirically studying interpersonal relationships and social connection traces back to the early 20th century when some of the earliest focuses were on family relationships from a sociological perspective—specifically, marriage and parenting. In 1938 the National Council on Family Relations (NCFR) was formed and, in 1939, what is now the Journal of Marriage and Family (JMF) was established to publish peer-reviewed research with this emphasis. In the 1930s, 1940s, and 1950s, researchers such as John Bowlby, Harry Harlow, Robert Hinde, and Mary Ainsworth began pursuing the study of mother–infant attachment. In 1949, Reuben Hill developed the ABC-X model, which is a theoretical framework used to examine how families manage and adapt to crises given the resources they have. Then, in the late 1950s and early 1960s, the purview of relationship research began to expand more, beyond the idea of just family research. In 1959, Stanley Schachter published the book The Psychology of Affiliation: Experimental Studies of the Sources of Gregariousness, where he discussed humans' general affiliative needs and how they are intensified by biological responses (e.g., anxiety and hunger). That same year, Harold (Hal) Kelley and John Thibaut published a book, The Social Psychology of Groups, that outlined interdependence theory—an interdisciplinary theory that would become an essential framework for understanding close relationships from a cost-benefit perspective in the years to come. However, this prior interest in relationships was infrequent, and it was not until the late 1960s and early 1970s that the study of relationships truly began to blossom and gain popularity, which was in large part due to the influence of Ellen Berscheid and Elaine Hatfield. 1960s to 2000s Roughly two decades after the aforementioned work of Hill and a decade after the works of Schachter, Kelley, and Thibaut, Ellen Berscheid and Elaine Hatfield (professors at the Universities of Minnesota and Wisconsin, respectively) began studying how two individuals become attracted to one another. Yet, their work went beyond just attraction and began to explore other domains such as the processes of choosing a romantic partner and falling in love, and the centrality of relationships in human health and well-being. However, being a female professor and researcher during the era (when academia was overwhelmingly dominated by white males) was incredibly difficult, and was only made more difficult by the public reception to their phenomena of interest. 
In 1974, their work came under fire after a senator from Wisconsin alleged their research was a waste of taxpayer dollars, in light of Berscheid receiving $84,000 from the National Science Foundation to study love. Despite this immense scrutiny, they nevertheless persisted in pioneering the nascent field of relationship science through the 1970s and into the 1980s through seminal developments such as the distinction between passionate and companionate love and a scale to measure the former. Meanwhile, researchers from across different disciplines had begun to dedicate themselves to the study of relationships. Along with the fast-growing interest came high-impact works. Urie Bronfenbrenner's late 1970s and mid-1980s social–ecological model established key principles that researchers would eventually use ubiquitously to study the impact of socio-contextual factors on relationships. Graham Spanier published the Dyadic Adjustment Scale (DAS) in JMF, which is currently the most widely cited scale of intimate relationship quality. John Bowlby's attachment theory, formalized in the late 1960s and early 1970s, laid the groundwork for the study of parent–child relationships and also helped shape the study of adult relationships in the field. Notably, in 1983, Harold Kelley, Ellen Berscheid, Andrew Christensen, Anne Peplau and their colleagues wrote the book Close Relationships, which provided a comprehensive overview of the field of relationship science in its early stages, and identified the typologies of relationships studied. Also in the 1980s and into the 1990s, Toni Antonucci began exploring friendships and social support among adults, while Arthur Aron was examining the role of relationships with romantic partners, siblings, friends, and parents in individual self-expansion. Additionally, Thomas Malloy and David Kenny developed the social relations model (an early analytic approach to understanding the roles of a person and their partner in their interactions) and Kenny later published his work on Models of Non-independence in Dyadic Research in 1996. With a growing interest in marriage and family therapy in relationship science, in the late 1980s and 1990s, researchers such as Howard Markman, Frank Floyd, and Scott Stanley began developing romantic relationship interventions (with a primary focus on marriages); specifically, in 1995, Floyd and colleagues published the program they developed, called the Prevention and Relationship Enhancement Program (PREP). Interest in and development of relationship education programming increased in the 2000s due to state and federal Healthy Marriage Initiatives, which allocated grant funding to support programming that would impact disadvantaged communities.
Two years later, in 1984, the International Society for the Study of Personal Relationships (ISSPR) was borne out of the ICPR and the Journal of Social and Personal Relationships, the first peer-reviewed journal unique to the field of relationship science, was established. Then in 1987, the Iowa Network of Personal Relationships (which would later be known as the International Network of Personal Relationships; INPR) was formed and Hal Kelley was elected president of ISSPR that same year. A few years later in 1991, Ellen Berscheid (the then-president of ISSPR) announced a merger of ISSPR and INPR, which ultimately fell through until the idea was reignited over a decade later. In 1994, the journal Personal Relationships was formally established by ISSPR and began publishing work in relationship science with Pat Noller as the editor; Anne Peplau became president of ISSPR. Leadership changes continued when Dan Perlman became president of ISSPR in 1996 and began discussing with the president of INPR (at the time, Barbara Sarason) how they might work to better integrate the efforts and goals of the two organizations; in 1998, Jeffry Simpson took over as editor of Personal Relationships. The decades-long, interdisciplinary study of relationships culminated in Ellen Berscheid's 1999 article "The Greening of Relationship Science". Here, Berscheid took the opportunity to close out the 20th century with an overview of the field's past, present, and future. She described the uniqueness and benefits of a well-integrated interdisciplinary field and the advancements that have cemented the field as an "essential science". However, she also discussed the shortcomings that were stifling the progress of the field, and provided specific advice for overcoming such limitations in the upcoming century. Some of this advice included leaving behind traditional analytic approaches that fail to consider non-independence of individuals in relationships, and prioritizing the implementation of existing methods that consider interdependent and dyadic data as well as "creatively constructing new ones". Additionally, she stressed the dire need of the field to inform public opinion and policy related specifically to intimate relationship stability (e.g., quality, dissolution/divorce)—at the time, a hotly debated topic informed by partisan politics rather than empirical evidence, and for scientists to place greater emphasis on the environments in which relationships operate. Her article foreshadowed and influenced the evolution of the field in the 21st century, and its structure has since been adapted by other relationship researchers to reflect on how far the field has come and where it is going. 2000s The year 2000 included new developments in the field such as Nancy Collins and Brooke Feeney's work on partner support-seeking and caregiving in romantic relationships from an attachment theory perspective, and Reis, Sheldon, Gable, and colleagues' article "Daily Well-being: The Role of Autonomy, Competence, & Relatedness". A couple of years later, Rena Repetti, Shelley Taylor, and Teresa Seeman published work that addressed some of the concerns raised in Berscheid's 1999 article as well as used health psychology perspectives to inform relationship science. They empirically demonstrated the negative effects of family home environments with significant conflict and aggression on the mental and physical health of individuals in both childhood and adulthood.
Simultaneously, the early 21st century was a time for major changes in the professional development of the field. In 2004, after previously unsuccessful attempts, ISSPR and INPR merged to form the International Association for Relationship Research (IARR). In 2007, Harry Reis published "Steps Toward the Ripening of Relationship Science", an article inspired by Ellen Berscheid's 1999 article, that recapped and made suggestions for furthering the field. He discussed important works that could be used as a framework for guiding the field, including Thomas Bradbury's 2002 article, "Research on Relationships as a Prelude to Action"—an article focussed on the mechanisms for improvement of relationship research including better integration of research findings, more ethnically and culturally diverse sampling, and interdisciplinary, problem-centered approaches to research. Reis argued for the need to integrate and organize theories, to pay more attention to non-romantic relationships in research and intervention development (romantic relationships having been the field's primary focus), and to use his theory of perceived partner responsiveness to enable this progress. Fast-forwarding to 2012, relationship researchers again heeded Berscheid's advice of using relationship science to inform real-world issues. Eli Finkel, Paul Eastwick, Benjamin Karney, Harry Reis, and Susan Sprecher wrote an article discussing the impact of online dating on relationship formation and both its positive and negative implications for relationship outcomes compared to traditional offline dating. Additionally, in 2018, Emily Impett and Amy Muise published their follow-up to Berscheid's article, "The Sexing of Relationship Science: Impetus for the Special Issue on Sex and Relationships". Here, they called on the field to draw more attention to and place greater weight on the role of sexual satisfaction; they identified this area of research as nascent but fertile territory to explore sexuality in relationships and establish it as an integral part of relationship science. Types of relationships studied The field recognizes that, for two individuals to be in the most basic form of a social relationship, they must be interdependent—that is, have interconnected behaviors and mutual influence on one another. Personal relationships A relationship is said to be personal when there is not only interdependence (the defining feature of all relationships), but when two people recognize each other as unique and unable to be replaced. Personal relationships can include colleagues, acquaintances, family members, and others, so long as the criteria for the relationship are met. Close relationships The definition of close relationships that is frequently referred back to is one from Harold Kelley and colleagues' 1983 book, Close Relationships. This asserts that a close relationship is "one of strong, frequent, and diverse interdependence that lasts over a considerable period of time". This definition indicates that not even all personal relationships may be considered close relationships. Close relationships can include family relationships (e.g., parent–child, siblings, grandparent–grandchild, in-laws, etc.) and friendships. Intimate relationships What defines a relationship as intimate are the same features that comprise a close relationship (i.e., must be personal, must have bidirectional interdependence, and must be close), but there must also be a shared sexual passion or the potential to be sexually intimate.
Intimate relationships can include married couples, dating partners, and other relationships that satisfy the aforementioned criteria. Theories Social exchange theory Social exchange theory was developed in the late 1950s and early 1960s as an economic approach to describing social experiences. It addresses the transactional nature of relationships whereby people determine how to proceed in a relationship after assessing the costs versus the benefits. A prominent subset that secured the place of social exchange theory in relationship science is interdependence theory, which was articulated in 1959 by Harold Kelley and John Thibaut in The Social Psychology of Groups. Even though Kelley and Thibaut's intent was to discuss the theory as it applied to groups, they began by exploring the effects of mutual influence as it pertains to two people together (i.e., a dyad). They expanded upon this process at the dyadic level in later years, further developing the idea that people in relationships 1) compare the overall positive to overall negative outcomes of their relationship (i.e., outcome = rewards - costs), which they then 2) compare to what they expect to get or think they should be getting out of the relationship (i.e., comparison level or "CL") to determine how satisfied they are (i.e., satisfaction = outcome - CL), and finally 3) compare the outcome of their relationship to the possible options of being either in another relationship or not in any relationship at all (i.e., comparison level for alternatives or "CLalt") to determine how dependent they are on the relationship/their partner (i.e., dependence = outcome - CLalt). They described this as having practical and important implications for commitment in a relationship such that those less satisfied by and less dependent on their partner may be more inclined to end the relationship (e.g., divorce, in the context of a marriage). Interdependence theory has also been the basis of other influential works, such as Caryl Rusbult's investment model theory. The investment model (later known as the 'investment model of commitment processes') directly adopts the principles of interdependence theory and extends them by asserting that the magnitude of an individual's investment of resources in the relationship increases the costs of leaving the relationship, which decreases the value of alternatives, and therefore increases commitment to the relationship. Social learning theory Social learning theory can be traced back to the 1940s and originated from the ideas of behaviorists like Clark L. Hull and B. F. Skinner. However, it was notably articulated by Albert Bandura in his 1971 book, Social Learning Theory. It is closely related to social exchange theory (and the subsequently developed interdependence theory), but focuses more on drawbacks and rewards found directly in behavior and interactions (e.g., being distant vs. displaying affection) as opposed to broad costs and benefits. In the context of close and intimate relationships, it emphasizes that partners' behaviors (e.g., displays of empathy during a conversation) are central in that they not only invoke an immediate response, but teach one another what to believe and how to feel about their relationship (e.g., feeling secure and trusting), which affects how satisfied one is—a process that is described as cyclical.
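To make the interdependence-theory quantities above concrete, the following short Python illustration works through the three comparisons with hypothetical values:

# Worked illustration of interdependence theory; the numbers are hypothetical.
rewards, costs = 8, 3          # overall positives and negatives of the relationship
CL = 4                         # comparison level: what one expects to get
CL_alt = 6                     # comparison level for alternatives

outcome = rewards - costs      # 5: what the relationship actually yields
satisfaction = outcome - CL    # 1: slightly better than expected
dependence = outcome - CL_alt  # -1: an alternative looks better

# A satisfied but non-dependent partner (satisfaction > 0, dependence < 0)
# may still be inclined to end the relationship.
print(outcome, satisfaction, dependence)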
Social learning theory as it applies to relationship science led to the development of other prominent theories such as Gerald Patterson's coercion theory, outlined in his book, Coercive Family Process. Coercion theory focuses on why people end up in and stay in unhealthy relationships by explaining that individuals unintentionally reinforce each other's bad behaviors. This pattern is also described as cyclical where partners will continue to behave in a certain, negative way (e.g., nagging) when their partner reinforces said behavior (e.g., does what partner is requesting through nagging), which tells them that their negative behavior is effective at getting the outcome they desired. Attachment theory Attachment theory was formalized in a trilogy of books, Attachment and Loss, published in 1969, 1973, and 1980 by John Bowlby. The theory was originally developed to pertain to parent–child relationships, and more specifically during infancy. This idea that children rely on a primary caregiver—an attachment figure—to feel safe and confident to explore the world (a secure base) and come back to being loved, accepted, and supported (a safe haven) has been applied extensively to adult relationships. This was first applied by Cindy Hazan and Phillip Shaver in 1987, specifically in the context of romantic relationships. Their research found that not only were attachment styles (i.e., secure, avoidant, anxious/ambivalent) relatively stable from infancy and into adulthood, but that these three major styles predicted the ways in which adults experienced romantic relationships. This spawned nearly three-and-a-half decades of research exploring the importance of attachment processes in childhood (i.e., parent-child relationships) and their predictive value in adult relationship formation and maintenance (i.e., romantic partnerships, friendships). Influential people who have studied close and intimate relationships from an attachment perspective include Nancy Collins, Jeffry Simpson, and Chris Fraley. Nancy Collins and Stephen Read (1990) developed one of the most widely cited and used scales assessing adult attachment styles and, additionally, their dimensions. Their work found three dimensions and investigated the extent to which they applied to individual self-esteem, trust, etc. as well as gender differences in their relevance to relationship quality in dating couples. Jeffry Simpson has conducted extensive research on the influence of attachment styles on relationships, including documenting more negative and less positive emotions expressed in a relationship by individuals who were either anxious or avoidant. Chris Fraley's work on attachment includes a prominent study that used item response theory (IRT) to explore the psychometric properties of self-report adult attachment scales. His findings indicated very low levels of desirable psychometric properties in three out of four of the most commonly used adult attachment scales. Among improvements to existing scales, he made suggestions for the future development of adult attachment scales, including more discriminating items in the secure region and additional items to tap into the low ends of anxiety and avoidance dimensions. Evolutionary theories Evolutionary psychology as it pertains to relationship science is a collection of theories that aim to understand mating behaviors as a product of our ancestral past and adaptation. 
This set of perspectives has a common thread that links the modern-day study of relationship processes and behaviors to adaptive responses and features that were developed to maximize reproductive fitness. Sexual selection says that success in competition for mates happens for those who possess traits that are more attractive to potential mating partners. Researchers have also considered the theory of parental investment, where females (compared to males) have more to lose and ancestrally were therefore more selective in mate selection; this is one facet of many observed sex differences in mate selection where males and females seek and prefer certain traits. These theoretical perspectives have been implemented widely in the study of relationships both on their own and in an integrated approach (e.g., considering cultural context). Prominent works that have taken the evolutionary approach to studying relationship formation and processes include a review of existing research by Steven Gangestad and Martie Haselton (2015) that revealed differences in both women's sexual desires and men's reactions to women across the ovulation cycle. David Buss has extensively studied sex differences in cross-cultural mate selection, jealousy, and other relationship processes through research that integrates evolutionary perspectives with socio-cultural contexts (e.g., "Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures"; "Sex differences in jealousy: Evolution, physiology, and psychology", etc.). Additionally, Jeffry Simpson and Steven Gangestad have published widely cited work on relationship processes from an evolutionary lens, including research on human mating that discusses trade-offs (faced by females selecting a mate) between a potential mate's genetic fitness for having children and their willingness to help in child-rearing. Social ecological theories Social ecology—derived from sociology and anthropology—approaches the study of people in a way that considers the environment or context in which people live. Social ecological models, as they pertain to relationships, explain relationship processes from a lens that considers external forces acting upon people in a relationship, whether they be family members, romantic partners, or friends. Reuben Hill articulated one of the earliest documented social ecological models pertaining to relationship science—specifically families—in 1949. This is known as the ABC-X model or crisis theory. The 'A' in the model indicates a stressor; the 'B' indicates resources available to handle the stressor (both tangible and emotional); the 'C' indicates the interpretation of the stressor (whether it is perceived as a threat or manageable obstacle); finally, the 'X' indicates the crisis (the overall experience and response to the stressor that either strengthens or weakens families/couples). In 1977, 1979, and 1986, Urie Bronfenbrenner published a model that integrated the multiple different levels or domains of an individual's environment. It was first developed to apply to child development, but has been widely applied in relationship science. The first level is the microsystem, which contains the single, immediate context people or dyads (e.g., couple, parent-child, friends) directly find themselves in—such as a home, school, or work. The second level is the mesosystem, which considers the combined effects of two or more contexts/settings.
The third level is the exosystem, which also considers the effects of two or more contexts, but specifically contains at least one context that the individual or dyad is not directly in (e.g., government, social services) but that affects an environment they are directly in (e.g., home, work). The fourth level is the macrosystem, which comprises the broader cultural and social attitudes that affect an individual. Finally, the chronosystem is the broadest level, specifically the dimension of time as it relates to an individual's context changes and life events. See Figure 2. Researchers in relationship science have used social ecological models to study changes and stressors in relationships over time, and how couples, families, or even friends manage them given the contexts in which they evolve. Applications of social ecological models in relationship research can be seen in influential works such as Benjamin Karney and Thomas Bradbury's Vulnerability-Stress-Adaptation (VSA) model. The VSA model is a theoretical approach that enables researchers to study the impact of stressful events on relationship quality and stability over time (e.g., to determine risk of divorce/relationship dissolution), given a couple's capacity to manage and adapt to such events. See Figure 3. Relational mobility In the early 2000s, a Japan-based research team defined relational mobility as a measure of how much choice individuals have in terms of whom to form relationships with, including friendships, romantic partnerships, and work relations. Relational mobility is low in cultures with a subsistence economy that requires tight cooperation and coordination, such as farming, while it is high in cultures based on nomadic herding and in urban industrial cultures. A cross-cultural study found that relational mobility is lowest in East Asian countries where rice farming is common, and highest in South American countries. Differences in relational mobility can explain cultural differences in certain norms and behaviors, including conformity, shame, and business strategies, as well as differences in social cognition, including attribution and locus of control. Methodologies Relationship science has relied on a variety of methods for both data collection and analysis. These include, but are not limited to, cross-sectional data, longitudinal data, self-report studies, observational studies, experimental studies, repeated measures designs, and mixed-methods procedures. Self-report data Relationship science relies predominantly on individuals' self-reported evaluations and descriptions of their own relationship processes. This method of data collection often takes the form of a questionnaire that requires either selecting from a set of fixed responses or providing open-ended responses. It is often the simplest way to study relationships, but researchers have cautioned against relying solely on this form of measurement. Among the issues that arise with the use of self-report data are the difficulty of accurately answering retrospective questions and of answering questions that require introspection. Recently, particularly in light of the anti-false-positive movement in psychology, relationship scientists have encouraged the use of multiple methods (e.g., self-report data, observational data) to study the same or similar constructs in different ways. 
However, one identified benefit of using self-report questionnaires specifically is that many of the measures used to study relationships are standardized and therefore used across multiple studies, so that findings across studies can provide insight into replicability. Experimental data Some of the earliest studies conducted in relationship science were done using laboratory experiments. The field has since used experimental methods in order to infer causality about a relationship phenomenon of interest. This requires identification of a dependent variable that will be the measured effect (e.g., performance on a stressful task) and an independent variable that will be what is manipulated (e.g., social support vs. no social support). However, a common concern with experimental study of relationship phenomena is the potential lack of generalizability of laboratory findings to real-world contexts. Observational data Observational (or behavioral) data in relationship science is a method of making inferences about relationship processes that relies on an observer's reports, rather than a participant's own reports of their relationship. This is often done by videotaping or audio-recording participants' interactions with one another and having outside observers systematically identify (i.e., code) aspects of interest, dependent upon the type of relationship being studied (e.g., patience exhibited during a parent-child activity; affection exhibited during a romantic couple's discussion). This method enables researchers to study aspects of a relationship that may be subconscious to participants or would otherwise not be detectable through self-report measures. However, a hurdle of observational research is establishing strong inter-rater reliability—that is, the level of agreement between observers who are coding the observations. Additionally, as participants often know they are being watched or recorded and such interactions often take place in laboratory settings, observational data collection presents the issue of reactivity—when individuals change their natural response or behavior because they are being watched. Longitudinal data A cornerstone of the research done in relationship science is the use of multi-wave assessments and subsequent repeated measures designs, multi-level modeling (MLM), and structural equation modeling (SEM). As relationships themselves are longitudinal, this approach enables researchers to assess change across time within and/or between relationships. However, most of the longitudinal research in relationship science focuses on marriages, and some on parent-child relationships, while relatively few longitudinal studies of friendships or other types of relationships exist. Within longitudinal research, there is additional variation in the length of the study; while some studies follow individuals, couples, parents and children, etc. over the course of a few years, some study change processes across the lifespan and in multiple different relationships (e.g., from infancy into adulthood). Additionally, the frequency of, and intervals of time between, multi-wave assessments vary considerably in longitudinal research; one might employ intensive longitudinal methods that require daily assessments, methods that require monthly assessments, or methods that require annual or bi-annual assessments. 
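To make the multi-level approach concrete, the sketch below simulates five annual waves of relationship-satisfaction scores for a sample of couples and fits a random-intercept growth model. It is a minimal illustration only: the dataset, variable names, and effect sizes are hypothetical, and the pandas and statsmodels libraries are assumed to be available.

```python
# Minimal sketch: a random-intercept growth model for multi-wave
# (longitudinal) relationship data. All values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_couples, n_waves = 100, 5  # e.g., annual assessments over five years

# Each couple gets its own baseline (between-dyad variation) and
# satisfaction declines slightly at each wave (within-dyad change).
couple = np.repeat(np.arange(n_couples), n_waves)
wave = np.tile(np.arange(n_waves), n_couples)
baseline = rng.normal(7.0, 1.0, n_couples)
satisfaction = baseline[couple] - 0.15 * wave + rng.normal(0, 0.5, couple.size)

df = pd.DataFrame({"couple": couple, "wave": wave,
                   "satisfaction": satisfaction})

# Fixed effect of time, random intercept per couple (the MLM part).
model = smf.mixedlm("satisfaction ~ wave", df, groups=df["couple"])
result = model.fit()
print(result.summary())  # the 'wave' coefficient estimates change per wave
```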
Interdependent and dyadic data An important turning point in the analytic approach to studying relationships came with the advent of statistically modeling interdependence and dyadic processes—that is, studying two individuals (or even two groups of individuals) simultaneously to account for the overlap in, or interdependence of, relationship processes. In 2006, David Kenny, Deborah Kashy, and William Cook published the book Dyadic Data Analysis, which has been widely cited as a tool for understanding and measuring non-independence. The book includes information and instructions on using MLM, SEM, and other statistical methods to study both between- and within-dyad phenomena. Several models have been articulated for these purposes in both journal articles and the 2006 Kenny, Kashy, & Cook text, including (1) the common fate model, (2) the mutual influence (or dyadic feedback) model, (3) the dyadic score model, and (4) the most commonly used, the actor-partner interdependence model (APIM). Common fate model The common fate model is a method of estimating not how two people influence one another, but how two people are similarly influenced by an external force. Dyadic means are computed for both the independent and dependent variables to estimate the effects of the dyad as a single unit. The between-dyad correlations are adjusted by the within-dyad correlations in order to remove individual-level variation. The two partners' predictor and outcome variables are observed variables that are used to compute latent variables (i.e., the 'common fate variables'). See Figure 4. Mutual influence (dyadic feedback) model The mutual influence or dyadic feedback model is a method of modeling the reciprocal influence of the partners' predictors on one another and of the partners' outcomes on one another. Compared to the APIM, this model assumes there are no partner effects and no types of non-independence other than those captured in the predictor-predictor and outcome-outcome paths. Additionally, it assumes equal effects of the partners' influence on one another (i.e., partner 1 influences partner 2 to the same degree that partner 2 influences partner 1). See Figure 5. Dyadic score model The dyadic score model uses the two partners' observed predictor and outcome variables to compute both dyadic 'level' and 'difference' latent variables. The level variables are similar to the common fate latent variables, while the difference variables represent the within-dyad contrast. See Figure 6. Actor-partner interdependence model (APIM) The APIM is a method of accounting for dyadic interdependence via both actor and partner effects. Specifically, it models the influence of each partner's predictor(s) on that partner's own outcome (actor effects) and on the other partner's outcome (partner effects). This is modeled using regression, MLM, or SEM procedures. See Figure 7. 
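As a concrete sketch, the snippet below simulates dyadic data and recovers actor and partner effects with two ordinary least squares regressions, one per partner's outcome. This is a simplified illustration rather than a full APIM treatment (which would more typically use MLM or SEM to handle correlated residuals); the variable names, effect sizes, and use of the statsmodels library are assumptions of the example, not taken from the literature.

```python
# Minimal sketch of the actor-partner interdependence model (APIM)
# for distinguishable dyads, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200  # number of couples

# Predictors (e.g., daily stress) correlate across partners; outcomes
# (e.g., satisfaction) carry actor (-0.5) and partner (-0.2) effects.
stress_1 = rng.normal(0, 1, n)
stress_2 = 0.3 * stress_1 + rng.normal(0, 1, n)
sat_1 = -0.5 * stress_1 - 0.2 * stress_2 + rng.normal(0, 1, n)
sat_2 = -0.5 * stress_2 - 0.2 * stress_1 + rng.normal(0, 1, n)

df = pd.DataFrame({"stress_1": stress_1, "stress_2": stress_2,
                   "sat_1": sat_1, "sat_2": sat_2})

# Partner 1's outcome: own stress is the actor effect, partner's stress
# is the partner effect; likewise (mirrored) for partner 2.
fit_1 = smf.ols("sat_1 ~ stress_1 + stress_2", df).fit()
fit_2 = smf.ols("sat_2 ~ stress_2 + stress_1", df).fit()
print(fit_1.params)
print(fit_2.params)
```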
See also Aristotle Plato Stanley Schachter Harold Kelley John Bowlby Urie Bronfenbrenner Ellen Berscheid Elaine Hatfield Caryl Rusbult David A. Kenny Mary Ainsworth Harry Harlow Robert Hinde Psychology Anthropology Economics Sociology Biology Social psychology Clinical psychology Developmental psychology Cognitive psychology Cognitive behavioral therapy Family therapy Systems ecology Social ecological model Social exchange theory Social learning theory Attachment theory Human mating strategies Strategic pluralism Social connection Human bonding Physical intimacy Emotional intimacy Social relationships Interpersonal relationships Interpersonal ties Friendship Family Parenting Sibling relationship Love Platonic love Intimate relationship Romance Marriage Dating References Interpersonal relationships
Relationship science
[ "Biology" ]
6,529
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
65,685,181
https://en.wikipedia.org/wiki/Moodies%20Group
The Moodies Group is a geological formation in South Africa and Eswatini. It has the oldest well-preserved siliciclastic tidal deposits on Earth, where microbial mats flourished. See also Archean life in the Barberton Greenstone Belt Fig Tree Formation Onverwacht Group References Geologic groups of Africa Geologic formations of South Africa Geology of Eswatini Archean Africa Fossiliferous stratigraphic units of Africa Paleontology in South Africa Origin of life
Moodies Group
[ "Biology" ]
96
[ "Biological hypotheses", "Origin of life" ]
65,685,645
https://en.wikipedia.org/wiki/List%20of%20plant%20genus%20names%20with%20etymologies%20%28Q%E2%80%93Z%29
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Many of these plants are listed in Stearn's Dictionary of Plant Names for Gardeners. William Stearn (1911–2001) was one of the pre-eminent British botanists of the 20th century: a Librarian of the Royal Horticultural Society, a president of the Linnean Society and the original drafter of the International Code of Nomenclature for Cultivated Plants. The first column below contains seed-bearing genera from Stearn and other sources as listed, excluding names with missing derivations and those names that no longer appear in more modern works, such as Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase. Plants of the World is also used for the family and order classification for each genus. The second column gives a meaning or derivation of the word, such as a language of origin. The last two columns indicate additional citations. Key Latin: = derived from Latin (otherwise Greek, except as noted) Ba = listed in Ross Bayton's The Gardener's Botanical Bu = listed in Lotte Burkhardt's Index of Eponymic Plant Names CS = listed in both Allen Coombes's The A to Z of Plant Names and Stearn's Dictionary of Plant Names for Gardeners G = listed in David Gledhill's The Names of Plants St = listed in Stearn's Dictionary of Plant Names for Gardeners Genera See also Glossary of botanical terms List of Greek and Latin roots in English List of Latin and Greek words commonly used in systematic names List of plant genera named for people: A–C, D–J, K–P, Q–Z List of plant family names with etymologies Notes Citations References Greek words and phrases Taxonomy (biology) Glossaries of biology Gardening lists Genus names with etymologies (Q–Z) Etymologies Wikipedia glossaries using tables
List of plant genus names with etymologies (Q–Z)
[ "Biology" ]
483
[ "Lists of plants", "Plants", "Lists of biota", "Taxonomy (biology)", "Taxonomic lists", "Glossaries of biology" ]
65,687,768
https://en.wikipedia.org/wiki/Squad%2044
Squad 44 (formerly Post Scriptum) is a 2018 tactical first-person shooter video game developed by the Canadian studio Offworld Industries alongside the British studio Mercury Arts, and published by Offworld Industries. It is set during World War II, specifically during Operation Market Garden, Operation Overlord, the Battle of France, the Battle of Crete, the Battle of the Bulge, and the Battle of Iwo Jima. The game features several playable factions — the United States Army, United States Marine Corps, British Army, Polish Armed Forces in the West, French Army, Wehrmacht, Waffen-SS, Australian Army, New Zealand Military Forces, Hellenic Army, and the Imperial Japanese Army — with some of them having multiple different playable units. Development It was initially developed as a mod for Offworld Industries' Squad, with Post Scriptum becoming its own standalone game by the French studio Periscope Games with assistance from their publisher, Offworld Industries. It was released on 9 August 2018 through the video game distributor Steam. On November 20, 2023, Offworld Industries announced the acquisition of Post Scriptum and revealed it would be working with Mercury Arts on the game. Offworld also said it was working on a 2024 roadmap of updates to be shared with the community at a later time. On December 14, 2023, the game was rebranded to Squad 44. Content There are multiple maps for each battle included in the game. Operation Market Garden (Chapter 1: The Bloody Seventh) Driel Heelsum Oosterbeek Doorwerth Arnhem Veghel Best Grave Battle of France (Chapter 2: Plan Jaune) Stonne Dinant Maginot Operation Overlord (Chapter 3: Day of Days) Utah Beach St. Mere Eglise Carentan Battle of the Bulge (Chapter 4: Watch on the Rhine) Foy Haguenau Colmar Battle of Crete (Chapter Mercury) Maleme Rethymno Battle of Iwo Jima (The Pacific Front) Iwo Jima References External links Steam store page 2018 video games Asymmetrical multiplayer video games First-person shooters Multiplayer online games Tactical shooters Unreal Engine 4 games Video games developed in the United Kingdom Video games set in the Netherlands Video games set in France Video games set in Belgium Video games set in Greece Video games set in Japan Windows games Windows-only games World War II first-person shooters Offworld Industries games
Squad 44
[ "Physics" ]
486
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
65,688,230
https://en.wikipedia.org/wiki/Transition%20metal%20thioether%20complex
Transition metal thioether complexes comprise coordination complexes of thioether (R2S) ligands. The inventory is extensive. Dimethylsulfide complexes As the simplest thioether, dimethyl sulfide forms complexes that are illustrative of the class. Well-characterized derivatives include cis-[TiCl4L2], VCl3L2, NbCl5L, NbCl4L2, Cr(CO)5L, CrCl3L3, RuCl2L4, RuCl3L3, RhCl3L3, cis- and trans-[IrCl4L3]−, cis-MCl2L2 (M = Pd, Pt), [PtCl3L]−, and cis- and trans-[PtCl4L2] (L = SMe2). With respect to donor properties, dimethyl sulfide is a soft ligand with donor properties weaker than those of phosphine ligands. Such complexes are generally prepared by treating the metal halide with the thioether. Chloro(dimethyl sulfide)gold(I) can, however, be prepared by redox reaction of elemental gold and DMSO in the presence of hydrochloric acid. Stereochemistry Thioether complexes feature pyramidal sulfur centers. Typical C-S-C angles are near 99° in both free thioethers and their complexes. The C-S distance in dimethylsulfide is 1.81 Å, which is also unaffected in its complexes. The stereochemistry of thioether complexes has been extensively studied. Unsymmetrical thioethers, e.g., SMeEt, are prochiral ligands, and their complexes are chiral. One example is [Ru(NH3)5(SMeEt)]2+. The complex cis-VOCl2(SMeEt)2 exists as a meso diastereomer and a pair of enantiomers. In complexes of thioethers of the type S(CH2R)2 (R ≠ H), the methylene protons are diastereotopic. Examination of the NMR spectra of such complexes reveals that they undergo inversion at sulfur, without dissociation of the M-S bond. Thioether as a bridging ligand Unlike ethers, thioethers occasionally serve as bridging ligands. The complex Nb2Cl6(SMe2)3 is one such example. It adopts a face-sharing bioctahedral structure with a Nb(III)=Nb(III) bond, spanned by two chloride and one dimethylsulfide ligands. The complex Pt2Me4(μ-SMe2)2 is a source of "PtMe2". Complexes of chelating thioether ligands Thiacrown ligands are analogous to crown ethers. The best studied thiacrown ligands have the formula (SCH2CH2)n (n = 3,4,5,6). The tridentate tri-thioether 9-ane-S3 forms extensive families of complexes of the type M(9-ane-S3)L3 and [M(9-ane-S3)2]2+. Examples of Cu(II)-thioether complexes were prepared from 14-ane-S4 and 15-ane-S5. The hexadentate ligand 18-ane-S6 also forms an extensive family of complexes, including unusual examples of Pd(III) and Ag(II). Examples of homoleptic complexes [M(SR2)6]n+ are otherwise rare. Occurrence Thioether complexes in nature arise from coordination of the thioether sulfur found in the amino acid methionine. One of the axial ligands in cytochrome c is illustrative. Methionine sulfur weakly binds to copper in azurin. References Coordination complexes
Transition metal thioether complex
[ "Chemistry" ]
839
[ "Coordination chemistry", "Coordination complexes" ]
61,859,413
https://en.wikipedia.org/wiki/John%20Shiers
John Shiers (1952–2011) was a Manchester-based British left-wing gay rights campaigner. He was also a leading campaigner and a founding member of the Hulme Asbestos Action Group. He died in 2011 of mesothelioma, a type of cancer closely linked to exposure to asbestos. The legal case against his landlord, citing asbestos in his council home as leading to his death, was one of the first of its type. Early life In his earlier years he attended Lancaster University and was a member of the Gay Liberation Front there, travelling to London for conferences. In 1978 the journal Gay Left published an article he wrote, 'Two Steps Forward, Two Steps Back' ('Coming Out Six Years On by John Shiers'). On completing his studies at York University he was attracted to the lifestyle of the commercial gay scene at the emerging Manchester gay village at Canal Street. In 1978 he lamented how it had changed him, rather than him bringing change in accordance with the ideals he had learned from the Gay Liberation Front. He recalled bouts of depression and how for some time he met his partners through cottaging. He described his sexual encounters as 'commoditised' and not associated with emotion. He was a member of the organisation 'Friend' at this time. Career In his early work career as a local authority officer he was a strong influence in decentralising council services in Manchester and Rochdale in the 1980s. In the early 1990s he worked with Save The Children, where he became a consultant and charity trustee. He was influential in shaping services for children and young people in the North West England region. By the middle of the 1990s he had changed career to psychosynthesis. To address further bouts of depression, he studied and qualified in the field and went on to build a psychotherapy practice in Didsbury. Campaigns In the late 1970s, when he first arrived in the city, he moved into local authority housing in Hulme owned by Manchester City Council. Whilst living in Hulme he discovered that his and thousands of his neighbours' council properties were riddled with asbestos. He had been one of the first to speak out about the asbestos in the properties. After his death Manchester Council admitted limited liability, in what was one of the first legal cases of this type. In 1988 he was instrumental in organising a demonstration against Section 28 in Manchester, attended by 25,000 people. In 2011 he was a guest speaker at the Greater Manchester Asbestos Victims Support Group's Action Mesothelioma Day and spoke, before a gathering that included Members of Parliament Lisa Nandy, Kate Green, Tony Lloyd and Paul Goggins alongside Dr Linda Waldman, co-author of a report on asbestos in social housing, of the need for greater access to information about asbestos in social housing, highlighting the risks arising from the absence of any requirement for social housing landlords to inform tenants of the presence of asbestos. References 1952 births 2011 deaths English LGBTQ rights activists Asbestos Environmental lawyers Product liability English gay men Deaths from cancer in England Deaths from mesothelioma 20th-century English LGBTQ people 21st-century English LGBTQ people
John Shiers
[ "Environmental_science" ]
628
[ "Toxicology", "Asbestos" ]
61,860,066
https://en.wikipedia.org/wiki/Kieka%20Mynhardt
Christina Magdalena (Kieka) Mynhardt (née Steyn; born 1953) is a South African born Canadian mathematician known for her work on dominating sets in graph theory, including domination versions of the eight queens puzzle. She is a professor of mathematics and statistics at the University of Victoria in Canada. Education and career Mynhardt was born in Cape Town, and was a student at the Hoërskool Lichtenburg. She completed her Ph.D. at Rand Afrikaans University (now incorporated into the University of Johannesburg) in 1979, supervised by Izak Broere. Her dissertation, The -constructability of graphs, gave a conjectured construction for the planar graphs by repeatedly adding vertices with prescribed neighborhoods. She became a faculty member at the University of Pretoria and then the University of South Africa before moving to the University of Victoria. Recognition In 1995, Mynhardt was selected as one of the founding members of the Academy of Science of South Africa. She was a 2005 recipient of the Dignitas Award of the University of Johannesburg Alumni. References External links Home page Faces of UVic Research: Kieka Mynhardt (video) 1953 births Living people South African mathematicians Members of the Academy of Science of South Africa Canadian mathematicians Women mathematicians Graph theorists University of Johannesburg alumni Academic staff of the University of Pretoria Academic staff of the University of South Africa Academic staff of the University of Victoria
Kieka Mynhardt
[ "Mathematics" ]
284
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
61,860,125
https://en.wikipedia.org/wiki/Renata%20Mansini
Renata Mansini (born 22 August 1968) is an Italian applied mathematician, economist, and operations researcher known for her research on problems in mathematical optimization including portfolio optimization and vehicle routing. She is a professor of operations research at the University of Brescia. Education Mansini earned a laurea in economics and business from the University of Brescia in 1991–1992, winning a prize from the Associazione Italiana di Studio del Lavoro for the best thesis in applied mathematics. She completed a doctorate in 1996–1997 at the University of Bergamo, with the dissertation Modelli di programmazione lineare mista intera per problemi finanziari: analisi, algoritmi e risultati computazionali [mixed integer linear programming models for financial problems: analysis, algorithms, and computational results]. Book Mansini is the co-author, with Włodzimierz Ogryczak and M. Grazia Speranza, of the book Linear and Mixed Integer Programming for Portfolio Optimization (EURO Advanced Tutorials on Operational Research, Springer, 2015). References External links 1968 births Living people Italian women engineers Italian economists Italian women economists Italian women mathematicians Applied mathematicians Operations researchers University of Brescia University of Bergamo alumni
Renata Mansini
[ "Mathematics" ]
261
[ "Applied mathematics", "Applied mathematicians" ]
61,860,767
https://en.wikipedia.org/wiki/Hafnium%E2%80%93tungsten%20dating
Hafnium–tungsten dating is a geochronological radiometric dating method utilizing the radioactive decay system of hafnium-182 to tungsten-182. The half-life of the system is 8.9 million years. Today hafnium-182 is an extinct radionuclide, but the hafnium–tungsten radioactive system is useful in studies of the early Solar system since hafnium is lithophilic while tungsten is moderately siderophilic, which allows the system to be used to date the differentiation of a planet's core. It is also useful in determining the formation times of the parent bodies of iron meteorites. The use of the hafnium-tungsten system as a chronometer for the early Solar system was suggested in the 1980s, but did not come into widespread use until the mid-1990s, when the development of multi-collector inductively coupled plasma mass spectrometry enabled the use of samples with low concentrations of tungsten. Basic principle The radioactive system behind hafnium–tungsten dating is a two-stage decay as follows: 182Hf → 182Ta + e− + ν̄e, followed by 182Ta → 182W + e− + ν̄e. The first decay has a half-life of 8.9 million years, while the second has a half-life of only 114 days, such that the intermediate nuclide tantalum-182 (182Ta) can effectively be ignored. Since hafnium-182 is an extinct radionuclide, hafnium–tungsten chronometry is performed by examining the abundance of tungsten-182 relative to other stable isotopes of tungsten, of which there are effectively five in total, including the extremely long-lived isotope tungsten-180, which has a half-life much longer than the current age of the universe. The abundance of tungsten-182 can be influenced by processes other than the decay of hafnium-182, but the existence of a large number of stable isotopes is very helpful for disentangling variations in tungsten-182 due to a different cause. For example, while 182W, 183W, 184W and 186W are all produced by the r- and s-processes, the rare isotope tungsten-180 is only produced by the p-process. Variations in tungsten isotopes caused by r- and s-process nucleosynthetic contributions also result in correlated changes in the ratios 182W/184W and 183W/184W, which means that the 183W/184W ratio can be used to quantify how much of the tungsten-182 variation is due to nucleosynthetic contributions. The influence of cosmic rays is more difficult to correct for, since cosmic ray interactions affect the abundance of tungsten-182 much more than any of the other tungsten isotopes. Nonetheless, cosmic ray effects can be corrected for by examining other isotope systems such as platinum, osmium or the stable isotopes of hafnium, or simply by taking samples from the interior that have not been exposed to cosmic rays, though the latter requires large samples. Tungsten isotopic data are usually plotted in terms of ε182W and ε183W, which represent deviations of the ratios 182W/184W and 183W/184W, in parts per 10,000, relative to terrestrial standards; for example, ε182W = [(182W/184W)sample/(182W/184W)standard − 1] × 10,000. Since Earth is differentiated, its crust and mantle are enriched in tungsten-182 relative to the initial composition of the Solar system. Undifferentiated chondritic meteorites have ε182W ≈ −1.9 relative to Earth, which is extrapolated to give a value of approximately −3.5 for the initial ε182W of the Solar system. Dating planetary core formation A primordial planet is undifferentiated, meaning that it is not layered according to density (with the densest material being towards the interior of the planet). 
When a planet undergoes differentiation, the dense materials, particularly iron, separate from lighter components and sink to the interior, forming the core of the planet. If this process took place relatively early in a planet's history, hafnium-182 would not have had sufficient time to decay to tungsten-182. Since hafnium is a lithophile element, the (undecayed) hafnium-182 would remain in the mantle (i.e. the outer layers of the planet). Then, after some time, the hafnium-182 would decay to tungsten-182, leaving an excess of tungsten-182 in the mantle. On the other hand, if differentiation occurred later in a planet's history, then most of the hafnium-182 would have decayed to tungsten-182 before differentiation began. Being moderately siderophilic, much of the tungsten-182 would sink towards the interior of the planet along with iron. In this scenario, not much tungsten-182 would subsequently be present in the outer layers of the planet. As such, by looking at how much tungsten-182 is present in the outer layers of a planet, relative to other isotopes of tungsten, the time of differentiation can be quantified. Model ages If we have a sample from the mantle (or core) of a body and want to calculate a core formation age from the tungsten-182 abundance, we also need to know the composition of the bulk planet. Since we do not have samples from the core of Earth (or any other intact planet), the composition of chondritic meteorites is generally substituted for that of the bulk planet. Hafnium and tungsten are both refractory elements, so no fractionation between hafnium and tungsten is expected from heating of the planet during or after formation. A model age for the time of core formation can then be calculated using the equation ΔtCF = −(1/λ) ln[(ε182Wsample − ε182Wchondrite) / (f · (ε182Wchondrite − ε182WSSI))], where λ is the decay constant for hafnium-182 (0.078±0.002 Ma−1), the ε182W values are those of the sample, chondritic meteorites (taken to represent the bulk planet) and the Solar System Initial (SSI) value, and f = (180Hf/184W)sample/(180Hf/184W)chondrite − 1 accounts for any differences in the general abundance of hafnium between the sample and chondritic meteorites. Note that this equation assumes that core formation is instantaneous. This can be a reasonable assumption for small bodies, like iron meteorites, but is not true for large bodies like Earth, whose accretion likely took many millions of years. For such bodies, more complex models that treat core formation as a continuous process are more realistic, and should be used. 
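The two-stage model age can be computed in a few lines of code. The sketch below is an illustration only: the ε182W inputs are the rounded figures quoted above for Earth's mantle (0 by definition), chondrites, and the Solar System initial composition, and the fractionation factor of roughly 12 for the bulk silicate Earth is an assumed, order-of-magnitude value.

```python
# Minimal sketch of a two-stage Hf-W model age, using the equation above.
import math

LAMBDA_HF182 = 0.078  # decay constant of 182Hf in Ma^-1 (half-life ~8.9 Ma)

def model_age_ma(eps_sample, eps_chondrite, eps_ssi, f_hf_w):
    """Model age of (instantaneous) core formation, in Ma after CAIs."""
    ratio = (eps_sample - eps_chondrite) / (f_hf_w * (eps_chondrite - eps_ssi))
    return -math.log(ratio) / LAMBDA_HF182

# Illustrative inputs: terrestrial mantle (0), chondrites (~ -1.9),
# Solar System initial (~ -3.5), assumed Hf/W fractionation factor ~12.
print(f"{model_age_ma(0.0, -1.9, -3.5, 12.0):.0f} Ma after CAIs")  # ~30 Ma
```

With these assumed inputs the function returns roughly 30 Ma, the instantaneous end-member of the 30-100 Ma range discussed below for Earth.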
Core formation times for Solar system bodies The method of hafnium-tungsten dating has been applied to many samples from Solar system bodies and used to provide estimates for the date of core formation. For iron meteorites, hafnium-tungsten dating yields ages ranging from less than a million years after the formation of the first solids (calcium-aluminium-rich inclusions, usually called CAIs) to around 3 million years for different meteorite groups. While chondritic meteorites are not differentiated as a whole, hafnium-tungsten dating can still be useful for constraining their formation ages by applying it to smaller melt regions in which metals and silicates have separated. For the very well studied carbonaceous chondrite Allende, this gives a formation age of around 2.2 million years after the formation of CAIs. Martian meteorites have been examined and indicate that Mars may have been fully formed within 10 million years of the formation of CAIs, which has been used to suggest that Mars is a primordial planetary embryo. For Earth, models of accretion and core formation depend strongly on the extent to which giant impacts, such as the one presumed to have formed the Moon, re-mixed the core and mantle; depending on these assumptions, they yield dates of between 30 and 100 million years after CAIs. See also Radiometric dating Isotope geochemistry Planetary differentiation References Radiometric dating Hafnium Tungsten Planetary geology
Hafnium–tungsten dating
[ "Chemistry" ]
1,624
[ "Radiometric dating", "Radioactivity" ]
61,861,701
https://en.wikipedia.org/wiki/Deng%20Hongkui
Deng Hongkui () is a Chinese immunologist and stem cell researcher. He is a Changjiang Professor, the Boya Chair Professor, and Director of the Institute of Stem Cell Research at Peking University. He was awarded US$1.9 million by the Bill & Melinda Gates Foundation for his research on vaccines for HIV and hepatitis C. In 2017, he and Chen Hu engineered resistance to HIV in mice using CRISPR gene editing, and later used the technique for the first time on an AIDS patient. Biography Deng Hongkui entered Wuhan University in 1980, where he earned his B.Sc. in 1984. He then studied at Shanghai Second Medical College and earned his master's degree in 1987. In 1990, he moved to the United States to study at the University of California, Los Angeles, where he earned his Ph.D. in 1995 under the supervision of Eli Sercarz. From 1995 to 1998 he was an Aaron Diamond Postdoctoral Fellow at the New York University School of Medicine, where he conducted research under Dan Littman. From 1998 to 2001, he worked as research director of ViaCell, a stem cell biotech company based in Boston. In 2001, Deng was awarded the prestigious Changjiang Professorship by the Chinese government and returned to China to work at Peking University. He initially worked on treating diabetes using human embryonic stem cells. During the SARS outbreak, he conducted research on SARS treatments and vaccines. In 2006, he was awarded US$1.9 million by the Grand Challenges In Global Health initiative of the Bill & Melinda Gates Foundation for his research on vaccines for HIV and hepatitis C. He became Director of Peking University's Institute of Stem Cell Research in 2013 and was appointed the Boya Chair Professor in 2016. In 2017, Deng and his collaborator Chen Hu of the 307 Hospital used CRISPR gene editing to edit the CCR5 gene in human hematopoietic stem cells, transplanted the cells into mice, and thereby conferred HIV resistance on the animals. They subsequently used the technique to treat an AIDS patient who suffered from acute lymphoblastic leukemia (ALL). It was the first time CRISPR was used on a human HIV patient. Nineteen months later, the patient's ALL was in complete remission. Their research demonstrated the safety of CRISPR for humans, although the therapy was not effective at curing AIDS, as only 5% to 8% of the patient's bone marrow cells carried the edited CCR5 gene, much lower than the ideal 100%. Their findings were published in The New England Journal of Medicine in September 2019. References Living people Chinese expatriates in the United States Chinese immunologists Chinese medical researchers New York University fellows Academic staff of Peking University Shanghai Jiao Tong University alumni Stem cell researchers University of California, Los Angeles alumni Wuhan University alumni Year of birth missing (living people)
Deng Hongkui
[ "Biology" ]
584
[ "Stem cell researchers", "Stem cell research" ]
61,861,844
https://en.wikipedia.org/wiki/East%20and%20West%20Yorkshire%20Union%20Railway
The East and West Yorkshire Union Railway was promoted in 1883 to connect the Hull and Barnsley Railway at Drax with Leeds. The company was unable to raise the money it needed to build the line, and it substantially reduced its scope to connecting collieries around Rothwell with the existing main line network nearby. This was successful, with trains running from 1890, but the company then decided it would find a way to connect to Leeds and operate a much-truncated passenger service from Rothwell. It sponsored the South Leeds Junction Railway to make a connection from Rothwell to the Midland Railway at Stourton; the SLJR was soon re-absorbed by the E&WYUR. The passenger service started on 4 January 1904, but it was a disastrous failure and was withdrawn from 1 October 1904. The E&WYUR continued as a successful mineral railway, being taken into the London and North Eastern Railway at the grouping of the railways in 1923. The network closed in 1966, as the collieries had ceased operation. Background In the last quarter of the nineteenth century there was a considerable upsurge in the coal industry in Yorkshire. At the same time there was dissatisfaction with the facilities provided by the established railway companies, and indeed with their charges. From the railways' point of view, huge capital investment in rolling stock and in infrastructure was being called for at a time when money was limited. The Hull and Barnsley Railway (H&BR) was authorised by Act of 26 August 1880 with the principal object of providing a direct link for mineral traffic from West Yorkshire coalfields to the Port of Hull; the project was partly motivated by resentment at the monopoly of the North Eastern Railway in serving Hull. The Parliamentary Bill for the H&BR had asked for wide-ranging running powers over existing railways to Sheffield, Leeds, Bradford, Huddersfield, Halifax, Manchester and Liverpool, but these had all been struck out when the Act received the Royal Assent. The line was a considerable engineering project for its time, and as work progressed it became obvious that the company was unable to raise the money it needed; in fact it never completed its network. E&WYUR promoted The H&BR thought it essential to get access to Leeds for passenger traffic from Hull, and it encouraged promotion of a line from Drax, on its own main line, to Ardsley on the Great Northern Railway. The line was to cross the Midland Railway at Woodlesford and make a connection there, and also connect a colliery network at Rothwell, four miles south-east of Leeds. At Ardsley, it was hoped, running powers over the Great Northern Railway would be granted to Leeds and Bradford. The Woodlesford connection would allow for an alternative route to Leeds. Rothwell was the focus of a group of collieries owned by Henry Charlesworth of J & J Charlesworth & Co Limited; high-quality stone was quarried locally as well. The Charlesworth group of collieries had connections to railways, but there was considerable attraction for them in a new direct railway to Hull docks. The result was the Parliamentary authorisation on 2 August 1883 of the East and West Yorkshire Union Railway. The hoped-for running powers over the Midland Railway and the Great Northern Railway were refused. The main line from Drax was an ambitious project of about 30 miles of railway across difficult terrain. The E&WYUR soon found that raising the considerable capital sum for its main line was impossible, and in 1886 it obtained an abandonment Act. 
The intended line to Drax was to be abandoned, but short branch lines to Lofthouse (near Ardsley) on the GNR and Woodlesford on the Midland Railway were added (or modified), so that the proposed network was about nine miles in extent, including siding complexes. Running powers from Lofthouse to Leeds Central over the GNR were requested—passenger operation was still contemplated on the abbreviated system—but these were refused. Opening of a limited network By November 1890 the line was nearly ready at the Ardsley end, so that some coal could come from the Rothwell collieries over the E&WYUR onto the GNR. However, there was a dispute with the GNR over the junction, and it was only resolved when the GNR were given running powers on the E&WYUR. On 19 May 1891 the line was fully open as far as Rothwell. Robin Hood Colliery was a collection point for other collieries, and GNR engines worked to and from that point; the colliery engines moved the wagons beyond it. Considerable volumes of coal were now brought to the GNR for onward transit. Continuously short of money for construction purposes, the E&WYUR approached the GNR in February 1892, asking to be taken over. The GNR board was willing to consider this, but was cautious about the financial commitment it would be making, and referred the matter to a sub-committee. The issue became complicated: at this time Hunslet had become a major industrial and commercial growth area, and business interests put forward a new line to the north of the E&WYUR, from Beeston to Hunslet; this group approached the GNR for support. At the same time the E&WYUR proposed an extension of its own line to Hunslet, which was to be called the South Leeds Junction Railway. The GNR considered its position on these conflicting proposals, and decided that the line from Beeston would be better, being more direct and involving less property demolition. That line became the Hunslet Railway, opened in 1899 by the Great Northern Railway. In May 1892, takeover negotiations between the GNR and the E&WYUR broke down, as the E&WYUR wanted 4% on its share capital, a demand the GNR would not meet. The Beeston to Hunslet scheme proceeded separately, and had no further connection with the E&WYUR. Situation in 1900 An article in Railway Magazine in August 1900 describes some contemporary features: The situation of the railway made it a rather costly line to construct, the capital expenditure being up to the present time about £250,000... The Capital expenditure includes the many yards and sidings, the extent of which nearly equals the mileage of the railway. The principal traffic... is coal and stone. The traffic is derived from seven collieries and a number of quarries working the well known Robin Hood stone, which is sent all over England for window sills and stone work of that description... The railway is in a very flourishing condition financially, having paid 1 per cent dividend on its ordinary stock for several years past... South Leeds Junction Railway The E&WYUR proposal for the South Leeds Junction Railway was authorised on 24 August 1893, but in a much-modified form: it was to be a two-mile line from Rothwell to sidings at Stourton, two miles west of Woodlesford, alongside the Midland Railway line. This connected in more collieries, and it opened on 6 April 1895. Considerable volumes of coal came to the GNR off the line. The South Leeds Junction Railway was worked by the E&WYUR, and it was acquired by the E&WYUR company by an Act (59 & 60 Vict. c. xlii) of 2 July 1896. 
Branches A further branch, from Robin Hood to Royds Green Lower, was authorised on 14 December 1897 under the Light Railways Act 1896 (59 & 60 Vict. c. 48), only the second line to be treated in this way. A third branch was the Thorpe Branch, which was built in 1899. The final result was a railway that had numerous small branches serving collieries in a small area. A short extension was opened on 1 November 1903 from Stourton to Stourton Junction on the Midland Railway, providing a running line connection. Passenger trains Workmen's services were operated on a branch from Robin Hood to Royds Green Lower, sanctioned by the light railway order, and opened by 1898. A light railway order made on 7 June 1901 permitted the company to operate its main line as a light railway and build a new branch alongside the Pontefract-Leeds road as far as the tramway terminus at Thwaite Gate (at this stage not making a junction with the Midland Railway). The intention was to operate a passenger service in connection with the trams, but this scheme was abandoned when work started on a conventional street tramway linking Wakefield and Leeds via Stourton, with a branch to Robin Hood. Undaunted, the E&WYUR decided to compete directly with the trams and on 4 January 1904 introduced a passenger service between Robin Hood and Leeds Wellington station. It was a disastrous failure, losing £200 per month; Sunday services were withdrawn in August 1904, and the entire service was withdrawn from 1 October of the same year, only six weeks after the opening of the rival tram route. Suggitt remarks: The provision... of a separate passenger service on the E&WYUR made little sense. Horse drawn wagonettes already ran to Leeds and a scheme for electric trams to Wakefield had been approved in 1902. [To enable passenger operation,] a junction with the Midland main line at Stourton had to be built, the SLJR section of track doubled, modern signalling installed, and platforms built at the three stations of Robin Hood, Rothwell and Stourton. Operating methods The E&WYUR had an unusual operating system: no block working was used, and only the Royds Green Lower Branch used a staff and ticket method. All train movements had to be made on a siding basis, driving at sight. Proper signalling had to be provided for the passenger service. The 1900 Railway Magazine article includes the observation that "Although the railway is worked as a single line, a large portion of it is already doubled". The residual E&WYUR The E&WYUR had been constructed as a domestic Charlesworth network, and only limited attempts had been made to document land ownership. In a number of cases E&WYUR lines were built on Charlesworth property; the company had purchased land from Charlesworths in 1899, but Charlesworths retained certain reservations. For some time this did not matter, but the LNER later found that it was a problematic issue. The E&WYUR remained independent until the Grouping of the railways in 1923, when it was taken over by the London and North Eastern Railway. After 1923 The E&WYUR lines were all closed by 3 October 1966; the line east of Rothwell had been dormant since February 1962. Station list Robin Hood Rothwell Stourton Current condition A video series of walks along the old tracks was made in 2020, describing the history and current status of the lines, stations and pits the railway served. 
References Sources Donald J Grant, Directory of the Railway Companies of Great Britain, Matador Publishers, Kibworth Beauchamp, 2017. David Joy, A Regional History of the Railways of Great Britain: volume VIII: South and West Yorkshire, David & Charles, Newton Abbot, 1984. Gordon Suggitt, Lost Railways of South and West Yorkshire, Countryside Books, Newbury, 2007. John Wrottesley, The Great Northern Railway: volume II: Expansion and Competition, B T Batsford Limited, London, 1979. John Wrottesley, The Great Northern Railway: volume III: Twentieth Century to Grouping, B T Batsford Limited, London, 1981. Closed railway lines in Yorkshire and the Humber Early British railway companies Industrial railways in England Mining railways Leeds-related lists Rothwell, West Yorkshire Coal in England
East and West Yorkshire Union Railway
[ "Engineering" ]
2,379
[ "Mining equipment", "Mining railways" ]
61,862,734
https://en.wikipedia.org/wiki/Nubia%20Z20
The Nubia Z20 is an Android smartphone which was launched globally on 14 October 2019. It has two screens (one on each side of the phone) which can operate independently. Specifications Hardware and Design The Z20 has an aluminum and glass construction. It is powered by the Qualcomm Snapdragon 855+ CPU and the Adreno 640 GPU. An AMOLED panel is used for both displays, with a 6.42-inch (163 mm) 1080p 19.5:9 screen on the front and a smaller 5.1-inch (129.5 mm) 720p 19:9 screen on the back. Both are protected by Gorilla Glass 5 and support HDR10. It is available with 128 or 512 GB of non-expandable storage and 6 or 8 GB of RAM. Fingerprint sensors are located on both sides of the phone and are used to switch displays. The Z20 has a 4000 mAh battery and can fast charge at up to 27 W over USB-C. A triple camera setup is used, with a 48 MP main lens, a 16 MP ultrawide lens, and an 8 MP telephoto lens. The main lens has PDAF and OIS, with a red accent ring. It is capable of recording 1080p or 4K video at either 30 or 60 fps, and can also shoot 8K at 15 fps and 720p ultra slow-motion at 1920 fps. A dual-LED flash is located to the left of the camera module, with another single-LED flash to the right. The device is available in Twilight Blue and Diamond Black. Software The Z20 runs on Nubia UI 7, which is based on Android 9 Pie. References Android (operating system) devices Discontinued flagship smartphones Mobile phones introduced in 2019 Mobile phones with multiple rear cameras Mobile phones with 8K video recording
Nubia Z20
[ "Technology" ]
384
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
61,863,261
https://en.wikipedia.org/wiki/Stanley%20Robert%20Hart
Stanley Robert Hart (born 20 June 1935 in Swampscott, Massachusetts) is an American geologist, geochemist, leading international expert on mantle isotope geochemistry, and pioneer of chemical geodynamics. Biography Hart graduated from MIT with a bachelor's degree in geology in 1956 and earned a master's degree in geochemistry from Caltech in 1957. In 1960 he received his doctorate in geochemistry from MIT with the thesis Mineral ages and metamorphism, under the supervision of Patrick M. Hurley. After a year as a Carnegie Fellow, Hart worked from 1961 to 1975 in the Department of Terrestrial Magnetism of the Carnegie Institution in Washington, D.C. From 1975 to 1989 he was a professor of Earth, Atmospheric and Planetary Sciences at MIT and from 1989 to 1992 a visiting professor there. From 1989 to 2007 he was a Senior Scientist in geology and geophysics at Woods Hole Oceanographic Institution. He retired from Woods Hole in 2007 as Scientist Emeritus. Hart is a leading pioneer in the introduction of geochemistry into the Earth sciences. He developed comparative geochronology, which accounts for geological perturbations in various geochronometers. At the Carnegie Institution of Washington, he worked with George Wetherill, George Tilton, L. T. Aldrich, and G. L. Davis on mapping Precambrian rocks in the USA using comparative geochronology. There Hart became the leader of a group including Thomas Krogh, Albrecht Hofmann, Christopher Brooks, and others. According to Claude Allègre: Hart focused on the application of isotopic chemistry to age determination in geology, the geochemical evolution of mantle and oceanic lithosphere, and the geochemistry of strontium, neodymium, and lead isotopes in volcanic rocks. He also studied the long-term behavior of the chemical composition of the oceans due to their interaction with the oceanic crust and the experimental determination of fundamental geochemical properties such as mineral-melt partition coefficients in silicates and solid-state diffusion rates. In 1968, together with John S. Steinhart, he published the Steinhart-Hart equation, 1/T = A + B·ln(R) + C·(ln(R))³, which provides a mathematical model of how the temperature T and the electrical resistance R of a thermistor vary, based upon the three so-called Steinhart-Hart coefficients A, B and C. 
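The relation can be sketched directly in code. The coefficient values below are typical of a 10 kOhm NTC thermistor and are assumed purely for demonstration; they are not taken from Steinhart and Hart's paper.

```python
# Minimal sketch of the Steinhart-Hart equation:
#   1/T = A + B*ln(R) + C*(ln R)**3
import math

# Hypothetical coefficients, typical for a 10 kOhm NTC thermistor.
A, B, C = 1.125e-3, 2.347e-4, 8.566e-8

def thermistor_temperature_k(resistance_ohms: float) -> float:
    """Return temperature in kelvin for a measured resistance in ohms."""
    ln_r = math.log(resistance_ohms)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3)

# At 10 kOhm this thermistor reads close to room temperature (~298 K).
print(f"{thermistor_temperature_k(10_000.0):.1f} K")
```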
He was a co-editor from 1970 to 1972 of the Reviews of Geophysics, from 1970 to 1976 of the Geochimica et Cosmochimica Acta, and from 1975 to 1992 of Physics of the Earth and Planetary Interiors. In 1975/76 he chaired the US National Committee for Geochemistry. His doctoral students include Erik Hauri. Hart has three children: one daughter from his first marriage, which ended in divorce in 1978, and a son and a daughter from his second marriage, which began in 1980. Awards and honors 1983 — Member of the National Academy of Sciences 1985–1987 — President of the Geochemical Society 1992 — V. M. Goldschmidt Award, Geochemical Society 1997 — Harry Hess Medal, American Geophysical Union 2005 — Fellow of the American Academy of Arts and Sciences 2008 — Arthur L. Day Prize and Lectureship 2016 — William Bowie Medal Selected publications References 20th-century American geologists Massachusetts Institute of Technology School of Science alumni California Institute of Technology alumni Massachusetts Institute of Technology School of Science faculty Geochemists Members of the United States National Academy of Sciences Fellows of the American Academy of Arts and Sciences Fellows of the American Geophysical Union 1935 births Living people Presidents of the Geochemical Society Recipients of the V. M. Goldschmidt Award
Stanley Robert Hart
[ "Chemistry" ]
729
[ "Geochemists", "Presidents of the Geochemical Society", "Recipients of the V. M. Goldschmidt Award" ]
61,863,292
https://en.wikipedia.org/wiki/Viola%20Birss
Viola Ingrid Birss is a Professor of Chemistry at the University of Calgary and has been the holder of a Tier 1 Canada Research Chair in Fuel Cells and Related Clean Energy Systems for two 7-year terms. She works on electrochemical and nanomaterial technologies to advance clean energy and environmental applications. A prolific scientist, she has over 350 refereed scientific publications, has supervised over 200 undergraduate, graduate and post-doctoral students, and is an avid advocate for equity, diversity and inclusion (EDI), specifically in the attraction and retention of women in science and engineering. Early life and education Birss grew up in Crowsnest Pass, Alberta. She moved to Calgary at the age of ten. When she was deciding what to study at college, she felt that physics was "too abstract" and biology "too descriptive", so settled on chemistry. Having grown up with the wilderness close to her home, Birss was always aware of the environment and interested in identifying clean ways of storing, converting and using energy. This attracted her to materials science and electrochemistry. Birss earned her doctorate at the University of Auckland as a Commonwealth Scholar, where she studied the electrochemistry of metal halide and metal sulfide monolayers and thin films on silver electrodes. Her doctoral thesis was titled Electrochemical studies of anodic films on silver. She was a postdoctoral research scientist at the University of Ottawa, where she worked on the supercapacitive properties of hydrous metal oxides, specializing in studies of Ru oxide. Research and career Birss began her independent career at Alcan International, where she helped develop techniques to evaluate the susceptibility of aluminum alloys to stress corrosion and pitting. Her work there included efforts to understand how to stabilize and protect a high-strength, corrosion-resistant Al-Mg-Si alloy. She moved to the University of Calgary in 1983, where she was an Assistant Professor until 1987 and an Associate Professor until 1991, when she was promoted to Full Professor. Birss prepares, characterizes and optimizes nanomaterials for a range of different electrochemical applications, including fuel cells, electrolysis cells, batteries, capacitors and sensors. In her earlier work in Calgary, Birss and her team focused on understanding and modifying the electrochemical, chemical, physical and morphological properties of thin films on electrode surfaces, ranging from conducting polymers to a range of redox-active, hydrous metal oxides. In 2002, she was a founder and leader of the Western Canada Fuel Cell Initiative, which included over 35 research groups at eight institutions and was supported by $2 million of funding under Birss' leadership. She subsequently co-founded the pan-Canadian Solid Oxide Fuel Cells Canada NSERC Research Network, an umbrella organization for groups working on solid oxide fuel cells. This five-year network, which involved over 16 research groups at eight universities across Canada as well as government and industry partners, focused mostly on the development of anodes that resist both sulfur contaminant poisoning and coking when operated on hydrogen from natural gas. Birss became a Tier 1 Canada Research Chair in Fuel Cells at the University of Calgary in 2004, holding the chair for two 7-year terms. 
The majority of her efforts as a CRC were focused on solid oxide fuel cells (SOFCs) and proton-exchange membrane fuel cells (PEMFCs), carbon nanomaterials, and electrochemical biological sensing. Some of her main contributions have involved determining the kinetics and mechanisms of oxidation and reduction reactions in fuel cells using electrochemical methods, as well as developing new fuel cell materials. Her team improved the performance and lifetime of low temperature PEMFCs through the development of ordered nanoporous carbon powders as well as self-supported, nanoporous carbon scaffolds. For use in high temperature solid oxide cells, Birss has further developed a family of metal oxide perovskite catalysts that can be used as both the anode and cathode in both solid oxide fuel cells and solid oxide electrolysis cells, catalyzing carbon dioxide splitting, water splitting, hydrogen and carbon monoxide oxidation, and oxygen reduction. Other areas of research have included the development of core-shell nanoparticles, protective coatings and other novel strategies to combat the corrosion of metals, as well as selective and sensitive electrochemical biosensors for the detection of pathogens. Birss is currently the Scientific Director of CAESR-Tech (Calgary Advanced Energy Storage and Conversion Research Technologies), a large cluster of scientists and engineers at the University of Calgary focused on electrochemical technologies, including electrolysis cells, fuel cells, a variety of batteries and electrochemical capacitors, as well as electricity management and life-cycle assessment (LCA). The CAESR-Tech cluster then spawned the ME2 NSERC CREATE student training center. Birss currently also serves as the Co-Lead of the Electrolysis Theme of HyPT (Hydrogen Production Technologies), a Global Research Center. Awards and honours Her awards and honours include; 2021 Fellow, Royal Society of the UK 2019 Peak Scholar, University of Calgary 2018 Killam Research Excellence Award 2018 Order of the University of Calgary 2017 David Grahame Award, Electrochemical Society Inc. 2016 Highlighted in 'Successful Women Ceramic and Glass Scientists and Engineers', p. 13-18, edited by Lynnette D. Madsen, Wiley 2014 Honorary Professor, University of Science and Technology, Beijing 2014 China Distinguished Materials Scientist, Univ. of Science and Technology Beijing 2012 Featured in U of Calgary promotional video ('Eyes High – Reaching the Community') 2011 Fellow, Royal Society of Canada 2010 Finalist for Outstanding Leadership in Alberta Technology (ASTECH Award) 2007 Awardee (Women's Resource Center, U. of Calgary) for outstanding achievements as a research scientist, student supervisor, and mentor 2007 Fellow, Electrochemical Society Inc. 2006 Top 40 Alumni in the last 40 years, University of Calgary, 2006 2004 - 2018 Tier I CRC, Fuel Cells and Related Energy Applications 2005 NSF ADVANCE Distinguished Lectureship, Cleveland, Ohio 2003-2005 Honeywell Foundation Research Award 2002 Killam Resident Research Fellowship, University of Calgary 1998 CIC Lecture Award, University of Sherbrooke, Quebec 1996 Fellow, Canadian Society for Chemistry 1995 Faculty of Science, University of Calgary, Excellence in Research Award 1994 YWCA Woman of Distinction Award in Science and Technology, Calgary 1993 C. Benson Award, Canadian Society for Chemistry, Inaugural Recipient 1986 W. Lash Miller Award in Electrochemistry, Electrochemical Soc. Inc. (Canadian Section) 1985 Electrochemical Society W. 
Lash Miller Award She is a Fellow of the Royal Society (UK), the Royal Society of Canada, the Chemical Institute of Canada and the Electrochemical Society. Selected publications Birss serves as associate editor of the Journal of Materials Chemistry A. References Canadian women chemists Canadian women academics Academic staff of the University of Calgary Academic staff of the University of Ottawa University of Auckland alumni Canadian materials scientists Women materials scientists and engineers Living people Year of birth missing (living people) 21st-century Canadian women scientists 21st-century Canadian chemists
Viola Birss
[ "Materials_science", "Technology" ]
1,469
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]