Dataset schema (per-column types and observed value/length ranges):

id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
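The rows below follow this schema. A minimal sketch of how such rows might be consumed, assuming a Hugging Face `datasets`-style loader; the dataset path used here is a hypothetical placeholder, since the actual dataset name is not given in this extract:

```python
from datasets import load_dataset

# Hypothetical dataset path; substitute the real one. The fields accessed
# below mirror the column summary above (id, url, text, source, categories,
# token_count, subcategories).
ds = load_dataset("example-org/wikipedia-categorized", split="train")

for row in ds.select(range(3)):
    print(row["id"], row["url"])
    print("  source:", row["source"])
    print("  categories:", row["categories"])        # list of 1 to 6 labels
    print("  subcategories:", row["subcategories"])  # list of 0 to 30 labels
    print("  token_count:", row["token_count"])
    print("  text preview:", row["text"][:80], "...")
```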
7,076,870
https://en.wikipedia.org/wiki/Myc
Myc is a family of regulator genes and proto-oncogenes that code for transcription factors. The Myc family consists of three related human genes: c-myc (MYC), l-myc (MYCL), and n-myc (MYCN). c-myc (also sometimes referred to as MYC) was the first gene to be discovered in this family, due to homology with the viral gene v-myc. In cancer, c-myc is often constitutively (persistently) expressed. This leads to the increased expression of many genes, some of which are involved in cell proliferation, contributing to the formation of cancer. A common human translocation involving c-myc is critical to the development of most cases of Burkitt lymphoma. Constitutive upregulation of Myc genes has also been observed in carcinoma of the cervix, colon, breast, lung and stomach. Myc is thus viewed as a promising target for anti-cancer drugs. Unfortunately, Myc possesses several features that have rendered it difficult to drug to date, such that any anti-cancer drugs aimed at inhibiting Myc may continue to require perturbing the protein indirectly, such as by targeting the mRNA for the protein rather than via a small molecule that targets the protein itself. c-Myc also plays an important role in stem cell biology and was one of the original Yamanaka factors used to reprogram somatic cells into induced pluripotent stem cells. In the human genome, c-myc is located on chromosome 8 and is believed to regulate expression of 15% of all genes through binding on enhancer box sequences (E-boxes). In addition to its role as a classical transcription factor, N-myc may recruit histone acetyltransferases (HATs). This allows it to regulate global chromatin structure via histone acetylation.

Discovery

The Myc family was first established after discovery of homology between an oncogene carried by the avian myelocytomatosis virus (v-myc) and a human gene over-expressed in various cancers, cellular Myc (c-Myc). Later, discovery of further homologous genes in humans led to the addition of n-Myc and l-Myc to the family of genes. The most frequently discussed example of c-Myc as a proto-oncogene is its implication in Burkitt's lymphoma. In Burkitt's lymphoma, cancer cells show chromosomal translocations, most commonly between chromosome 8 and chromosome 14 [t(8;14)]. This causes c-Myc to be placed downstream of the highly active immunoglobulin (Ig) promoter region, leading to overexpression of Myc.

Structure

The protein products of Myc family genes all belong to the Myc family of transcription factors, which contain bHLH (basic helix-loop-helix) and LZ (leucine zipper) structural motifs. The bHLH motif allows Myc proteins to bind with DNA, while the leucine zipper motif allows dimerization with Max, another bHLH transcription factor. Myc mRNA contains an IRES (internal ribosome entry site) that allows the RNA to be translated into protein when 5' cap-dependent translation is inhibited, such as during viral infection.

Function

Myc proteins are transcription factors that activate expression of many pro-proliferative genes through binding enhancer box sequences (E-boxes) and recruiting histone acetyltransferases (HATs). Myc is thought to function by upregulating transcript elongation of actively transcribed genes through the recruitment of transcriptional elongation factors. It can also act as a transcriptional repressor. By binding the Miz-1 transcription factor and displacing the p300 co-activator, it inhibits expression of Miz-1 target genes. In addition, Myc has a direct role in the control of DNA replication.
This activity could contribute to DNA amplification in cancer cells. Myc is activated upon various mitogenic signals such as serum stimulation or by Wnt, Shh and EGF (via the MAPK/ERK pathway). By modifying the expression of its target genes, Myc activation results in numerous biological effects. The first to be discovered was its capability to drive cell proliferation (upregulates cyclins, downregulates p21), but it also plays a very important role in regulating cell growth (upregulates ribosomal RNA and proteins), apoptosis (downregulates Bcl-2), differentiation, and stem cell self-renewal. Myc also upregulates nucleotide metabolism genes, which are necessary for Myc-induced proliferation and cell growth. Several studies have clearly indicated Myc's role in cell competition. A major effect of c-myc is B cell proliferation, and gain of MYC has been associated with B cell malignancies and their increased aggressiveness, including histological transformation. In B cells, Myc acts as a classical oncogene by regulating a number of pro-proliferative and anti-apoptotic pathways; this also includes tuning of BCR signaling and CD40 signaling in regulation of microRNAs (miR-29, miR-150, miR-17-92). c-Myc induces MTDH (AEG-1) gene expression and in turn requires the AEG-1 oncogene for its own expression.

Myc-nick

Myc-nick is a cytoplasmic form of Myc produced by partial proteolytic cleavage of full-length c-Myc and N-Myc. Myc cleavage is mediated by the calpain family of calcium-dependent cytosolic proteases. The cleavage of Myc by calpains is a constitutive process but is enhanced under conditions that require rapid downregulation of Myc levels, such as during terminal differentiation. Upon cleavage, the C-terminus of Myc (containing the DNA binding domain) is degraded, while Myc-nick, the 298-residue N-terminal segment, remains in the cytoplasm. Myc-nick contains binding domains for histone acetyltransferases and for ubiquitin ligases. The functions of Myc-nick are currently under investigation, but this Myc family member was found to regulate cell morphology, at least in part, by interacting with acetyltransferases to promote the acetylation of α-tubulin. Ectopic expression of Myc-nick accelerates the differentiation of committed myoblasts into muscle cells.

Clinical significance

A large body of evidence shows that Myc genes and proteins are highly relevant for treating tumors. Except for early response genes, Myc universally upregulates gene expression. Furthermore, the upregulation is nonlinear: genes whose expression is already significantly upregulated in the absence of Myc are strongly boosted in the presence of Myc, whereas genes whose expression is low in the absence of Myc get only a small boost when Myc is present. Inactivation of SUMO-activating enzyme (SAE1/SAE2) in the presence of Myc hyperactivation results in mitotic catastrophe and cell death in cancer cells. Hence inhibitors of SUMOylation may be a possible treatment for cancer. Amplification of the MYC gene was found in a significant number of epithelial ovarian cancer cases. In TCGA datasets, amplification of Myc occurs in several cancer types, including breast, colorectal, pancreatic, gastric, and uterine cancers. In the experimental transformation process of normal cells into cancer cells, the MYC gene can cooperate with the RAS gene. Expression of Myc is highly dependent on BRD4 function in some cancers.
BET inhibitors have been used to successfully block Myc function in pre-clinical cancer models and are currently being evaluated in clinical trials. MYC expression is controlled by a wide variety of noncoding RNAs, including miRNA, lncRNA, and circRNA. Some of these RNAs have been shown to be specific for certain types of human tissues and tumors. Changes in the expression of such RNAs can potentially be used to develop targeted tumor therapy.

MYC rearrangements

MYC chromosomal rearrangements (MYC-R) occur in 10% to 15% of diffuse large B-cell lymphomas (DLBCLs), an aggressive non-Hodgkin lymphoma (NHL). Patients with MYC-R have inferior outcomes and can be classified as single-hit, when they have only MYC-R; as double-hit, when the rearrangement is accompanied by a translocation of BCL2 or BCL6; and as triple-hit, when MYC-R is accompanied by translocations of both BCL2 and BCL6. Double- and triple-hit lymphomas have recently been classified as high-grade B-cell lymphoma (HGBCL), which is associated with a poor prognosis. MYC-R in DLBCL/HGBCL is believed to arise through the aberrant activity of activation-induced cytidine deaminase (AICDA), which facilitates somatic hypermutation (SHM) and class-switch recombination (CSR). Although AICDA primarily targets IG loci for SHM and CSR, its off-target mutagenic effects can impact lymphoma-associated oncogenes like MYC, potentially leading to oncogenic rearrangements. The breakpoints in MYC rearrangements show considerable variability within the MYC region. These breakpoints may occur within the so-called "genic cluster," a region spanning approximately 1.5 kb upstream of the transcription start site, as well as the first exon and intron of MYC. Fluorescence in situ hybridization (FISH) has become routine practice in many clinical laboratories for lymphoma characterization. A break-apart (BAP) FISH probe is commonly utilized for the detection of MYC-R due to the variability of breakpoints in the MYC locus and the diversity of rearrangement partners, including immunoglobulin (IG) and non-IG partners (i.e. BCL2/BCL6). The MYC BAP probe includes a red and a green probe, which hybridize 5' and 3' to the MYC gene, respectively. In an intact MYC locus, these probes yield a fusion signal. When a MYC-R occurs, two types of signals can be observed: balanced patterns, which present separate red and green signals; and unbalanced patterns, in which an isolated red or green signal is observed in the absence of the corresponding green or red signal. Unbalanced MYC-R are frequently associated with increased MYC expression. There is large variability in the interpretation of unbalanced MYC BAP results among scientists, which can impact diagnostic classification and therapeutic management of patients.

Animal models

In Drosophila, Myc is encoded by the diminutive locus (which was known to geneticists prior to 1935). Classical diminutive alleles resulted in a viable animal with small body size. Drosophila has subsequently been used to implicate Myc in cell competition, endoreplication, and cell growth. During the discovery of the Myc gene, it was realized that chromosomes that reciprocally translocate to chromosome 8 contained immunoglobulin genes at the break-point. To study the mechanism of tumorigenesis in Burkitt lymphoma by mimicking the expression pattern of Myc in these cancer cells, transgenic mouse models were developed. The Myc gene placed under the control of the IgM heavy chain enhancer in transgenic mice gives rise mainly to lymphomas.
Later on, in order to study the effects of Myc in other types of cancer, transgenic mice that overexpress Myc in different tissues (liver, breast) were also made. In all these mouse models overexpression of Myc causes tumorigenesis, illustrating the potency of the Myc oncogene. In a study with mice, reduced expression of Myc was shown to induce longevity: the mice had significantly extended median and maximum lifespans in both sexes, a reduced mortality rate across all ages, better health, slower cancer progression, better metabolism, and smaller bodies. They also showed lower TOR, AKT and S6K activity and other changes in energy and metabolic pathways (such as increased AMPK activity, higher oxygen consumption, and more body movement). The study by John M. Sedivy and others used Cre-loxP recombination to knock out one copy of Myc, which resulted in a haploinsufficient genotype noted as Myc+/-. The phenotypes seen oppose the effects of normal aging and are shared with many other long-lived mouse models, such as calorie restriction (CR), Ames dwarf, rapamycin, metformin and resveratrol. One study found that the Myc and p53 genes were key to the survival of chronic myeloid leukaemia (CML) cells. Targeting Myc and p53 proteins with drugs gave positive results in mice with CML.

Relationship to stem cells

Myc genes play a number of normal roles in stem cells, including pluripotent stem cells. In neural stem cells, N-Myc promotes a rapidly proliferative stem cell and precursor-like state in the developing brain, while inhibiting differentiation. In hematopoietic stem cells, Myc controls the balance between self-renewal and differentiation. In particular, long-term hematopoietic stem cells (LT-HSCs) express low levels of c-Myc, ensuring self-renewal. Enforced expression of c-Myc in LT-HSCs promotes differentiation at the expense of self-renewal, resulting in stem cell exhaustion. In pathological states, and specifically in acute myeloid leukemia, oxidant stress can trigger higher levels of Myc expression that affect the behavior of leukemia stem cells. c-Myc plays a major role in the generation of induced pluripotent stem cells (iPSCs). It is one of the original factors discovered by Yamanaka et al. to encourage cells to return to a 'stem-like' state, alongside the transcription factors Oct4, Sox2 and Klf4. It has since been shown that it is possible to generate iPSCs without c-Myc.

Interactions

Myc has been shown to interact with: ACTL6A, BRCA1, Bcl-2, Cyclin T1, CHD8, DNMT3A, EP400, GTF2I, HTATIP, let-7, MAPK1, MAPK8, MAX, MLH1, MYCBP2, MYCBP, NMI, NFYB, NFYC, P73, PCAF, PFDN5, RuvB-like 1, SAP130, SMAD2, SMAD3, SMARCA4, SMARCB1, SUPT3H, TIAM1, TADA2L, TAF9, TFAP2A, TRRAP, WDR5, YY1, ZBTB17 and C2orf16.

See also

Myc-tag C-myc mRNA

References Further reading External links InterPro signatures for the protein family The Myc Protein NCBI Human Myc protein Myc cancer gene Generating iPS Cells from MEFS through Forced Expression of Sox-2, Oct-4, c-Myc, and Klf4 Drosophila Myc - The Interactive Fly PDBe-KB provides an overview of all the structure information available in the PDB for Human Myc proto-oncogene protein Oncogenes Transcription factors Human proteins
Myc
[ "Chemistry", "Biology" ]
3,358
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
7,077,161
https://en.wikipedia.org/wiki/Sodium%20aurothiomalate
Sodium aurothiomalate (INN, known in the United States as gold sodium thiomalate) is a gold compound that is used for its immunosuppressive anti-rheumatic effects. Along with an orally administered gold salt, auranofin, it is one of only two gold compounds currently employed in modern medicine.

Medical uses

It is primarily given once or twice weekly by intramuscular injection for moderate to severe rheumatoid arthritis. It was also once used to treat tuberculosis, though later trials showed it to be harmful and ineffective for that purpose.

Adverse effects

Its most common side effects are digestive (mostly dyspepsia, mouth swelling, nausea, vomiting and taste disturbance), vasomotor (mostly flushing, fainting, dizziness, sweating, weakness, palpitations, shortness of breath and blurred vision) or dermatologic (usually itchiness, rash, local irritation near the injection site and hair loss) in nature, although conjunctivitis, blood dyscrasias, kidney damage, joint pain, muscle aches/pains and liver dysfunction are also common. Less commonly, it can cause gastrointestinal bleeding, dry mucous membranes and gingivitis. Rarely it can cause aplastic anaemia, ulcerative enterocolitis, difficulty swallowing, angiooedema, pneumonitis, pulmonary fibrosis, hepatotoxicity, cholestatic jaundice, peripheral neuropathy, Guillain–Barré syndrome, encephalopathy, encephalitis and photosensitivity.

Pharmacology

Its precise mechanism of action is unknown, but it is known to inhibit the synthesis of prostaglandins. It also modulates phagocytic cells and inhibits class II major histocompatibility complex-peptide interactions. It is also known to inhibit the following enzymes: acid phosphatase, beta-glucuronidase, elastase, cathepsin G, thrombin and microsomal prostaglandin E synthase-1.

History of use

Reports of favorable use of the compound were published in France in 1929 by Jacques Forestier. The use of gold salts was then a controversial treatment and was not immediately accepted by the international community. Success was found in the treatment of Raoul Dufy's joint pain by the use of gold salts in 1940; "(the treatment) brought in a few weeks such a spectacular sense of healing, that Dufy ... boasted of again having the ability to catch a tram on the move." Along with aurothioglucose, sodium aurothiomalate was discontinued in the United States, leaving auranofin as the only gold salt remaining on the U.S. market.

References Gold(I) compounds Antirheumatic products Metal-containing drugs Organic sodium salts Thiolates Gold–sulfur compounds French inventions Disease-modifying antirheumatic drugs
Sodium aurothiomalate
[ "Chemistry" ]
621
[ "Organic sodium salts", "Thiolates", "Functional groups", "Salts" ]
7,077,416
https://en.wikipedia.org/wiki/Avidity
In biochemistry, avidity refers to the accumulated strength of multiple affinities of individual non-covalent binding interactions, such as between a protein receptor and its ligand, and is commonly referred to as functional affinity. Avidity differs from affinity, which describes the strength of a single interaction. However, because individual binding events increase the likelihood of occurrence of other interactions (i.e., increase the local concentration of each binding partner in proximity to the binding site), avidity should not be thought of as the mere sum of its constituent affinities but as the combined effect of all affinities participating in the biomolecular interaction. A particularly important aspect relates to the phenomenon of 'avidity entropy'. Biomolecules often form heterogeneous complexes or homogeneous oligomers and multimers or polymers. If clustered proteins form an organized matrix, such as the clathrin coat, the interaction is described as a matricity.

Antibody-antigen interaction

Avidity is commonly applied to antibody interactions in which multiple antigen-binding sites simultaneously interact with the target antigenic epitopes, often in multimerized structures. Individually, each binding interaction may be readily broken; however, when many binding interactions are present at the same time, transient unbinding of a single site does not allow the molecule to diffuse away, and binding of that weak interaction is likely to be restored. Each antibody has at least two antigen-binding sites, therefore antibodies are bivalent to multivalent. Avidity (functional affinity) is the accumulated strength of multiple affinities. For example, IgM is said to have low affinity but high avidity because it has 10 weak binding sites for antigen, as opposed to the 2 stronger binding sites of IgG, IgE and IgD with higher single binding affinities.

Affinity

Binding affinity is a measure of dynamic equilibrium of the ratio of on-rate (kon) and off-rate (koff) under specific concentrations of reactants. The affinity constant, Ka, is the inverse of the dissociation constant, Kd. The strength of complex formation in solution is related to the stability constants of complexes; however, in the case of large biomolecules, such as receptor-ligand pairs, their interaction is also dependent on other structural and thermodynamic properties of the reactants, plus their orientation and immobilization. Several methods exist to investigate protein–protein interactions, differing in the immobilization of each reactant in a 2D or 3D orientation. The measured affinities are stored in public databases, such as the Ki Database and BindingDB. As an example, affinity is the binding strength between the complex structures of the epitope of an antigenic determinant and the paratope of the antigen-binding site of an antibody. Participating non-covalent interactions may include hydrogen bonds, electrostatic bonds, van der Waals forces and hydrophobic effects. Calculation of binding affinity for a bimolecular reaction (one antibody binding site per antigen):

[Ab] + [Ag] <=> [AbAg]

where [Ab] is the antibody concentration and [Ag] is the antigen concentration, in either the free ([Ab], [Ag]) or the bound ([AbAg]) state.
Calculation of the association constant (or equilibrium constant):

Ka = [AbAg] / ([Ab] × [Ag])

Calculation of the dissociation constant:

Kd = ([Ab] × [Ag]) / [AbAg]

Application

Avidity tests for rubella virus, Toxoplasma gondii, cytomegalovirus (CMV), varicella zoster virus, human immunodeficiency virus (HIV), hepatitis viruses, Epstein–Barr virus, and others have been developed. These tests help to distinguish acute, recurrent or past infection by the avidity of marker-specific IgG. Currently there are two avidity assays in use: the well-known chaotropic (conventional) assay and the more recently developed AVIcomp (avidity competition) assay. A number of technologies exist to characterise the avidity of molecular interactions, including switchSENSE and surface plasmon resonance.

See also

Amino acid residue Epitope Fab region Hapten

References Further reading External links Biophysics Protein structure
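A quick numerical illustration of the Ka/Kd relations from the Affinity section above; the rate constants and concentrations are hypothetical placeholder values, not measurements for any particular antibody:

```python
# Illustrative example of the affinity relations described above:
# Ka = kon / koff, Kd = 1 / Ka, and at equilibrium Ka = [AbAg] / ([Ab][Ag]).
# All numeric values below are hypothetical.

k_on = 1.0e5    # association rate constant, 1/(M*s)
k_off = 1.0e-3  # dissociation rate constant, 1/s

K_a = k_on / k_off   # association (equilibrium) constant, 1/M
K_d = 1.0 / K_a      # dissociation constant, M

print(f"Ka = {K_a:.2e} 1/M")  # 1.00e+08 1/M
print(f"Kd = {K_d:.2e} M")    # 1.00e-08 M, i.e. 10 nM

# Consistency check against the equilibrium form Ka = [AbAg] / ([Ab][Ag]):
ab_free, ag_free = 2.0e-9, 5.0e-9   # free concentrations, M (hypothetical)
abag = K_a * ab_free * ag_free      # bound complex at equilibrium, M
assert abs(abag / (ab_free * ag_free) - K_a) < 1e-6 * K_a
```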
Avidity
[ "Physics", "Chemistry", "Biology" ]
861
[ "Structural biology", "Applied and interdisciplinary physics", "Biophysics", "Protein structure" ]
7,078,040
https://en.wikipedia.org/wiki/Fredrik%20Idestam
Knut Fredrik Idestam (28 October 1838, Tyrväntö, Grand Duchy of Finland – 8 April 1916, Helsinki, Grand Duchy of Finland) was a Finnish mining engineer and businessman, best known as a founder of Nokia. In May 1865, Idestam obtained a permit to construct a groundwood paper mill at Tampere, Finland. The mill began operations in 1866. In 1871, Idestam and Leo Mechelin founded Nokia Ltd. and moved the company's operations to the city of Nokia, Finland. He was buried in the Hietaniemi Cemetery in Helsinki. Notes External links Tapio Helen, Fredrik Idestam (1838–1916), National Biography of Finland, Finnish Historical Society 1838 births 1916 deaths People from Häme Province (Grand Duchy of Finland) Engineers from the Russian Empire Swedish-speaking Finns Mining engineers Nokia people Burials at Hietaniemi Cemetery 19th-century Finnish businesspeople Businesspeople from the Grand Duchy of Finland
Fredrik Idestam
[ "Engineering" ]
200
[ "Mining engineering", "Mining engineers" ]
8,565,964
https://en.wikipedia.org/wiki/Lithium%20hexafluorophosphate
Lithium hexafluorophosphate is an inorganic compound with the formula LiPF6. It is a white crystalline powder.

Production

LiPF6 is manufactured by reacting phosphorus pentachloride with hydrogen fluoride and lithium fluoride:

PCl5 + LiF + 5 HF → LiPF6 + 5 HCl

Suppliers include Targray and Morita Chemical Industries Co., Ltd.

Chemistry

The salt is relatively stable thermally, but loses 50% of its weight at 200 °C (392 °F). It hydrolyzes near 70 °C (158 °F) according to the following equation, forming highly toxic HF gas:

LiPF6 + 4 H2O → LiF + 5 HF + H3PO4

Owing to the Lewis acidity of the Li+ ion, LiPF6 also catalyses the tetrahydropyranylation of tertiary alcohols. In lithium-ion batteries, LiPF6 reacts with Li2CO3, a reaction that may be catalysed by small amounts of HF:

LiPF6 + Li2CO3 → POF3 + CO2 + 3 LiF

Application

The main use of LiPF6 is in commercial secondary batteries, an application that exploits its high solubility in polar aprotic solvents. Specifically, solutions of lithium hexafluorophosphate in carbonate blends of ethylene carbonate, dimethyl carbonate, diethyl carbonate and/or ethyl methyl carbonate, with a small amount of one or more additives such as fluoroethylene carbonate and vinylene carbonate, serve as state-of-the-art electrolytes in lithium-ion batteries. This application takes advantage of the inertness of the hexafluorophosphate anion toward strong reducing agents, such as lithium metal, as well as of the ability of [PF6]− to passivate the positive aluminium current collector.

References Lithium salts Hexafluorophosphates Electrolytes
Lithium hexafluorophosphate
[ "Chemistry" ]
406
[ "Electrochemistry", "Lithium salts", "Electrolytes", "Salts" ]
8,566,056
https://en.wikipedia.org/wiki/Chain%20rule%20for%20Kolmogorov%20complexity
The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states:

H(X,Y) = H(X) + H(Y|X)

That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once we know X. This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability:

P(X,Y) = P(X) P(Y|X)

The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term:

K(x,y) = K(x) + K(y|x) + O(log(K(x,y)))

(An exact version, KP(x,y) = KP(x) + KP(y|x*) + O(1), holds for the prefix complexity KP, where x* is a shortest program for x.) It states that the shortest program printing X and Y is obtained by concatenating a shortest program printing X with a program printing Y given X, plus at most a logarithmic factor. This result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric: K(x) − K(x|y) = K(y) − K(y|x) + O(log(K(x,y))) for all x, y.

Proof

The ≤ direction is obvious: we can write a program to produce x and y by concatenating a program to produce x, a program to produce y given access to x, and (whence the log term) the length of one of the programs, so that we know where to separate the two programs for x and for y given x (log(K(x,y)) upper-bounds this length).

For the ≥ direction, it suffices to show that for all k, l such that k + l = K(x,y), we have that either K(x|k,l) ≤ k + O(1) or K(y|x,k,l) ≤ l + O(1). Consider the list (a1,b1), (a2,b2), ..., (ae,be) of all pairs produced by programs of length exactly K(x,y) [hence K(a,b) ≤ K(x,y) for every pair on the list]. Note that this list contains the pair (x,y), can be enumerated given k and l (by running all programs of length k + l in parallel), and has at most 2^K(x,y) elements (because there are at most 2^n programs of length n).

First, suppose that x appears fewer than 2^l times as a first element. We can specify y given x, k, l by enumerating (a1,b1), (a2,b2), ... and then selecting (x,y) in the sub-list of pairs (x,b). By assumption, the index of (x,y) in this sub-list is less than 2^l, and hence there is a program for y given x, k, l of length l + O(1). Now, suppose that x appears at least 2^l times as a first element. This can happen for at most 2^(K(x,y)−l) = 2^k different strings. These strings can be enumerated given k and l, and hence x can be specified by its index in this enumeration. The corresponding program for x has size k + O(1). Theorem proved.

References Computability theory Theory of computation Articles containing proofs
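For reference, the key identities above can be collected in display form; this restates the reconstructed formulas in the article's own notation:

```latex
\begin{align*}
H(X,Y)  &= H(X) + H(Y \mid X)                   && \text{(chain rule for entropy, exact)} \\
K(x,y)  &= K(x) + K(y \mid x) + O(\log K(x,y))  && \text{(plain Kolmogorov complexity)} \\
KP(x,y) &= KP(x) + KP(y \mid x^{*}) + O(1)      && \text{(prefix complexity; } x^{*} \text{ a shortest program for } x\text{)}
\end{align*}
```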
Chain rule for Kolmogorov complexity
[ "Mathematics", "Technology", "Engineering" ]
539
[ "Telecommunications engineering", "Applied mathematics", "Mathematical logic", "Computer science", "Information theory", "Computability theory", "Articles containing proofs" ]
8,566,947
https://en.wikipedia.org/wiki/Pacific%20Symposium%20on%20Biocomputing
The Pacific Symposium on Biocomputing (PSB) is an annual multidisciplinary scientific meeting co-founded in 1996 by Dr. Teri Klein, Dr. Lawrence Hunter and Sharon Surles. The conference is devoted to the presentation and discussion of research in the theory and application of computational methods for biology. Papers and presentations are peer reviewed and published. PSB brings together researchers from the US and the Asian Pacific nations to exchange research results and address open issues in all aspects of computational biology. PSB is a forum for the presentation of work in databases, algorithms, interfaces, visualization, modeling, and other computational methods, as applied to biological problems, with emphasis on applications in data-rich areas of molecular biology. The PSB aims for "critical mass" in sub-disciplines within biocomputing. For that reason, it is the only meeting whose sessions are defined dynamically each year in response to specific proposals. PSB sessions are organized by leaders in the emerging areas and are targeted to provide a forum for publication and discussion of research in topics within biocomputing. Since 2017 the Research Parasite Award has been announced and presented annually at the Symposium to recognize scientists who study previously published data in ways not anticipated by the researchers who first generated it. Since the 2019 award year, the Research Parasite Award has been supported in part by an endowment housed at the University of Pennsylvania. The Research Symbiont Award is another award presented annually at the Symposium to recognize exemplars in the practice of data sharing.

References External links Pacific Symposium on Biocomputing web site Biology conferences Computer science conferences
Pacific Symposium on Biocomputing
[ "Technology", "Biology" ]
327
[ "Computer science", "Computer science conferences", "Computational biology" ]
8,566,976
https://en.wikipedia.org/wiki/Years%20of%20potential%20life%20lost
Years of potential life lost (YPLL) or potential years of life lost (PYLL) is an estimate of the average years a person would have lived if they had not died prematurely. It is, therefore, a measure of premature mortality. As an alternative to death rates, it is a method that gives more weight to deaths that occur among younger people. An alternative is to consider the effects of both disability and premature death using disability-adjusted life years.

Calculation

To calculate the years of potential life lost, the analyst has to set an upper reference age. The reference age should correspond roughly to the life expectancy of the population under study. In the developed world, this is commonly set at age 75, but it is essentially arbitrary. Thus, PYLL should be written with respect to the reference age used in the calculation: e.g., PYLL[75]. PYLL can be calculated using individual-level data or using age-grouped data. Briefly, for the individual method, each person's PYLL is calculated by subtracting the person's age at death from the reference age. If a person is older than the reference age when they die, that person's PYLL is set to zero (i.e., there are no "negative" PYLLs). In effect, only those who die before the reference age are included in the calculation. Some examples (see the short code sketch later in this article):

Reference age = 75; age at death = 60; PYLL[75] = 75 − 60 = 15
Reference age = 75; age at death = 6 months; PYLL[75] = 75 − 0.5 = 74.5
Reference age = 75; age at death = 80; PYLL[75] = 0 (age at death greater than reference age)

To calculate the PYLL for a particular population in a particular year, the analyst sums the individual PYLLs for all individuals in that population who died in that year. This can be done for all-cause mortality or for cause-specific mortality.

Significance

In the developed world, mortality counts and rates tend to emphasise the most common causes of death in older people, because the risk of death increases with age. Because YPLL gives more weight to deaths among younger people, it is the favoured metric among those who wish to draw attention to those causes of death that are more common in younger people. Some researchers say that this measurement should be considered by governments when they decide how best to divide up scarce resources for research. For example, in most of the developed world, heart disease and cancer are the leading causes of death, as measured by the number (or rate) of deaths. For this reason, heart disease and cancer tend to get a lot of attention (and research funding). However, one might argue that everyone has to die of something eventually, and so public health efforts should be more explicitly directed at preventing premature death. When PYLL is used as an explicit measure of premature death, injuries and infectious diseases become more important. While injury and poisoning are the most common causes of death among people aged 5 to 40 in the developed world, relatively few young people die, so the principal causes of lost years remain cardiovascular disease and cancer.

By main cause of death in the United States of America

A study suggests the global "mean loss of life expectancy" (LLE) from all forms of direct violence was about 0.3 years, while air pollution accounted for about 2.9 years in 2015.
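A minimal sketch of the individual-level PYLL calculation described in the Calculation section; the ages are the three worked examples from the text plus their sum:

```python
def pyll(ages_at_death, reference_age=75):
    """Sum the years of potential life lost for a set of deaths.

    Deaths at or above the reference age contribute zero, as described
    above: there are no "negative" PYLLs.
    """
    return sum(max(reference_age - age, 0) for age in ages_at_death)

# The three worked examples from the text: 60 years, 6 months, 80 years.
print(pyll([60]))    # 15
print(pyll([0.5]))   # 74.5
print(pyll([80]))    # 0

# Population PYLL is just the sum over all deaths in the year:
print(pyll([60, 0.5, 80]))  # 89.5
```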
By country

Here is a table of YPLL for all causes (ages 0–69, per 100,000) with the most recent available data from the OECD:

Australia

The report of the NSW Chief Medical Officer in 2002 indicates that cardiovascular disease (32.7% of total male years of life lost due to premature mortality, and 36.6% of female YLL) and malignant neoplasms (27.5% of male YLL and 31.2% of female YLL) are the main causes of lost years. When disability-adjusted life years are considered, cancer (25.1/1,000), cardiovascular disease (23.8/1,000), mental health issues (17.6/1,000), neurological disorders (15.7/1,000), chronic respiratory disease (9.4/1,000) and diabetes (7.2/1,000) are the main causes of good years of expected life lost to disease or premature death. The dramatic difference is the greater number of years of disability caused by mental illness and neurological issues and by diabetes.

See also

Life-years lost Disability-adjusted life year Quality-adjusted life year

References Actuarial science Death Health policy Epidemiology
Years of potential life lost
[ "Mathematics", "Environmental_science" ]
976
[ "Epidemiology", "Applied mathematics", "Actuarial science", "Environmental social science" ]
8,567,154
https://en.wikipedia.org/wiki/Sabrage
Sabrage is a technique for opening a champagne bottle with a saber, used for ceremonial occasions. The wielder slides the saber along the body seam of the bottle to the lip to break the top of the neck away, leaving the neck of the bottle open and ready to pour. The force of the blade hitting the lip breaks the glass to separate the collar from the neck of the bottle. The cork and collar remain together after separating from the neck.

History

The technique became popular in France when the army of Napoleon visited many of the aristocratic domains. It was just after the French Revolution, and the saber was the weapon of choice for Napoleon's light cavalry (the Hussars). Napoleon's spectacular victories across all of Europe gave them plenty of reason to celebrate. During these parties the cavalry would open the champagne with their sabers. Napoleon, who was known to have said, "I drink champagne when I win, to celebrate... and I drink champagne when I lose, to console myself", may have encouraged this. There are many stories about this tradition. One of the more spirited tales is that of Madame Clicquot, who had inherited her husband's small champagne house at the age of 27. She used to entertain Napoleon's officers in her vineyard, and as they rode off in the early morning with their complimentary bottle of champagne, they would open it with their saber to impress the rich young widow.

Champagne sword

A champagne sword (sabre à champagne) is an instrument specially made for sabrage. Some swords have short blades, around long, and resemble large knives, although others have longer blades. The edges of the blade used should be blunt; a sharpened edge is unnecessary because in sabrage it is the impact that is important. If a sword with a sharpened blade is used, the flat blunt back of the blade is employed instead. A champagne bottle can be opened with a spoon, the edge of a modern mobile phone or other similar items using the same method. The bottle neck is held at an angle of approximately 20 degrees and the sword is cast down on it. An experienced sommelier can open the bottle with little loss of champagne. However, it is advised to allow a small flow in order to wash away any loose shards of glass that may be adhering to the neck. The first glass poured should also be checked for small glass shards.

Physics

A champagne bottle holds a considerable amount of pressure. With early designs, bottles tended to explode, and the manufacturers kept making them thicker until they could contain the pressure caused by the release of carbon dioxide during the secondary fermentation. The inside pressure of a typical champagne bottle is around . The diameter of the opening is , so there is a force of about trying to push the cork out of the bottle. At the opening of the bottle, there is a lip that creates a stress concentration. On the vertical seam of the bottle there is a thin, faintly visible, prepared seam, which creates a second stress concentration. At the intersection of the seam and the lip, both stress concentrations combine and the strength of the glass is reduced by more than fifty percent. The impact of the saber on this weak point creates a crack that rapidly propagates through the glass, fueled by the momentum of the saber and the pressure in the bottle. Once the crack has severed the top from the bottle, the pressure inside the bottle and the transferred momentum from the saber will send the top flying, typically for a distance of .
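The force estimate in the paragraph above is just pressure times the area of the opening; the article's own figures were lost in extraction, so the sketch below uses plausible placeholder values (roughly 6 bar internal pressure and an 18 mm opening), purely for illustration:

```python
import math

# Placeholder values (assumptions, not taken from the article): a typical
# champagne bottle holds roughly 6 bar, and the bore is about 18 mm wide.
pressure_pa = 6.0e5          # 6 bar expressed in pascals
opening_diameter_m = 0.018   # 18 mm

area_m2 = math.pi * (opening_diameter_m / 2) ** 2
force_n = pressure_pa * area_m2   # force on the cork = P * A

print(f"Opening area: {area_m2 * 1e6:.0f} mm^2")  # ~254 mm^2
print(f"Force on cork: {force_n:.0f} N")          # ~153 N, roughly 15 kg-force
```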
Records

The greatest number of champagne bottles sabered in one minute to be officially recognized by the Guinness Book of World Records is 68, achieved by Mirko Rainer (CH) at the Show Dei Record by Gerry Scotti in Milan, Italy, on 3 February 2023. Rainer used his own-design MRK-Sabre à Champagne. He beat the previous record of 66 held by Ashrita Furman, who managed this feat on 2 August 2015 at the Sri Chinmoy Centre, Jamaica, Queens. The greatest number of champagne bottles sabered simultaneously was 623; it was officially recognised as a world record on the occasion of the Sciabolata del Santero in Santo Stefano Belbo, Italy, in June 2016.

References External links Champagne sword article on Wired News Champagne (wine) Etiquette Sparkling wines Sabres Bartending equipment
Sabrage
[ "Biology" ]
869
[ "Etiquette", "Behavior", "Human behavior" ]
8,567,316
https://en.wikipedia.org/wiki/Cyclonic%20spray%20scrubber
Cyclonic spray scrubbers are an air pollution control technology. They use the features of both the dry cyclone and the spray chamber to remove pollutants from gas streams. Generally, the inlet gas enters the chamber tangentially, swirls through the chamber in a corkscrew motion, and exits. At the same time, liquid is sprayed inside the chamber. As the gas swirls around the chamber, pollutants are removed when they impact on liquid droplets, are thrown to the walls, and are washed back down and out. Cyclonic scrubbers are generally low- to medium-energy devices, with pressure drops of 4 to 25 cm (1.5 to 10 in) of water. Commercially available designs include the irrigated cyclone scrubber and the cyclonic spray scrubber. In the irrigated cyclone (Figure 1), the inlet gas enters near the top of the scrubber into the water sprays. The gas is forced to swirl downward, then change direction, and return upward in a tighter spiral. The liquid droplets produced capture the pollutants, are eventually thrown to the side walls, and are carried out of the collector. The "cleaned" gas leaves through the top of the chamber. The cyclonic spray scrubber (Figure 2) forces the inlet gas up through the chamber from a bottom tangential entry. Liquid sprayed from nozzles on a center post (manifold) is directed toward the chamber walls and through the swirling gas. As in the irrigated cyclone, liquid captures the pollutant, is forced to the walls, and washes out. The "cleaned" gas continues upward, exiting through the straightening vanes at the top of the chamber. This type of technology is part of the group of air pollution controls collectively referred to as wet scrubbers.

Particulate collection

Cyclonic spray scrubbers are more efficient than spray towers, but not as efficient as venturi scrubbers, in removing particulate from the inlet gas stream. Particulates larger than 5 μm are generally collected by impaction with 90% efficiency. In a simple spray tower, the velocity of the particulates in the gas stream is low: 0.6 to 1.5 m/s (2 to 5 ft/s). By introducing the inlet gas tangentially into the spray chamber, the cyclonic scrubber increases gas velocities (and thus particulate velocities) to approximately 60 to 180 m/s (200 to 600 ft/s). The velocity of the liquid spray is approximately the same in both devices. This higher particulate-to-liquid relative velocity increases the particulate collection efficiency of this device over that of the spray chamber. Gas velocities of 60 to 180 m/s are equivalent to those encountered in a venturi scrubber. However, cyclonic spray scrubbers are not as efficient as venturi scrubbers because they are not capable of producing the same degree of useful turbulence.

Gas collection

High gas velocities through these devices reduce the gas-liquid contact time, thus reducing absorption efficiency. Cyclonic spray scrubbers are capable of effectively removing some gases; however, they are rarely chosen when gaseous pollutant removal is the only concern.

Maintenance problems

The main maintenance problems with cyclonic scrubbers are nozzle plugging and corrosion or erosion of the side walls of the cyclone body. Nozzles have a tendency to plug from particulates that are in the recycled liquid and/or particulates that are in the gas stream. The best solution is to install the nozzles so that they are easily accessible for cleaning or removal. Due to high gas velocities, erosion of the side walls of the cyclone can also be a problem.
Abrasion-resistant materials may be used to protect the cyclone body, especially at the inlet.

Summary

The pressure drops across cyclonic scrubbers are usually 4 to 25 cm (1.5 to 10 in) of water; therefore, they are low- to medium-energy devices and are most often used to control large-sized particulates. Relatively simple devices, they resist plugging because of their open construction. They also have the additional advantage of acting as entrainment separators because of their shape: the liquid droplets are forced to the sides of the cyclone and removed prior to exiting the vessel. Their biggest disadvantages are that they are not capable of removing submicrometer particulates and that they do not efficiently absorb most pollutant gases. Table 1 lists typical operating characteristics of cyclonic scrubbers.

Bibliography

Bethea, R. M. 1978. Air Pollution Control Technology. New York: Van Nostrand Reinhold.
McIlvaine Company. 1974. The Wet Scrubber Handbook. Northbrook, IL: McIlvaine Company.
Richards, J. R. 1995. Control of Particulate Emissions (APTI Course 413). U.S. Environmental Protection Agency.
Richards, J. R. 1995. Control of Gaseous Emissions (APTI Course 415). U.S. Environmental Protection Agency.
U.S. Environmental Protection Agency. 1969. Control Techniques for Particulate Air Pollutants. AP-51.

References Pollution control technologies Air pollution control systems Wet scrubbers Liquid-phase and gas-phase contacting scrubbers
Cyclonic spray scrubber
[ "Chemistry", "Engineering" ]
1,094
[ "Scrubbers", "Wet scrubbers", "Pollution control technologies", "Environmental engineering" ]
8,567,475
https://en.wikipedia.org/wiki/Roller-compacted%20concrete
Roller-compacted concrete (RCC) or rolled concrete (rollcrete) is a special blend of concrete that has essentially the same ingredients as conventional concrete but in different ratios, and increasingly with partial substitution of fly ash for portland cement. The partial substitution of fly ash for portland cement is an important aspect of RCC dam construction, because the heat generated by fly ash hydration is significantly less than the heat generated by portland cement hydration. This in turn reduces the thermal loads on the concrete and reduces the potential for thermal cracking to occur. RCC is a mix of cement/fly ash, water, sand, aggregate and common additives, but contains much less water. The produced mix is drier and has essentially no slump. RCC is placed in a manner similar to road paving; the material is delivered by dump trucks or conveyors, spread by small bulldozers or specially modified asphalt pavers, and then compacted by vibratory rollers. In dam construction, roller-compacted concrete began its initial development with the construction of the Alpe Gera Dam near Sondrio in northern Italy between 1961 and 1964, where concrete was laid in a similar form and method but not rolled. RCC had been touted in engineering journals during the 1970s as a revolutionary material suitable for, among other things, dam construction. Initially and generally, RCC was used for backfill, sub-base and concrete pavement construction, but increasingly it has been used to build concrete gravity dams, because the low cement content and use of fly ash cause less heat to be generated while curing than do conventional mass concrete placements. Roller-compacted concrete has many time and cost benefits over conventional mass concrete dams; these include higher rates of concrete placement, lower material costs and lower costs associated with post-cooling and formwork.

Dam applications

For dam applications, RCC sections are built lift by lift in successive horizontal layers, resulting in a downstream slope that resembles a concrete staircase. Once a layer is placed, it can immediately support the earth-moving equipment used to place the next layer. After RCC is deposited on the lift surface, small dozers typically spread it in one-foot-thick (about 30 cm) layers. The first RCC dam built in the United States was the Willow Creek Dam on Willow Creek, a tributary in Oregon of the Columbia River. It was constructed by the US Army Corps of Engineers between November 1981 and February 1983. Construction proceeded well, within a fast schedule and under budget (estimated US$50 million, actual US$35 million). On initial filling, though, it was found that the leakage between the compacted layers within the dam body was unusually high. This condition was treated by traditional remedial grouting at a further cost of US$2 million, which initially reduced the leakage by nearly 75%; over the years, seepage has since decreased to less than 10% of its initial flow. Concern over the dam's long-term safety has continued, however, although it is only indirectly related to its RCC construction. Within a few years of construction, problems were noted with stratification of the reservoir water, caused by upstream pollution and anoxic decomposition, which produced hydrogen sulfide gas. Concerns were expressed that this could in turn give rise to sulfuric acid, and thus accelerate damage to the concrete. The controversy itself, as well as its handling, continued for some years.
In 2004 an aeration plant was installed to address the root cause in the reservoir, as had been suggested 18 years earlier. In the quarter century following the construction of the Willow Creek Dam, considerable research and experimentation yielded many improvements in concrete mix designs, dam designs and construction methods for roller-compacted concrete dams. By 2008, about 350 RCC dams existed worldwide. As of 2018, the highest dam of this type was the Gilgel Gibe III Dam in Ethiopia, at 250 m, with the Pakistani Diamer-Bhasha Dam under construction at 272 m.

See also

List of roller-compacted concrete dams Asphalt concrete

Further reading References External links History of Concrete Database of Worldwide Roller Compacted Concrete Dams Concrete Concrete buildings and structures Building materials
Roller-compacted concrete
[ "Physics", "Engineering" ]
842
[ "Structural engineering", "Building engineering", "Construction", "Materials", "Building materials", "Concrete", "Matter", "Architecture" ]
8,568,167
https://en.wikipedia.org/wiki/Micro%20Electronics%2C%20Inc.
Micro Electronics, Inc. (MEI) is an American privately held company headquartered in Hilliard, Ohio. Founded in 1979 by John Baker, it serves as the parent company of the computer retailer Micro Center, its online division Micro Center Online, and its brand iPSG, which houses PowerSpec PC, WinBook, and Inland (including Inland Premium for high-end SSDs). See also References Consumer electronics retailers of the United States Consumer electronics retailers Consumer electronics Computer companies of the United States Home computer hardware companies American companies established in 1979 Computer companies established in 1979 Computer hardware companies Computer systems companies Electronics companies established in 1979 Retail companies established in 1979 Online retailers of the United States Privately held companies based in Ohio 1979 establishments in Ohio Companies based in the Columbus, Ohio metropolitan area
Micro Electronics, Inc.
[ "Technology" ]
156
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
8,568,920
https://en.wikipedia.org/wiki/Dupuit%E2%80%93Forchheimer%20assumption
The Dupuit–Forchheimer assumption holds that groundwater flows horizontally in an unconfined aquifer and that the groundwater discharge is proportional to the saturated aquifer thickness. It was formulated by Jules Dupuit and Philipp Forchheimer in the late 1800s to simplify groundwater flow equations for analytical solutions. The Dupuit–Forchheimer assumption requires that the water table be relatively flat and that the groundwater be hydrostatic (that is, that the equipotential lines are vertical):

∂p/∂z = −γ = −ρg, so that ∂h/∂z = 0

where ∂p/∂z is the vertical pressure gradient, γ is the specific weight, ρ is the density of water, g is the standard gravity, and ∂h/∂z is the vertical hydraulic gradient.

References Aquifers Hydraulic engineering Hydrology
Dupuit–Forchheimer assumption
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
144
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Aquifers", "Environmental engineering", "Hydraulic engineering" ]
8,569,148
https://en.wikipedia.org/wiki/Cascade%20filling%20system
A cascade filling system is a high-pressure gas cylinder storage system that is used for the refilling of smaller compressed gas cylinders. In some applications, each of the large cylinders is filled by a compressor; otherwise they may be filled remotely and replaced when the pressure is too low for effective transfer. The cascade system allows small cylinders to be filled without a compressor. In addition, a cascade system is useful as a reservoir that allows a low-capacity compressor to meet the demand of filling several small cylinders in close succession, with longer intermediate periods during which the storage cylinders can be recharged.

Principle of operation

When gas in a cylinder at high pressure is allowed to flow into another cylinder containing gas at a lower pressure, the pressures will equalize to a value somewhere between the two initial pressures. The equilibrium pressure is affected by the transfer rate, as it is influenced by temperature, but at a constant temperature the equilibrium pressure is described by Dalton's law of partial pressures and Boyle's law for ideal gases. The formula for the equilibrium pressure is:

P3 = (P1 × V1 + P2 × V2) / (V1 + V2)

where P1 and V1 are the initial pressure and volume of one cylinder, P2 and V2 the initial pressure and volume of the other cylinder, and P3 is the equilibrium pressure. An example could be a 100-litre (internal volume) cylinder (V1) pressurised to 200 bar (P1) filling a 10-litre (internal volume) cylinder (V2) which was unpressurised (P2 = 1 bar), resulting in both cylinders equalising to approximately 180 bar (P3). If another 100-litre cylinder, this time pressurised to 250 bar, were then used to "top up" the 10-litre cylinder, both of these cylinders would equalise to about 240 bar. However, if the higher-pressure 100-litre cylinder were used first, the 10-litre cylinder would equalise to about 225 bar, and the lower-pressure 100-litre cylinder could not be used to top it up. In a cascade storage system, several large cylinders are used to bring a small cylinder up to the desired pressure by always using the supply cylinder with the lowest usable pressure first, then the cylinder with the next lowest pressure, and so on (see the sketch at the end of this article). In practice, the theoretical transfers can only be achieved if the gases are allowed to reach a temperature equilibrium before disconnection. This requires significant time, and a lower efficiency may be accepted to save time. The actual transfer can be calculated using the general gas equation of state if the temperature of the gas in the cylinder is accurately measured.

Uses

Breathing sets

A breathing set cylinder may be filled to its working pressure by decanting from larger (often 50-litre) cylinders. (To make this easy, the neck of the cylinder of the Siebe Gorman Salvus rebreather had the same thread as an oxygen storage cylinder, but the opposite gender, for direct decanting.) The storage cylinders are available in a variety of sizes, typically from 50 litres internal capacity to well over 100 litres. In the more general case, a high-pressure hose known as a filling whip is used to connect the filling panel or storage cylinder to the receiving cylinder. Cascade filling is often used for partial pressure blending of breathing gas mixtures for diving, to economize on the relatively expensive oxygen, for nitrox, and the even more expensive helium in trimix or heliox mixtures.

Compressed natural gas fueling

Cascade storage is used at compressed natural gas (CNG) fueling stations.
Typically three CNG tanks will be used, and a vehicle will first be fueled from one of them, which will result in an incomplete fill, perhaps to 2000 psig for a 3000 psig tank. The second and third tanks will bring the vehicle's tank closer to 3000 psig. The station normally has a compressor, which refills the station's tanks using natural gas from a utility line. This prevents accidental overfilling of the tank, which could happen with a system using a single fueling tank at a higher pressure than the target pressure for the vehicle.

Hydrogen storage

In cascade storage systems for hydrogen storage, for example at hydrogen stations, fuel dispenser A draws hydrogen from tank A, while dispenser B draws fuel from hydrogen tank B. If dispenser A is over-utilized, tank A will become depleted before tank B. At this point dispenser A is switched to tank C. Tank C will then supply dispensers A and B, as well as tank A, until tank A is filled to the same pressure as tank B and the dispensers are disconnected, after which the control system will close the control valves to switch back to its former state.

Arrangement of system

The storage cylinders may be used independently in sequence, using a portable transfer whip with a pressure gauge and manual bleed valve to transfill the receiving cylinder until the appropriate fill pressure has been reached, or the storage cylinders may be connected to a manifold system and a filling control panel with one or more filling whips. Ideally, each storage cylinder has an independent connection to the filling panel, with a contents pressure gauge and supply valve dedicated to that cylinder, and a filling gauge connected to the filling whip, so the operator can see at a glance the next higher storage cylinder pressure compared to the receiving cylinder pressure. The storage cylinders may be filled remotely and connected to the manifold by a flexible hose when in use, or may be permanently connected and refilled by a compressor through a dedicated filling system, which may be automated or manually controlled. An over-pressure safety valve is usually installed inline between the compressor and the storage units to protect the cylinders from overfilling, and each cylinder may also be protected by a rupture disc.

References External links Breathing gases Diving support equipment Gas technologies Hydrogen storage Pressure vessels
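A minimal sketch of the equalization formula and the lowest-pressure-first cascade order from the Principle of operation section, under the same ideal-gas, constant-temperature assumptions; the cylinder sizes follow the worked example above:

```python
def equalize(p1, v1, p2, v2):
    """Equilibrium pressure after connecting two cylinders (ideal gas,
    constant temperature): P3 = (P1*V1 + P2*V2) / (V1 + V2)."""
    return (p1 * v1 + p2 * v2) / (v1 + v2)

# Supply bank: two 100-litre cylinders at 200 and 250 bar, used
# lowest-pressure-first to fill an empty 10-litre cylinder (1 bar).
supply = [{"p": 200.0, "v": 100.0}, {"p": 250.0, "v": 100.0}]
target_p, target_v = 1.0, 10.0

for cyl in sorted(supply, key=lambda c: c["p"]):  # lowest usable pressure first
    if cyl["p"] <= target_p:
        continue  # this supply cylinder can no longer push gas into the target
    p3 = equalize(cyl["p"], cyl["v"], target_p, target_v)
    cyl["p"] = target_p = p3
    print(f"after {cyl['v']:.0f} L supply cylinder: {target_p:.0f} bar")
# Prints roughly 182 bar and then 244 bar, matching the text's ~180 / ~240.
```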
Cascade filling system
[ "Physics", "Chemistry", "Engineering" ]
1,203
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
8,569,325
https://en.wikipedia.org/wiki/Mishnat%20ha-Middot
The Mishnat ha-Middot (Hebrew for 'Treatise of Measures') is the earliest known Hebrew treatise on geometry, composed of 49 mishnayot in six chapters. Scholars have dated the work to either the Mishnaic period or the early Islamic era.

History

Date of composition

Moritz Steinschneider dated the Mishnat ha-Middot to between 800 and 1200 CE. Sarfatti and Langermann have advanced Steinschneider's claim of Arabic influence on the work's terminology, and date the text to the early ninth century. On the other hand, Hermann Schapira argued that the treatise dates from an earlier era, most likely the Mishnaic period, as its mathematical terminology differs from that of the Hebrew mathematicians of the Arab period. Solomon Gandz conjectured that the text was compiled no later than about 150 CE (possibly by Rabbi Nehemiah) and was intended to be a part of the Mishnah, but was excluded from its final canonical edition because the work was regarded as too secular. The content resembles both the work of Hero of Alexandria and that of al-Khwārizmī, and the proponents of the earlier dating therefore see the Mishnat ha-Middot as linking Greek and Islamic mathematics.

Modern history

The Mishnat ha-Middot was discovered in MS 36 of the Munich Library by Moritz Steinschneider in 1862. The manuscript, copied in Constantinople in 1480, goes as far as the end of Chapter V. According to the colophon, the copyist believed the text to be complete. Steinschneider published the work in 1864, in honour of the seventieth birthday of Leopold Zunz. The text was edited and published again by the mathematician Hermann Schapira in 1880. After the discovery by Otto Neugebauer of a genizah fragment in the Bodleian Library containing Chapter VI, Solomon Gandz published a complete version of the Mishnat ha-Middot in 1932, accompanied by a thorough philological analysis. A third manuscript of the work was found among uncatalogued material in the Archives of the Jewish Museum of Prague in 1965.

Contents

Although primarily a practical work, the Mishnat ha-Middot attempts to define terms and explain both geometric application and theory. The book begins with a discussion that defines "aspects" for the different kinds of plane figures (quadrilateral, triangle, circle, and segment of a circle) in Chapter I (§1–5), and with the basic principles of measurement of areas (§6–9). In Chapter II, the work introduces concise rules for the measurement of plane figures (§1–4), as well as a few problems in the calculation of volume (§5–12). In Chapters III–V, the Mishnat ha-Middot explains again in detail the measurement of the four types of plane figures, with reference to numerical examples. The text concludes with a discussion of the proportions of the Tabernacle in Chapter VI. The treatise argues against the common belief that the Tanakh defines the geometric ratio π as being exactly equal to 3, and defines it as 3 1/7 instead. The book arrives at this approximation by calculating the area of a circle according to the formulae A = d² − d²/7 − d²/14 and A = (11/14)d², where d is the diameter.

See also

Baraita of the Forty-nine Rules

References External links MS Heb. c. 18, Catalogue of the Genizah Fragments in the Bodleian Libraries. 15th-century manuscripts Bodleian Library collection Hebrew-language literature Hebrew manuscripts History of geometry History of mathematics Mathematics books Mathematics textbooks Mishnah Pi Works of unknown authorship
Mishnat ha-Middot
[ "Mathematics" ]
747
[ "History of geometry", "Pi", "Geometry" ]
8,569,383
https://en.wikipedia.org/wiki/Groundwater%20discharge
Groundwater discharge is the volumetric flow rate of groundwater through an aquifer. Total groundwater discharge, as reported through a specified area, is expressed as Q = K (dh/dl) A, where Q is the total groundwater discharge ([L³·T⁻¹]; m³/s), K is the hydraulic conductivity of the aquifer ([L·T⁻¹]; m/s), dh/dl is the hydraulic gradient ([L·L⁻¹]; unitless), and A is the area which the groundwater is flowing through ([L²]; m²). For example, this can be used to determine the flow rate of water flowing along a plane with known geometry (see the worked example below). The discharge potential The discharge potential is a potential in groundwater mechanics which links a physical property, the hydraulic head, with a mathematical formulation for the energy as a function of position. The discharge potential Φ [L³·T⁻¹] is defined in such a way that its gradient equals the discharge vector. Thus the hydraulic head may be calculated in terms of the discharge potential, for confined flow as Φ = KHh + C and for unconfined shallow flow as Φ = (K/2)h² + C, where H is the thickness of the aquifer [L], h is the hydraulic head [L], and C is an arbitrary constant [L³·T⁻¹] given by the boundary conditions. As mentioned, the discharge potential may also be written as a function of position. The discharge potential satisfies Laplace's equation, which is a linear differential equation. Because the equation is linear, the superposition principle holds, and a solution may be combined with other solutions for the discharge potential, e.g. uniform flow, multiple wells, or analytical elements (analytic element method). See also Groundwater flow equation Groundwater energy balance Submarine groundwater discharge Discharge (hydrology) Flux (transport definition) Darcy's Law References Freeze, R.A. & Cherry, J.A., 1979. Groundwater, Prentice-Hall. Hydrology Aquifers
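A worked example makes the discharge relation concrete. The sketch below is illustrative only: the numerical values are invented, and the two potential expressions are the reconstructed forms given above.

```python
# Worked example of the discharge relation Q = K * (dh/dl) * A, plus the two
# reconstructed discharge-potential expressions. Values are illustrative.

import math

def total_discharge(K, dh_dl, A):
    """Q = K * (dh/dl) * A  [m^3/s]."""
    return K * dh_dl * A

def potential_confined(K, H, h, C=0.0):
    """Phi = K*H*h + C for confined flow (reconstructed form)."""
    return K * H * h + C

def potential_unconfined(K, h, C=0.0):
    """Phi = (K/2)*h^2 + C for unconfined shallow flow (reconstructed form)."""
    return 0.5 * K * h ** 2 + C

# Sandy aquifer: K = 1e-4 m/s, head drops 1 m over 100 m, cross-section 50 m^2.
Q = total_discharge(K=1e-4, dh_dl=1.0 / 100.0, A=50.0)
print(f"Q = {Q:.2e} m^3/s")  # 5.00e-05 m^3/s

# Head recovered from an unconfined potential by inverting Phi = (K/2)*h^2:
Phi = potential_unconfined(K=1e-4, h=8.0)
h = math.sqrt(2.0 * Phi / 1e-4)
print(f"h = {h:.1f} m")  # 8.0 m, as expected
```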
Groundwater discharge
[ "Chemistry", "Engineering", "Environmental_science" ]
397
[ "Hydrology", "Aquifers", "Environmental engineering" ]
8,569,924
https://en.wikipedia.org/wiki/University%20of%20Tehran%2C%20Department%20of%20Mining%20Engineering
The University of Tehran, Department of Mining Engineering was among the first four engineering departments established in the University of Tehran. The main responsibility of this department, as the first formal institution of mining engineering education in Iran, was to train experts and professionals who could help exploit the country's substantial potential in natural resources, mineral deposits, and mining prospects. Within the last 72 years, many mining engineers have received education and training in this school at various levels. Currently, the School of Mining Engineering has about 290 undergraduate and 135 graduate students. Undergraduate students must complete 140 credit hours, including 3 credit hours of senior project; 20 credit hours of humanities; basic engineering math, physics, and chemistry; applied geology courses; and mining-related topics with several credit hours of field training. The School of Mining Engineering currently has 20 full-time and several prominent adjunct faculty members. It has several research laboratories and educational workshops, including Mineral Processing, Rock Mechanics, Mineralogy, Petrology, Cartography, Geophysics, Geochemistry, an XRD Lab, an Analytical Lab, an Industrial Mineral Application Lab, a Mineral and Rock Museum, and a computer center. References Mining Engineering universities and colleges in Iran Schools of mines University and college departments Mining in Iran
University of Tehran, Department of Mining Engineering
[ "Engineering" ]
241
[ "Schools of mines", "Engineering universities and colleges" ]
8,570,211
https://en.wikipedia.org/wiki/Heavitree%20stone
Heavitree stone is a type of breccia, red in colour, of very coarse texture and prone to weathering, which occurs naturally in the parish of Heavitree near the City of Exeter in Devon, England. It was quarried in the area from about 1350 to the 19th century, and was used to construct many of Exeter's older buildings, including Exeter Castle, the old city walls, and many of the almshouses and parish churches. Many ancient buildings in Exeter made of Heavitree stone were destroyed by enemy bombing during World War II. It was first referred to by Sir Henry De la Beche in 1839, as the "Conglomerates of Heavitree". Quarries The site of the historic quarry is represented today by "Quarry Lane" in Heavitree, where two quarry faces survive; another quarry existed in nearby Wonford. A quarry is first recorded in 1390. Description The stone comprises angular fragments and grains, up to 40 mm in diameter, of sandstone, chert, minerals, granite and volcanic rocks, all embedded in a matrix of finer sands and clay. As the stone was formed from sediment laid down by flash flooding in semi-arid conditions, the stone fragments are not rounded by the wearing of water, as are sedimentary deposits laid down in the sea. It dates to the Permian period, about 280 million years ago. References Further reading Dove, J. (1994). Exeter in Stone: an urban geology. Thematic Trails. School of Social Sciences, Oxford Brookes University, 45pp. De la Beche, H.T. (1839). Report on the Geology of Cornwall, Devon and West Somerset. Memoir of the Geological Survey of Great Britain. Longman, Orme, Brown, Green and Longmans, London, 648pp. Breccias Geology of Devon
Heavitree stone
[ "Materials_science" ]
378
[ "Breccias", "Fracture mechanics" ]
8,570,918
https://en.wikipedia.org/wiki/Nuclear%20Energy%20Institute
The Nuclear Energy Institute (NEI) is a nuclear industry trade association in the United States, based in Washington, D.C. Synopsis The Nuclear Energy Institute represents the nuclear technologies industry. NEI's stated mission "is to promote the use and growth of nuclear energy through efficient operations and effective policy." NEI works on legislative and regulatory issues impacting the industry, such as the preservation of nuclear plants and used nuclear fuel storage. The association represents the nuclear industry's interests before Congress and the Nuclear Regulatory Commission. It often produces research reports and testifies at federal and state congressional hearings. The nuclear energy industry that NEI represents and serves includes: commercial electricity generation, nuclear medicine including diagnostics and therapy, food processing and agricultural applications, industrial and manufacturing applications, uranium mining and processing, nuclear fuel and radioactive materials manufacturing, transportation of radioactive materials, and nuclear waste management. NEI is governed by a 47-member board of directors. The board includes representatives from the nation's 27 nuclear utilities, plant designers, architect/engineering firms and fuel cycle companies. Eighteen members of the board serve on the executive committee, which is responsible for NEI's business and policy affairs. History The Nuclear Energy Institute (NEI) was founded in 1994 from the merger of several nuclear energy industry organizations, the oldest of which was created in 1953. Specifically, in 1994, NEI was formed from the merger of the Nuclear Utility Management and Resources Council (NUMARC), which addressed generic regulatory and technical issues; the U.S. Council for Energy Awareness (USCEA), which conducted a national communications program; the American Nuclear Energy Council (ANEC), which conducted government affairs; and the nuclear division of the Edison Electric Institute (EEI), which handled issues involving used nuclear fuel management, nuclear fuel supply, and the economics of nuclear energy. In 1987, NUMARC and USCEA were created through a division of the Atomic Industrial Forum (AIF). USCEA had been founded in 1979 (in the aftermath of Three Mile Island) as the Committee for Energy Awareness; it changed its name to USCEA in January 1992, a choice of name that created ambiguity about the organization's nature. In a 1983 magazine interview, USCEA president and CEO Harold Finger stated, "I guess we chose our name very well. Many people ask us [if USCEA] is a government agency or bureaucracy." The Safe Energy Communications Council (SECC) charged USCEA with blatant misrepresentations in its advertising campaign. Its membership list as of June 1990 lists 31 major power companies. The AIF was created in 1953 to focus on the beneficial uses of nuclear energy. This was two years before the international "Atoms for Peace" conference held in Geneva in 1955, marking the dawn of the nuclear age. Current issues In addition to its core mission, NEI also sponsors a number of public communications efforts to build support for the industry and the expansion of nuclear energy, a number of which have come under attack from environmentalists and anti-nuclear activists. In 2006, NEI founded the Clean and Safe Energy Coalition (CASEnergy) to help build local support around the country for new nuclear construction. 
The co-chairs of the coalition are early Greenpeace member Patrick Moore and former United States Environmental Protection Agency Administrator and New Jersey Governor Christine Todd Whitman. As of April 2006, CASEnergy boasted 427 organizations and 454 individuals as members. In April 2004, the Austin Chronicle reported that NEI had hired the Potomac Communications Group to ghostwrite pro-nuclear op-ed columns to be submitted to local newspapers under the names of local residents. In a 2003 story in the Columbus Dispatch, NEI said that it engaged a public affairs agency to identify individuals with technical expertise in the nuclear energy industry to participate in the public debate; however, as many of these individuals have little experience in opinion writing for a non-technical audience, the agency provides assistance if requested, a common industry practice. In 1999, Public Citizen filed a complaint with the Federal Trade Commission charging that an NEI advertising campaign overstated the environmental benefits of nuclear energy to consumers living in markets where sales of electricity had been deregulated. In a ruling the following December, the FTC rejected those claims, concluding that NEI did not violate the law. It agreed that the advertisements were directed to policymakers and opinion leaders in forums that principally reach those who set national policy on energy and environmental issues, and therefore did not constitute "commercial speech"; it noted that in different circumstances, such as direct marketing of electricity, such advertising could be considered commercial speech and be subject to stricter substantiation. NEI ran other ads with similar content, most recently one released in September 2006 touting nuclear energy's non-emitting character and the role it can play in reducing American dependence on foreign sources of fossil fuels like oil and natural gas. In 2008, Greenpeace criticized NEI's public relations efforts and suggested that NEI's advertising about nuclear power was an example of greenwashing. In the first quarter of 2008, NEI spent $320,000 on lobbying the US federal government. Besides Congress, the nuclear group lobbied the White House, the Nuclear Regulatory Commission, and the departments of Commerce, Defense, Energy and others in the first three months of the year. NEI spent $1.3 million to lobby the federal government in 2007. In 2012, NEI quoted Kathryn Higley, professor of radiation health physics in the department of nuclear engineering at Oregon State University, who described the health impact of the Fukushima Daiichi nuclear accident as "really, really minor", adding that "the Japanese government was able to effectively block a large component of exposure in this population". Advocacy One of NEI's main focuses is advocating for policies that would promote beneficial uses of nuclear energy. NEI follows its National Nuclear Energy Strategy, which sets out four points for guiding policy: preserve, sustain, innovate, and thrive. Preserve aims to keep the nuclear power plants still in use today in operation. Sustain aims to maintain the operations of existing plants through more efficient practices and smarter regulation. Innovate emphasizes creating newer nuclear technologies that will produce greener energy. 
Lastly, thrive holds that doing well in the global nuclear energy marketplace is essential to the country's leadership. The most pressing of these points is the preservation of nuclear power plants. In the next few years, about half of the operating licenses for US nuclear plants will expire. However, NEI is helping provide information and push policy to increase the number of second license renewals. Under a second license renewal, a nuclear power plant can extend its original operating license for up to 20 years. This is important because plants that close without renewing their licenses will most likely not be replaced with other nuclear power plants; they will probably be replaced with less efficient plants that use fossil fuels, which could erase up to one-quarter of the environmental benefits that these nuclear plants have contributed. Along with its policy advocacy, NEI is also dedicated to promoting the advantages of nuclear energy. Some of the main advantages NEI cites are benefits in climate, national security, sustainable development, infrastructure, and air quality. Nuclear energy helps the climate by contributing to decarbonization, and NEI argues that a country leading in nuclear energy development would also be leading in the world. Nuclear power plants would be able to function even if something were to happen to the electrical grid around them, which would greatly help the US. The sustainable development that increased nuclear energy brings would be very beneficial; NEI claims that it could even help alleviate poverty, hunger, and stagnant economies, because it would provide individuals with clean, low-cost, secure energy. Infrastructure within America has not kept pace with Americans' rapidly increasing power needs. To keep the gap between power demand and the expansion of infrastructure from widening, NEI suggests maintaining existing nuclear power plants, with the knowledge that after a power plant has closed, it is gone forever. NEI also advocates for more nuclear power infrastructure because it creates hundreds of jobs that remain consistent for years to come. NEI advocates for nuclear energy as the largest source of clean energy within the United States, already producing more than half of the nation's clean electricity. Because nuclear energy lacks emissions, it is a beneficial option for states attempting to comply with the Clean Air Act. Key personnel President and Chief Executive Officer: Maria G. Korsnick Chairman: Ralph Izzo Vice Chairman: Paul D. Koonce Executive Vice President and Chief Financial Officer: Phyllis M. Rich Senior Vice President, External Affairs: Neal M. Cohen Senior Vice President, General Counsel and Secretary: Ellen C. Ginsberg Vice President, Policy Development and Public Affairs: John F. Kotek Vice President, Government Affairs: Beverly K. Marshall Chief Nuclear Officer and Senior Vice President, Generation and Suppliers: Doug E. True Vice President, Generation and Suppliers: Jennifer L. Uhle Vice President, Communications: Jon C. Wentzel See also Nuclear power in the United States Office of Nuclear Energy United States Department of Energy Atomic Industrial Forum American Nuclear Society Institute of Nuclear Power Operations Frank L. 
"Skip" Bowman (Biographic details) Institute of Nuclear Materials Management References External links Clean and Safe Energy Coalition (CASEnergy) Skip Bowman speech at LA Town Hall Dr. Patrick Moore at NEA 2006 Stewart Brand at NEA 2006 SourceWatch on the Nei Business organizations based in the United States Nuclear industry organizations Nuclear organizations 501(c)(6) nonprofit organizations Organizations established in 1994 Trade associations based in the United States Lobbying organizations based in Washington, D.C. Organizations based in Washington, D.C.
Nuclear Energy Institute
[ "Engineering" ]
2,039
[ "Nuclear industry organizations", "Nuclear organizations", "Energy organizations" ]
8,571,568
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Vulpecula
This is the list of notable stars in the constellation Vulpecula, sorted by decreasing brightness. See also List of stars by constellation SGR 1935+2154 Notes References List Vulpecula
List of stars in Vulpecula
[ "Astronomy" ]
42
[ "Lists of stars by constellation", "Vulpecula", "Constellations" ]
8,571,733
https://en.wikipedia.org/wiki/Noise%2C%20vibration%2C%20and%20harshness
Noise, vibration, and harshness (NVH), also known as noise and vibration (N&V), is the study and modification of the noise and vibration characteristics of vehicles, particularly cars and trucks. While noise and vibration can be readily measured, harshness is a subjective quality, and is measured either via jury evaluations, or with analytical tools that can provide results reflecting human subjective impressions. The latter tools belong to the field of psychoacoustics. Interior NVH deals with noise and vibration experienced by the occupants of the cabin, while exterior NVH is largely concerned with the noise radiated by the vehicle, and includes drive-by noise testing. NVH is mostly engineering, but objective measurements often fail to predict or correlate well with the subjective impression of human observers. For example, although the ear's response at moderate noise levels is approximated by A-weighting, two different noises with the same A-weighted level are not necessarily equally disturbing (a sketch of the A-weighting computation is given below). The field of psychoacoustics is partly concerned with this correlation. In some cases, the NVH engineer is asked to change the sound quality, by adding or subtracting particular harmonics, rather than making the vehicle quieter. Noise, vibration, and harshness can be roughly distinguished by frequency: vibration is perceived between 0.5 Hz and 50 Hz, noise between 20 Hz and 5000 Hz, and harshness arises from the coupling of noise and vibration. Sources of NVH The sources of noise in a vehicle can be classified as aerodynamic (e.g., wind, cooling fans of the HVAC system), mechanical (e.g., engine, driveline, tire contact patch and road surface, brakes), or electrical (e.g., electromagnetically induced acoustic noise and vibration from electrical actuators, the alternator, or the traction motor in electric cars). Noise is either structure-borne or airborne. Many problems are generated as either vibration or noise, transmitted via a variety of paths, and then radiated acoustically into the cabin; these are classified as "structure-borne" noise. Others are generated acoustically and propagated by airborne paths. Structure-borne noise is attenuated by isolation, while airborne noise is reduced by absorption or through the use of barrier materials. Vibrations are sensed at the steering wheel, the seat, armrests, or the floor and pedals. Some problems are sensed visually, such as the vibration of the rear-view mirror or header rail on open-topped cars. Tonal versus broadband NVH can be tonal, such as engine noise, or broadband, such as road noise or wind noise. Some resonant systems respond at characteristic frequencies but are driven by random excitation; therefore, although they look like tonal problems on any one spectrum, their amplitude varies considerably. Other problems are self-resonant, such as whistles from antennas. Tonal noises often have harmonics: the noise spectrum of Michael Schumacher's Ferrari at 16,680 rpm, for example, shows harmonics at multiples of engine speed. Instrumentation Typical instrumentation used to measure NVH includes microphones, accelerometers, and force gauges or load cells. Many NVH facilities have semi-anechoic chambers and rolling road dynamometers. Typically, signals are recorded directly to the hard drive via an analog-to-digital converter. In the past, magnetic or DAT tape recorders were used. 
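To make the A-weighting remark above concrete, here is a minimal sketch of the standard A-weighting curve as defined in IEC 61672; the frequencies printed are arbitrary example points.

```python
# Minimal sketch of the IEC 61672 A-weighting curve, which the text notes only
# approximates the ear's response: two noises with equal A-weighted levels are
# not necessarily equally disturbing.

import math

def a_weighting_db(f):
    """Return the A-weighting correction in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0  # offset normalises to 0 dB at 1 kHz

for freq in (50, 100, 1000, 4000):
    print(f"{freq:5d} Hz -> {a_weighting_db(freq):+6.1f} dB")
```

The strong attenuation at low frequencies (about −30 dB at 50 Hz) is one reason an A-weighted level can understate the annoyance of low-frequency noise.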
The integrity of the signal chain is very important: typically, each instrument used is fully calibrated in a laboratory once per year, and any given setup is calibrated as a whole once per day. Laser scanning vibrometry is an essential tool for effective NVH optimization. The vibrational characteristics of a sample are acquired full-field under operational or excited conditions, and the results represent the actual vibrations; because the sensing medium is light, no added mass influences the measurement. Investigative techniques Techniques used to help identify NVH problems include part substitution, modal analysis, rig squeak and rattle tests (complete vehicle or component/system tests), lead cladding, acoustic intensity, transfer path analysis, and partial coherence. Most NVH work is done in the frequency domain, using fast Fourier transforms to convert the time domain signals into the frequency domain (see the sketch below). Wavelet analysis, order analysis, statistical energy analysis, and subjective evaluation of signals modified in real time are also used. Computer-based modeling NVH analysis needs good representative prototypes of the production vehicle for testing. These are needed early in the design process, as the solutions often require substantial modification to the design, forcing engineering changes which are much less expensive when made early. These early prototypes are very expensive, so there has been great interest in computer-aided predictive techniques for NVH. One example is modeling for structure-borne noise and vibration analysis. When the phenomenon being considered occurs at low frequency – below, for example, 25–30 Hz, as in the idle shake of the powertrain – a multi-body model can be used. In contrast, when the phenomenon being considered occurs at relatively high frequency – for example, above 1 kHz – a statistical energy analysis (SEA) model may be a better approach. For the mid-frequency band, various methodologies exist, such as vibro-acoustic finite element analysis and boundary element analysis. The structure can be coupled to the interior cavity to form a fully coupled equation system. Other techniques also exist that can mix measured data with finite element or boundary element data. Typical solutions There are three principal means of improving NVH: reducing the source strength, as in making a noise source quieter with a muffler or improving the balance of a rotating mechanism; interrupting the noise or vibration path, with barriers (for noise) or isolators (for vibration); and absorbing the noise or vibration energy, as for example with foam noise absorbers or tuned vibration dampers. Deciding which of these (or what combination) to use in solving a particular problem is one of the challenges facing the NVH engineer. Specific methods for improving NVH include the use of tuned mass dampers, subframes, balancing, modifying the stiffness or mass of structures, retuning exhausts and intakes, modifying the characteristics of elastomeric isolators, adding sound deadening or absorbing materials, and using active noise control. In some circumstances, substantial changes in vehicle architecture may be the only way to cure some problems cost-effectively. Not-for-profit organizations such as the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) and the Vibration Isolation and Seismic Control Manufacturers Association (VISCMA) provide specifications, standards, and requirements that cover a wide array of industries including electrical, mechanical, plumbing, and HVAC. 
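The frequency-domain workflow mentioned under "Investigative techniques" can be sketched in a few lines. The signal below is synthetic: a 120 Hz tone standing in for the second engine order of a 3600-rpm engine, plus random noise; the sample rate and amplitudes are invented for illustration.

```python
# Minimal frequency-domain sketch: a time signal containing a tonal engine
# "order" plus broadband noise is converted with an FFT, and the tonal
# component shows up as a peak at its frequency.

import numpy as np

fs = 8192                # sample rate, Hz
t = np.arange(fs) / fs   # one second of data
# 2nd engine order of a 3600-rpm engine: (3600 / 60) * 2 = 120 Hz tone
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(fs)

window = np.hanning(len(signal))          # reduce spectral leakage
spectrum = np.fft.rfft(signal * window)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
magnitude = np.abs(spectrum) / len(signal)

peak = freqs[np.argmax(magnitude)]
print(f"dominant component: {peak:.0f} Hz")  # ~120 Hz, the tonal order
```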
See also Acoustic camera Acoustic quieting Engine balance Health effects from noise Noise control Noise mitigation Vibration calibrator Vibration isolation Acoustical measurements and instrumentation References Bibliography Baxa (1982). Noise Control in Internal Combustion Engines. Beranek. Acoustics. Griffin. Handbook of Human Vibration. Harris. Shock and Vibration Handbook. Thomson. Theory of Vibration with Applications. External links Agilent's Fundamentals of Signal Analysis Basics of NVH Dr. Pawan Pingle Mechanical vibrations Automotive engineering Noise control
Noise, vibration, and harshness
[ "Physics", "Engineering" ]
1,530
[ "Structural engineering", "Automotive engineering", "Mechanics", "Mechanical engineering by discipline", "Mechanical vibrations" ]
8,571,929
https://en.wikipedia.org/wiki/Smoking%20room
A smoking room (or smoking lounge) is a room which is specifically provided and furnished for smoking, generally in buildings where smoking is otherwise prohibited. Locations and facilities Smoking rooms can be found in public buildings such as airports, and in semi-public buildings such as workplaces. Such rooms are commonly equipped with chairs, ashtrays and ventilation, and are usually free to enter, although there may be a smoking age restriction. A cigarette company sometimes sponsors these smoking rooms, displaying its brand names on the room walls and financing the room or its maintenance. Cigarette companies have worked hard to ensure smoking was accommodated in major airports, which are high-profile locations serving many people who are often bored or nervous. Initially, providing smoking and no-smoking areas was their goal, but when that policy failed they fell back on ventilated smoking rooms. Historical use in private British houses When the Crimean War popularized Turkish tobacco in Great Britain during the 1850s, smoking gained fashionable popularity there but was considered indelicate. After dinner in a large private house, the gentlemen might retreat from the ladies to a smoking room, furnished with velvet curtains and decorated to masculine tastes (wealthy owners would often choose Turkish themes and weapons collections), and replace their tail coats with comfortable velvet smoking jackets and caps. The velvet was intended to absorb the smoke, to avoid contaminating other rooms and clothes. In the United Kingdom smoking became illegal on 1 July 2007 in any enclosed public place, so that smoking rooms cannot be provided and facilities must be outdoors; apart from private homes, almost the only legal exception is for smoking rooms in tobacconists where customers may sample the wares. Mental health units in the NHS providing long-term in-patient care also have an exemption allowing the provision of designated smoking rooms for patients. See also Smoking ban Passive smoking References Rooms
Smoking room
[ "Engineering" ]
369
[ "Rooms", "Architecture" ]
8,571,969
https://en.wikipedia.org/wiki/Antitaenite
Antitaenite is a meteoritic metal alloy mineral composed of iron (Fe) and 20–40% nickel (Ni), with traces of other elements, that has a face-centered cubic crystal structure. There are three other known Fe-Ni meteoritic minerals: kamacite, taenite, and tetrataenite. The existence of antitaenite as a new mineral species, occurring in both iron meteorites and in chondrites, was first proposed in 1995, but the IMA has not approved paramagnetic antitaenite; instead the organization regards it as a variety of taenite. Gamma (fcc) Fe-Ni alloys with low Ni content (about 25% Ni) are probably inhomogeneous on a nanometer scale. Antitaenite and taenite have the same crystal structure (face-centered cubic) and can have the same chemical composition (the same proportions of Fe and Ni), but they differ in their electronic structures: taenite has a high magnetic moment whereas antitaenite has a low magnetic moment. This difference in electronic structure was first established in 1999 and arises from a high-magnetic-moment to low-magnetic-moment transition occurring in the Fe-Ni bi-metallic alloy series. The same electronic structure transition is believed to be a causal factor in Invar behaviour. See also Glossary of meteoritics References External links Mindat with location data Webmineral data Iron minerals Nickel minerals Meteorite minerals
Antitaenite
[ "Chemistry" ]
303
[ "Alloys", "Alloy stubs" ]
8,573,380
https://en.wikipedia.org/wiki/Czeslaw%20Brzozowicz
Czeslaw Peter Brzozowicz (June 28, 1911 - November 24, 1997) was a consulting engineer for the CN Tower, the Toronto-Dominion Centre, and the first Toronto subway line, among many other construction projects in Canada. Biography Born in Sokolow Malopolski, Poland, in 1911, Brzozowicz graduated in civil engineering from the University of Lwow in Poland only months before the Nazi invasion of Poland. He served with the Polish army in Poland and France for three years before obtaining a Canadian visa in 1942 under an agreement with the government-in-exile to send engineers for Canada's war industries. Like many immigrants, he arrived in Canada with a few dollars and his professional training. His first job was as a surveyor, laying out the highway between Prince George and Prince Rupert in British Columbia. In 1944, Brzozowicz joined Marathon Paper Mills in Toronto, designing their Northern Ontario plants. At the end of the war, sensing Canada was about to boom, he launched a private practice as a consulting engineer. His first client was Canadian Breweries Ltd., whose expansion plans - typical for the time - called for several reinforced concrete structures in Toronto, Waterloo, Windsor and Montreal. Brzozowicz made a name for himself designing concrete structures reinforced with embedded steel bars. This was a relatively uncommon practice in Canada, since the short construction season was considered unfavourable for poured concrete walls; in this respect, Brzozowicz was at the forefront of an engineering trend that would become enormously popular. Brzozowicz designed grain elevators and other industrial structures in Toronto, Winnipeg and Montreal. Working with Pigott Construction, he contributed to such Canadian landmarks as the A.V. Roe aircraft facility and one of the world's largest automobile factories, General Motors' Autoplex in Oshawa, Ontario. Brzozowicz also consulted extensively on Toronto's first subway line, which ran under Yonge Street from Union Station to Eglinton Avenue. C.P. Brzozowicz Ltd. supplied engineering expertise for the construction of the Commonwealth's tallest building of the 1960s, Mies van der Rohe's Toronto-Dominion Bank Tower. His expertise was also used in the design of the world's first tower with a revolving restaurant, the Skylon Tower in Niagara Falls, Ontario. He was later involved in the crucial shoring of the CN Tower, the world's 9th-tallest freestanding structure - and one made of reinforced concrete. Personal life Brzozowicz was married to his wife Danuta for 48 years, and together they raised three daughters and three sons, dividing their time between their north Toronto home, a ramshackle cottage on Georgian Bay and an apple farm near Collingwood, Ontario. He made sure each of his children received a traditional Catholic education. The Brzozowicz family home, dressed in warm red brick and built in 1957, was one of the more than 700 projects designed by Brzozowicz's firm in his lifetime. Brzozowicz died of pneumonia in Toronto in 1997. It is a fitting legacy that when the family home was sold and demolished some 10 years after his passing, the stoutly built residence proved difficult to raze. References Lives Lived, Globe and Mail, January 27, 1998. Author: Mark Toljagic. Canadian civil engineers Polish emigrants to Canada 1911 births 1997 deaths Structural engineers
Czeslaw Brzozowicz
[ "Engineering" ]
694
[ "Structural engineering", "Structural engineers" ]
8,573,406
https://en.wikipedia.org/wiki/Phenothrin
Phenothrin, also called sumithrin and d-phenothrin, is a synthetic pyrethroid that kills adult fleas and ticks. It has also been used to kill head lice in humans. d-Phenothrin is used as a component of aerosol insecticides for domestic use. It is often used with methoprene, an insect growth regulator that interrupts the insect's biological life cycle by killing the eggs. Effects Phenothrin is primarily used to kill fleas and ticks. It is also used to kill head lice in humans, but studies conducted in Paris and the United Kingdom have shown widespread resistance to phenothrin. It is extremely toxic to bees: a U.S. Environmental Protection Agency (EPA) study found that a dose of 0.07 micrograms was enough to kill a honey bee. It is also extremely toxic to aquatic life, with one study showing concentrations of 0.03 ppb killing mysid shrimp. Long-term exposure at doses of 100 milligrams per kilogram of body weight per day, or above, has increased the risk of liver cancer in rats and mice. It is capable of killing mosquitoes, but it remains poisonous to cats and dogs, with seizures and deaths reported due to poisoning; specific data on concentrations or exposure are lacking. Phenothrin has been found to possess antiandrogen properties, and was responsible for a small epidemic of gynecomastia via isolated environmental exposure. The EPA has not assessed its effect on cancer in humans. However, one study performed by the Mount Sinai School of Medicine linked sumithrin with breast cancer, the link being its effect of increasing the expression of a gene responsible for mammary tissue proliferation. EPA action In 2005, the U.S. EPA cancelled permission to use phenothrin in several flea and tick products, at the request of the manufacturer, Hartz Mountain Industries. The products were linked to a range of adverse reactions, including hair loss, salivation, tremors, and numerous deaths in cats and kittens. In the short term, the agreement called for new warning labels on the products. As of March 31, 2006, the sale and distribution of Hartz's phenothrin-containing flea and tick products for cats has been terminated. However, the EPA's product cancellation order did not apply to Hartz flea and tick products for dogs, and Hartz continues to produce many of its flea and tick products for dogs. See also Permethrin Resmethrin Deltamethrin References External links d-Phenothrin general information – National Pesticide Information Center Pyrethrins and Pyrethroids Fact Sheet – National Pesticide Information Center Pyrethrins and Pyrethroids Pesticide Information Profile – Extension Toxicology Network Chrysanthemate esters Endocrine disruptors Nonsteroidal antiandrogens Pest control (3-phenoxyphenyl)methyl 2,2,3-trimethylcyclopropane-1-carboxylates
Phenothrin
[ "Chemistry", "Biology" ]
639
[ "Endocrine disruptors", "Pests (organism)", "Pest control" ]
8,573,560
https://en.wikipedia.org/wiki/Ferrofluidic%20seal
Ferrofluidic is the brand name of a staged magnetic liquid rotary sealing mechanism made by the Ferrotec Corporation. Ferrofluidic seals, also known as magnetic liquid rotary seals, are employed in various rotating equipment to facilitate rotary motion while ensuring a hermetic seal. This is achieved through a physical barrier constituted by a ferrofluid, which is held in position by a permanent magnet. Developed in the 1970s, ferrofluidic seals have been utilized in various specialized applications, including computer disk drives, vacuum systems, and nuclear technologies. Origins Ferrofluidic seals rely on the general principle of ferrofluids - fluids that display magnetic attraction. Following research on ferrofluids during the 1960s, the ferrofluidic seal was first patented in 1971 by R.E. Rosensweig (USP 3,620,584), who subsequently founded Ferrofluidics Corporation with R. Moskowitz. Benefits and limitations Magnetic liquid rotary seals operate with little maintenance and minimal leakage in a range of applications. Ferrofluid-based seals used in industrial and scientific applications are most often packaged in mechanical seal assemblies called rotary feed-throughs, which also contain a central shaft, ball bearings, and an outer housing. The ball bearings provide two functions: maintaining the shaft's centering within the seal gap and supporting external loads. The bearings are the only mechanical wear items, as the dynamic seal is formed by a series of rings of ultra-low-vapor-pressure, oil-based liquid, held magnetically between the rotor and stator. As the ferrofluid retains its liquid properties even when magnetized, drag torque is very low. With the use of permanent magnets, the operating life and equipment maintenance cycles are generally very long. Ferrofluid-sealed feed-throughs reach their greatest performance levels by optimizing features such as ferrofluid viscosity and magnetic strength, magnet and steel materials, and bearing arrangements, and by using water cooling for applications with extremely high speeds or temperatures. Ferrofluid-sealed feed-throughs can operate in environments including ultra-high vacuum (below 10⁻⁸ mbar), temperatures over 1,000 °C, tens of thousands of RPM, and multiple-atmosphere pressures. Magnetic liquid seals can be engineered for a range of applications and exposures, but are generally limited to sealing gases and vapors, not direct pressurized liquid, because the ferrofluid seal fails prematurely when it seals a liquid. In 2020, research was underway to try to solve this problem. Each particular combination of construction materials and design features has practical limits concerning temperature, differential pressure, speed, applied loads, and operating environment, and as such, devices must be designed to meet the criteria for their applications. Necessary features may include multiple ferrofluid stages, water cooling, customized materials, permanent magnets, and exotic bearings. Ferrofluid-based seals have extremely low leak rates; however, they cannot reach the levels of welded connections or other all-metal, static (non-rotating) seals. References External links Video demonstration of a magnetic liquid rotary seal Seals (mechanical) Magnetic devices
Ferrofluidic seal
[ "Physics" ]
654
[ "Seals (mechanical)", "Materials", "Matter" ]
8,573,860
https://en.wikipedia.org/wiki/-ose
The suffix -ose is used in organic chemistry to form the names of sugars. This Latin suffix means "full of", "abounding in", "given to", or "like". Numerous systems exist to name specific sugars more descriptively. The suffix is also used more generally in English to form adjectives from nouns, with the sense "full of", as in "verbose": wordy, full of words. Monosaccharides, the simplest sugars, may be named according to the number of carbon atoms in each molecule of the sugar: a pentose is a five-carbon monosaccharide, and a hexose is a six-carbon monosaccharide. Aldehyde monosaccharides may be called aldoses; ketone monosaccharides may be called ketoses. Larger sugars such as disaccharides and polysaccharides can be named to reflect their qualities. Lactose, a disaccharide found in milk, gets its name from the Latin word for milk combined with the sugar suffix; its name means "milk sugar". The polysaccharide that makes up plant starch is named amylose, or "starch sugar"; see amyl. There are two theories about the origin of the -ose suffix in chemistry: it may derive from glucose, an important hexose whose name came from Greek γλυκύς = "sweet"; or from sucrose, whose name came from the Latin word for "sugar" plus the common Latin adjective-forming suffix -ōsus (the Latin adjective would mean "sugary"). References ose English suffixes
-ose
[ "Chemistry" ]
354
[ "Chemistry suffixes" ]
8,574,103
https://en.wikipedia.org/wiki/Residual%20body
In lysosomal digestion, residual bodies are vesicles containing indigestible materials. Residual bodies are either secreted by the cell via exocytosis (this generally only occurs in macrophages), or they become lipofuscin granules that remain in the cytosol indefinitely. Longer-living cells like neurons and muscle cells usually have a higher concentration of lipofuscin than other more rapidly proliferating cells. See also Autophagy Phagocytosis References Sources Cellular processes
Residual body
[ "Biology" ]
106
[ "Cellular processes" ]
8,574,224
https://en.wikipedia.org/wiki/MSN%20WiFi%20Hotspots
MSN WiFi Hotspots, previously Windows Live WiFi Hotspot Locator, was a website that helped users to locate wireless Internet hotspots worldwide and view their positions on a map using Live Search Maps. The service was discontinued on June 10, 2008. Windows Live WiFi Center Windows Live WiFi Center was part of Microsoft's Windows Live services that helped users find and connect to wireless networks around the world. It allowed users to search for available wireless networks and displayed information about them such as security configuration and signal strength. In addition, users could add wireless networks as favorites, track connection history, and manage network preferences. It used VPN technology to secure a wireless Internet connection on unsecured networks. The service allowed users to search for free and fee-based wireless networks, showing information such as address, description, available amenities, service providers and location using Live Search Maps. Windows Live WiFi Center was discontinued after the rebranding of Windows Live WiFi Hotspot Locator to MSN WiFi Hotspots. Requirements Windows Live WiFi Center required the following software to be installed first: Microsoft .NET Framework 2.0, Microsoft Core XML Services (MSXML) 6.0, and Wi-Fi Protected Access 2 (WPA2). External links Un-Wired Official Team Blog MSN Rebranding on Liveside.net Wi-Fi
MSN WiFi Hotspots
[ "Technology" ]
290
[ "Wireless networking", "Wi-Fi" ]
8,575,099
https://en.wikipedia.org/wiki/Intra-frame%20coding
Intra-frame coding is a data compression technique used within a video frame, enabling smaller file sizes and lower bitrates, with little or no loss in quality. Since neighboring pixels within an image are often very similar, rather than storing each pixel independently, the frame image is divided into blocks, and the typically minor differences between pixels can be encoded using fewer bits. Intra-frame prediction exploits spatial redundancy, i.e. correlation among pixels within one frame, by calculating prediction values through extrapolation from already coded pixels for effective delta coding. It is one of the two classes of predictive coding methods in video coding; its counterpart is inter-frame prediction, which exploits temporal redundancy. So-called intra frames, which are coded independently of other frames, use only intra coding. Temporally coded predicted frames (e.g. MPEG's P- and B-frames) may use intra- as well as inter-frame prediction. Usually only a few of the spatially closest known samples are used for the extrapolation. Formats that operate sample by sample, like Portable Network Graphics (PNG), can usually use one of four adjacent pixels (above, above left, above right, left) or some function of them, such as their average (a sketch of this kind of predictor is given below). Block-based (frequency-transform) formats prefill whole blocks with prediction values extrapolated from usually one or two straight lines of pixels that run along their top and left borders. Inter-frame coding was first specified by the CCITT in 1988–1990, in H.261, which was intended for teleconferencing and ISDN video telephony. Coding process Data is usually read from a video camera or a video card in the YCbCr data format (often informally called YUV for brevity). The coding process varies greatly depending on which type of encoder is used (e.g., JPEG or H.264), but the most common steps usually include: partitioning into macroblocks, transformation (e.g., using a DCT or wavelet), quantization and entropy encoding. Applications It is used in codecs like ProRes: a group of pictures codec without inter frames. See also Video compression I-Frame Delay Inter frame Group of pictures application of frame types Motion compensation External links http://www.cs.cf.ac.uk/Dave/Multimedia/node248.html MPEG Video compression
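The sample-by-sample prediction described above can be illustrated with a PNG-style "average" predictor. This is a minimal sketch, not the code of any real codec: each sample is predicted from the average of its left and upper neighbours, and only the residual is stored; residuals cluster near zero and therefore entropy-code well.

```python
# Minimal sketch of sample-by-sample intra prediction in the spirit of PNG's
# "Average" filter. Function names are illustrative, not from any codec.

def encode_average_predictor(rows):
    """Replace each sample with its difference from the average of the
    left and upper neighbours (missing neighbours count as 0)."""
    height = len(rows)
    width = len(rows[0]) if height else 0
    residuals = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            left = rows[y][x - 1] if x > 0 else 0
            up = rows[y - 1][x] if y > 0 else 0
            prediction = (left + up) // 2
            residuals[y][x] = (rows[y][x] - prediction) % 256  # wrap like PNG
    return residuals

def decode_average_predictor(residuals):
    """Invert the prediction in raster order, rebuilding each sample from
    its residual and the already decoded neighbours."""
    height = len(residuals)
    width = len(residuals[0]) if height else 0
    rows = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            left = rows[y][x - 1] if x > 0 else 0
            up = rows[y - 1][x] if y > 0 else 0
            prediction = (left + up) // 2
            rows[y][x] = (residuals[y][x] + prediction) % 256
    return rows

image = [[100, 101, 103], [102, 103, 104], [104, 105, 107]]
res = encode_average_predictor(image)
assert decode_average_predictor(res) == image  # lossless round trip
print(res)  # residuals cluster near 0, so they entropy-code well
```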
Intra-frame coding
[ "Technology" ]
503
[ "Multimedia", "Computing stubs", "MPEG" ]
8,576,047
https://en.wikipedia.org/wiki/Restaurant%20media
Restaurant media is an emerging form of retail media advertising used in cafeterias, fast food and family restaurants, and diners, which reaches consumers while they dine. For decades most fast food restaurant chains employed various in-store advertising media such as billboards, posters and paper tray covers; these media are rapidly being replaced by digital signage. The concept of delivering multimedia content to customers of fast food restaurants and food courts emerged in the early 1990s and has become increasingly popular in recent years. Burger King and Tim Hortons were among the first fast food restaurant chains to deploy digital signage projects involving plasma displays, LCD panels and self-service interactive kiosks in their restaurants. Overview Plasma displays and liquid crystal display panels: Flat panels are the most common digital signage or narrowcasting vehicles and are commonly located at or above the food service counter. This form of retail media usually attracts visitor attention with custom programming and helps customers make the best product selection. In some cases it is used for third-party advertising. Interactive kiosks: Some fast food restaurants have deployed interactive kiosks, allowing customers to purchase food or third-party products while being exposed to restaurant or third-party advertising. Interactive table-top displays: The use of table-top displays is emerging mainly in full-service restaurants, providing customers with the ability to call the waiter, order menu items online, and access the Internet, television and custom restaurant programming. Interactive multimedia food service trays: One of the major recent breakthroughs in restaurant media was invented in late 2006 by Canadian entrepreneurs and is currently being evaluated for launch by major fast food restaurant chains. Gaming corners: In order to better target the youth segment and to generate additional revenue from third-party sponsorship, several restaurant chains are launching multimedia gaming corners; McDonald's and Burger King have installed video game console systems in certain regions. Internet access outlets: Internet access points are becoming popular in cafeterias, shopping malls and full-service restaurants and are generally used to attract customers to the restaurant as well as expose them to third-party advertising. Interactive fast food ordering systems: To serve large groups of take-out customers and to improve service for eat-in customers during peak times, many restaurants have deployed touch screen ordering systems, usually located at the entrance to the restaurant or in the drive-through area. Table-top ordering systems: This is a new technology trend that enables patrons to view the menu, place orders, play games and pay at the table. These table-top systems are being used at Applebee's, Olive Garden and many other chain restaurants. Trends Strong competition will continue among food service players as they attempt to capture business from time-starved consumers. Major players are already looking for innovative ways to appeal to customers, as evidenced by the recent renovation announcements from food service giant McDonald's Corp. The restaurant chain is establishing what it calls the "next generation" of McDonald's – restaurants equipped with leather chairs, plasma televisions, and wireless internet. 
The goal of the renovation is to create a setting to win back customers, particularly young adults and families, and even attract a new type of customer who likes to linger throughout the day. The restaurants are being designed with three key zones: a fast zone, a social zone and a "linger" zone. To date, restaurants with this new format have seen increased guest counts as well as increased sales, effectively demonstrating that consumers' expectations regarding the dine-in environment are changing. The major players in the food service market have been competing with new product introductions, healthier menu items and new, friendlier restaurant designs. McDonald's has begun a $5 billion renovation project to try to encourage patrons to linger longer in its restaurants. This demonstrates two things: one, traditional "fast food" restaurants are looking for ways to keep customers in the store longer in an attempt to generate more purchases; and two, the old value proposition behind fast food (delivering a meal cheaply and quickly) is also changing. Consumers are becoming so immersed in technology, media and marketing that they now expect to be entertained while they dine. Consumer trends The U.S. saw growth in the number of households between 1990 and 2005 that outpaced population growth. This resulted in a net decrease in the number of people per household, fuelled mainly by single-person household growth (a 30% increase over the period) as well as two-person household growth (a 25% increase over the period). Full-service and fast food restaurants will continue to benefit from this trend, as single- and two-person households find it more economical to eat meals out rather than prepare them at home. A research study by Ipsos-Insight also found that 32% of Americans currently eat at restaurants at least a few days a week and 61% do so at least once a week. Disposable incomes in the U.S. are predicted to increase between 2005 and 2010, as a result of continued growth in the number of dual-income households. This is a favorable trend for the food service industry for two reasons. First, an increase in the number of dual-income households means that there is increasing time pressure on the household and a willingness to seek out convenient dining options. Second, increased disposable income typically indicates people will spend more in restaurants. This dual-income segment also continually looks for an appealing atmosphere which allows them to dine with friends and family. Technology trends Attempts have been made by several companies to digitize the materials on the restaurant tray cover and enhance the consumer experience by integrating multimedia devices into the fast food trays. In 2006, Mediox, a manufacturer of multimedia tablets, showcased the first prototypes of multimedia-enabled trays. Regular consumer tablets and multimedia phones quickly replaced the multimedia food trays, and most traditional and quick service restaurants now focus on delivering content via in-store wall-mounted and cash register TVs. McDonald's has chosen to use self-ordering kiosks as a way of showing customers a wider range of choices: just as online shopping makes it easy to "add" more items to the basket, so too do the self-ordering kiosks. Standard restaurants have begun to embrace digital menus. 
Instead of the standard paper menu, some restaurants are beginning to use a tablet-type device that allows the customer to engage more with the menu and ultimately make a more satisfying choice. Another technology that has advanced from paper form into digital form is the loyalty scheme. Classic stamp cards are starting to be replaced with transaction-based loyalty schemes linked to the consumer's EFTPOS or credit card. Every time the card is used to make a purchase from the restaurant, points are awarded to an online account, and these can be cashed in for free items from the menu. Restaurants have also entered the mobile app market. Fast food companies such as Pizza Hut and Domino's are among the many restaurants that offer an app designed for easy ordering on the go. This allows for efficiency and lower wait times for busy customers who need a quick meal. Having this data on one's phone also allows the app to update, so that the customer's more frequently purchased items show up faster; along with online payments being possible through the app, this makes it even simpler for the consumer to choose, order, pay for and enjoy the food. The teen segment There are 73 million people under the age of 18 in the U.S., and these individuals represent the next generation of spenders, a marketing-savvy group that has grown up immersed in technology and the Internet and is accustomed to instantaneous delivery of information. Marketing to the teenage segment of the population is becoming increasingly difficult, as the segment now experiences targeted marketing from a very early age. It is hard for advertisers to cut through the media clutter and deliver messages that are relevant and will resonate with this increasingly savvy (yet lucrative) group. This group also tends to be the largest among early adopters of new technology; trends in gaming, PC and internet usage and mobile technology have all become ingrained in their way of life. According to a report from Forrester Research, promotions that work well with teens include advergames, instant-win games, online coupons, streaming video ads and cell phone promotions. Restaurant media that encompasses video ads and other multimedia content and encourages interactive game playing will effectively capture the attention of this segment. Competition in the food service industry Competition in the food service segment is also becoming increasingly fierce as restaurants look to attract diners from other types of restaurants in order to increase their own traffic. Competitors who have not kept up through menu innovation or by offering new services have felt the strongest competitive pressure. Restaurants that demonstrate innovation and creativity to improve convenience and service will be those that achieve differentiation from competitors. Moreover, those restaurants that are able to provide new concepts that appeal to a wider variety of customers will also realize increased returns. With new restaurants opening so often, it is very important that restaurants keep up to date with any advancements in order to remain relevant in the ever-changing market. Media is a basic technology that restaurants must stay up to date with. Fast food restaurants are far more advanced technologically than standard restaurants, as it is acceptable there to sacrifice human interaction for fast service and results. However, for a traditional restaurant, adding self-serving kiosks, for example, is not an ideal advancement. 
That is where social media platforms open up huge opportunities for restaurants that need to stay somewhat traditional in the physical restaurant. Social media is among the fastest-growing media platforms in the modern era, and restaurants have jumped onto the social media buzz in a unique and effective way. There are many apps available that help restaurants attract new customers and get their names out to the public. One type of app that has revolutionised restaurant-to-consumer engagement is the online search and discovery platform, such as Zomato and Yelp. Zomato and Yelp sell the restaurant an advertising platform, which in turn allows users of the app to search for restaurants of a certain type within a small radius of their location. This type of media allows restaurants to advertise directly to a very specific target audience, as customers refine their searches to find a restaurant to suit their needs. By signing up for platforms such as Zomato and Yelp, the restaurant is also telling customers that it is confident in the quality of its product, as the apps allow customers to post reviews of their experience at the restaurant. In 2009, 70% of all restaurants were listed on Yelp, and a positive correlation exists between the revenues of restaurants and their online reviews. Online review platforms are not the only form of social media that has revolutionised the restaurant industry; even the major platforms such as Facebook and Twitter have had a huge impact. An example of this was KFC's release of the Double Down burger. Whether the discussion of the Double Down was positive or negative, conversations through Facebook and especially Twitter were more successful at reaching consumers than any advertising campaign for the burger. The hype and controversy over the burger in May 2011 made it the most desired item on the Kentucky Fried Chicken menu. The idea of using social media to a restaurant's advantage is to get people talking about the restaurant or a menu item; this is essentially free advertising. Not only is it free, but 92% of all consumers believe recommendations from friends and family over any advertising campaign. Restaurants want to utilize the opportunity for free advertising via Facebook, as it is a platform that can be constantly updated with new items, deals or even new store locations. Photos and videos are a far more effective advertising medium, which makes Facebook a perfect medium for advertising products. References External links fmi – Food Marketing Institute National Restaurant Association Advertising New media
Restaurant media
[ "Technology" ]
2,416
[ "Multimedia", "New media" ]
8,576,385
https://en.wikipedia.org/wiki/System%20integration
System integration is defined in engineering as the process of bringing together the component sub-systems into one system (an aggregation of subsystems cooperating so that the system is able to deliver the overarching functionality) and ensuring that the subsystems function together as a system, and in information technology as the process of linking together different computing systems and software applications physically or functionally, to act as a coordinated whole. The system integrator integrates discrete systems using a variety of techniques such as computer networking, enterprise application integration, business process management or manual programming. System integration involves integrating existing, often disparate systems in such a way "that focuses on increasing value to the customer" (e.g., improved product quality and performance) while at the same time providing value to the company (e.g., reducing operational costs and improving response time). In the modern world, connected by the Internet, the role of system integration engineers is important: more and more systems are designed to connect, both within the system under construction and to systems that are already deployed.

Methods of integration
Vertical integration (as opposed to "horizontal integration") is the process of integrating subsystems according to their functionality by creating functional entities also referred to as silos. The benefit of this method is that the integration is performed quickly and involves only the necessary vendors; it is therefore cheaper in the short term. On the other hand, the cost of ownership can be substantially higher than with other methods, since in the case of new or enhanced functionality, the only possible way to scale the system is to implement another silo; reusing subsystems to create additional functionality is not possible.

Star integration, also known as spaghetti integration, is a process of systems integration where each system is interconnected to each of the remaining subsystems. When observed from the perspective of the subsystem being integrated, the connections are reminiscent of a star, but when the overall diagram of the system is presented, the connections look like spaghetti, hence the name of this method. The cost varies with the interfaces that the subsystems export. Where the subsystems export heterogeneous or proprietary interfaces, the integration cost can rise substantially. The time and cost needed to integrate the systems increase exponentially when adding additional subsystems. From the feature perspective, this method often seems preferable, due to the extreme flexibility of the reuse of functionality.

Horizontal integration, or Enterprise Service Bus (ESB), is an integration method in which a specialized subsystem is dedicated to communication between the other subsystems. This cuts the number of connections (interfaces) to only one per subsystem, which connects directly to the ESB. The ESB is capable of translating one interface into another, which cuts the costs of integration and provides extreme flexibility. With systems integrated using this method, it is possible to completely replace one subsystem with another that provides similar functionality but exports different interfaces, completely transparently to the rest of the subsystems. The only action required is to implement the new interface between the ESB and the new subsystem.
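To make the bus idea concrete, here is a minimal sketch (all class, topic and field names are hypothetical, not from any particular ESB product): each subsystem talks only to the bus through one adapter that translates between its native interface and a shared common format, so replacing a subsystem means writing one new adapter rather than touching every peer.

```python
# Minimal sketch of horizontal (ESB) integration: N subsystems need N
# adapters, versus up to N*(N-1)/2 point-to-point links in star integration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CommonMessage:            # the bus's application-independent format
    topic: str
    payload: dict

class Bus:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[CommonMessage], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[CommonMessage], None]) -> None:
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, msg: CommonMessage) -> None:
        for handler in self._handlers.get(msg.topic, []):
            handler(msg)

class LegacyBillingAdapter:
    """Translates the (hypothetical) billing system's native records into
    the common format; billing never sees any other subsystem's interface."""
    def __init__(self, bus: Bus) -> None:
        self.bus = bus

    def on_native_record(self, record: dict) -> None:
        # native field names -> common format, in exactly one place
        self.bus.publish(CommonMessage("order.created",
                                       {"order_id": record["ORD#"],
                                        "zip": record["ZIP"]}))

bus = Bus()
bus.subscribe("order.created", lambda m: print("shipping sees:", m.payload))
LegacyBillingAdapter(bus).on_native_record({"ORD#": "A17", "ZIP": "90210"})
```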
The horizontal scheme can be misleading, however, if it is thought that the cost of intermediate data transformation or the cost of shifting responsibility over business logic can be avoided.

Industrial lifecycle integration is a system integration process that considers four categories or stages of integration: initial system implementation, engineering and design, project services, and operations. This approach incorporates the requirements of each lifecycle stage of the industrial asset when integrating systems and subsystems. The key output is a standardized data architecture that can function throughout the life of the asset.

A common data format is an integration method that avoids every adapter having to convert data to and from every other application's format. Enterprise application integration (EAI) systems usually stipulate an application-independent (or common) data format, and usually provide a data transformation service as well, to help convert between application-specific and common formats. This is done in two steps: the adapter converts information from the application's format to the bus's common format, and then semantic transformations are applied to it (converting zip codes to city names, splitting or merging objects from one application into objects in the other applications, and so on).

Challenges of integration
System integration can be challenging for organizations, and these challenges can diminish the overall return on investment after implementing new software solutions. Some of these challenges include lack of trust and willingness to share data with other companies, unwillingness to outsource various operations to a third party, lack of clear communication and responsibilities, disagreement among partners on where functionality should reside, the high cost of integration, difficulty finding good talent, data silos, and the absence of common API standards. These challenges create hurdles that "prevent or slow down business systems integration within and among companies". Clear communication and simplified information exchange are key elements in building long-term system integrations that can support business requirements.

Benefits of integration
On the other hand, system integration projects can be incredibly rewarding. For out-of-date legacy systems, different forms of integration offer the ability to enable real-time data sharing. This can, for example, enable publisher–subscriber data distribution models, consolidated databases and event-driven architectures; reduce manual user data entry (which can also help reduce errors); refresh or modernize an application's front end; and offload querying and reporting from expensive operational systems to cheaper commodity systems (which can save costs, enable scalability, and free up processing power on the main operational system). Usually, an extensive cost–benefit analysis is undertaken to help determine whether an integration project is worth the effort.

See also
Artificial intelligence systems integration
Cloud-based integration
Configuration design
Continuous integration
Integration Competency Center
Integration platform
Interoperability
Modular design
Multidisciplinary approach
System of record
Systems integrator
System design
System in package and system on a chip

References

External links
CSIA (Control System Integrators Association)

Systems analysis Systems engineering Interoperability
System integration
[ "Engineering" ]
1,243
[ "Systems engineering", "Telecommunications engineering", "System integration", "Interoperability" ]
6,909,485
https://en.wikipedia.org/wiki/Full%20summer%20pool
A full summer pool or full pool is the water level of a reservoir at normal operating conditions.

Water levels
During droughts or water shortages, the water level can drop below full summer pool. Additionally, water levels may be lowered below full summer pool during the winter season to accommodate snowmelt or seasonally heavy rains. During periods of heavy rain, the water level in the reservoir may rise above full summer pool to prevent flooding downstream.

See also
Cistern
Hydraulic engineering

References

Reservoirs Hydraulic engineering
Full summer pool
[ "Physics", "Engineering", "Environmental_science" ]
99
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
6,909,493
https://en.wikipedia.org/wiki/REAPER
REAPER (Rapid Environment for Audio Production, Engineering, and Recording) is a digital audio workstation, MIDI sequencer and video editing software application created by Cockos. The current version is available for Microsoft Windows (XP and newer), macOS (10.5 and newer), and Linux. REAPER acts as a host to most industry-standard plug-in formats (such as VST and AU) and can import all commonly used media formats, including video. REAPER and its included plug-ins are available in 32-bit and 64-bit formats.

History
REAPER development is led by Justin Frankel, who also created Winamp and the Gnutella peer-to-peer file sharing network. A preview was released in 2005, and the first official shareware release came in August 2006 with a download size of only 2 MB and a "huge feature set, incredibly low price and surface simplicity". Version 2.0 (October 2007) included a more sophisticated user interface, an extended mixer, and the ability to save and load screen layouts. This update also added the Zplane Elastique 2 algorithms for enhanced time-stretching and pitch shifting, supported Windows and Mac OS, and remained compact enough to run off a USB memory stick. Version 3 (2009) added nested tracks, plugin controls in the mixer, VCA grouping and enhanced automation and MIDI features. Version 4 (2011) continued Cockos' reputation of "listening and involving their user base with frequent updates, beta versions and forum discussion" while adding features such as automatable pitch shift envelopes, multichannel setups such as quad, 5.1, 7.1 and 9.1, improved project management and window arrangement customization. Version 5 (2015) added video editing capability, automation of individual effect parameters, enhanced scripting and VST3 support. Version 6 (2019) introduced Retina, HiDPI and Metal display support for higher resolution and faster screen redraw, FX plugin embedding for faster workflow, MIDI CC envelope automation, a graphical patchbay and performance improvements for projects with 200+ tracks. Version 7 (2023) added support for track lanes, swipe comping, up to 128 channels of audio per track and 128 buses for MIDI routing, with unlimited audio tracks.

Licensing
REAPER offers a fully functional evaluation period of 60 days. Following this period, users may purchase one of two available licenses: commercial or discounted. Both licenses have identical features, but the discounted license is intended for private use, educational institutions, and small businesses. Purchased licenses include all updates to the current version and the next version of the software. For example, a license purchased for any release of version 7 includes all updates to version 7, as well as version 8 and all of its updates. Each license covers all configurations and allows installation on multiple computers, as long as it is only used on one computer at a time.

Customization
REAPER offers a comprehensive range of tools for multi-track recording and editing, MIDI recording and editing, internal non-realtime downmixing, and track-by-track effects looping. The routing concept eliminates the necessity for dedicated bus, aux, and MIDI tracks, allowing each track to accommodate both audio and MIDI data. Multi-track editing is facilitated through object grouping, analogous to the approach employed in Samplitude. Both individual elements and complete tracks can be grouped.
Macro customization options allow users to combine complex function sequences into a macro by dragging and dropping individual commands, and to assign the result to the user interface, a keyboard key, a mouse button or a MIDI/OSC command, according to the user's specifications. Furthermore, REAPER offers an extension API that facilitates deep integration of third-party software within the REAPER environment.

The graphical user interface (GUI) of REAPER can be modified according to the user's preferences through the use of customizable themes. These themes can be created by the users themselves, allowing for a high degree of flexibility in adapting the software to their specific needs. Additionally, the default theme from each previous version of REAPER is included, providing a starting point for users who may not have the time or inclination to create their own themes.

ReaScript allows users to customize REAPER by editing, running, and debugging scripts (a minimal example follows at the end of this section). This feature supports the creation of personalized themes, the extension of REAPER's functionality, and the development of advanced macros and comprehensive extensions. Scripts can be written in EEL2 (JSFX/Jesusonic script), Lua and Python.

ReaPack offers a comprehensive solution for managing and installing extensions sourced from a variety of repositories. The SWS/S&M extension (founded by Standing Water Studios' Tim Payne) is a widely used open-source extension to REAPER that offers workflow enhancements such as snapshots, marker actions and advanced tempo/groove manipulation capabilities. ReaClassical offers a fully open-source environment for classical music editing, featuring source-destination editing, multitrack track-group editing, and a two-lane crossfade editor.

Additionally, REAPER supports multiple languages, with downloadable language packs available. Both users and developers can create their own language packs for REAPER.
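As a taste of the ReaScript API mentioned above, here is a minimal Python sketch. It assumes REAPER's standard Python binding, in which the API is exposed to the script as RPR_-prefixed functions, and it must be run from REAPER's action list rather than as a standalone script.

```python
# Minimal ReaScript sketch (Python): add a track and report the track count.
# REAPER injects the RPR_-prefixed API functions into the script's namespace.
RPR_InsertTrackAtIndex(0, True)   # new track at index 0, with default settings
count = RPR_CountTracks(0)        # 0 refers to the active project
RPR_UpdateArrange()               # redraw the arrange view
RPR_ShowConsoleMsg("project now has %d tracks\n" % count)
```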
Included software and plug-ins
REAPER comes with a variety of commonly used audio production effects. They include tools such as ReaEQ (parametric equalizer), ReaVerb (reverb), ReaGate (audio gate), ReaDelay (delay), ReaPitch (pitch shifting), ReaComp (compression) and ReaTune (automatic tuning of vocals or other audio). The included plug-ins are also accessible as a standalone download for users of other DAWs as the "ReaPlugs VST FX Suite". Also included are hundreds of JSFX plug-ins ranging from standard effects to specific applications for MIDI and audio. JSFX scripts are editable text files which, when loaded into REAPER (exactly like a VST or other plug-in), become full-featured plugins ranging from simple audio effects (e.g. delay, distortion, compression) to instruments (synths, samplers) and other special-purpose tools (drum triggering and surround panning). REAPER includes the instruments ReaSynth, ReaSynDr and ReaSamplomatic 5000. ReaSynth is a basic synth with wave shape, ADSR envelope and portamento. ReaSynDr has four drum samples: a kick, snare, blip and tick. ReaSamplomatic 5000 is a sampler. REAPER includes no third-party software, but is fully compatible with all versions of the VST standard (currently VST2 and VST3). It can also run AU plugins (on macOS), CLAP plug-ins, DX plugins (on Windows) and LV2 plugins, and thus works with the vast majority of free and commercial plug-ins. REAPER x64 can also run 32-bit plug-ins alongside 64-bit processes. As of version 5.97, REAPER supports ARA 2 plugins.

Video editing
REAPER allows video, audio, MIDI and still images to be freely combined on any track. It offers the ability to cut and trim video files and edit or replace their audio. It supports common video effects such as fades, wipes, cross-fades, opacity, motion detection and text titles. Video can be viewed in a separate window while working.

Control surface support and remote control
REAPER has built-in support for:
BCF2000 – Behringer's motorized faders control surface, USB/MIDI
TranzPort – Frontier Design Group's wireless transport control
AlphaTrack – Frontier Design Group's AlphaTrack control surface
FaderPort – Presonus' FaderPort control surface
Baby HUI – Mackie's Baby HUI control surface
MCU – Mackie's "Mackie Control Universal" control surface
REAPER's built-in web control allows control of the software from any other device on the same network, such as a tablet, smartphone or another computer. REAPER also supports the Open Sound Control (OSC) standard.

Timeline of Reaper versions
First public release – December 23, 2005 as freeware
1.0 – August 23, 2006 as shareware
2.0 – October 10, 2007
2.43 – July 30, 2008: Beta Mac OS X and Windows x64 support
2.56 – March 2, 2009: Finalized Mac OS X and Windows x64 ports
3.0 – May 22, 2009
4.0 – August 3, 2011: Work on Linux support began
5.0 – August 12, 2015: Beta-quality Linux support; support for VST3 plugins
5.20 – May 17, 2016: MIDI notation editor
5.93 – July 17, 2018: First public Linux builds released
6.0 – December 3, 2019
6.71 – November 28, 2022: Support for CLAP plugins
7.0 – October 16, 2023

Reception
REAPER has been praised for its affordable price tiers, features, versatility and flexibility.

See also
Comparison of digital audio editors
List of digital audio workstation software
List of music software
Cockos

References

External links
REAPER home page
REAPER en español (unofficial website, tutorials & tips)

Linux Audio editing software for Linux Digital audio editors for Linux Digital audio workstation software Linux software MacOS audio editors
REAPER
[ "Engineering" ]
1,923
[ "Audio engineering", "Audio software" ]
6,909,686
https://en.wikipedia.org/wiki/Donald%20Sadoway
Donald Robert Sadoway (born 7 March 1950) is professor emeritus of materials chemistry at the Massachusetts Institute of Technology. He is a noted expert on batteries and has done significant research on how to improve the performance and longevity of portable power sources. In parallel, he is an expert on the extraction of metals from their ores and the inventor of molten oxide electrolysis, which has the potential to produce crude steel without the use of carbon reductant, thereby totally eliminating greenhouse gas emissions.

Background
Sadoway was born in Toronto, Ontario, Canada. He did both his undergraduate and graduate studies at the University of Toronto, receiving his PhD in 1977. There he focused his studies on chemical metallurgy. He also served on the National Executive of the Ukrainian Canadian Students' Union (SUSK) from 1972 to 1974. In 1977, he received a NATO postdoctoral fellowship from the National Research Council of Canada and came to MIT to conduct his postdoctoral research under Julian Szekely. Sadoway joined the MIT faculty in 1978. On 19 June 2013, Sadoway was awarded an honorary Doctorate of Engineering by the University of Toronto in recognition of his contributions to sustainable energy and sustainable metal production as well as to higher education, both in curriculum and in teaching style. In 2014, Sadoway received an honorary doctorate from NTNU, the Norwegian University of Science and Technology.

Research
As a researcher, Sadoway has focused on environmentally friendly ways to extract metals from their ores, as well as on producing more efficient batteries. His research has often been driven by the desire to reduce greenhouse gas emissions while improving quality and lowering costs. He is the co-inventor of a solid polymer electrolyte. This material, used in his "sLimcell", has the capability of allowing batteries to offer twice as much power per kilogram as is possible in current lithium ion batteries. In August 2006, a team that he led demonstrated the feasibility of extracting iron from its ore through molten oxide electrolysis. When powered exclusively by renewable electricity, this technique has the potential to eliminate the carbon dioxide emissions that are generated through traditional methods. In 2009, Sadoway disclosed the liquid metal battery, comprising liquid layers of magnesium and antimony separated by a layer of molten salt, which could be used for stationary energy storage. Research on this concept was funded by ARPA-E and the French energy company Total. Experimental data showed a 69% DC-to-DC storage efficiency with good storage capacity and relatively low leakage current (self-discharge). In 2010, with funding from Bill Gates and Total, Sadoway and two others, David Bradwell and Luis Ortiz, co-founded a company called the Liquid Metal Battery Corporation (later, Ambri) in order to scale up and commercialize the technology.

Teaching
For 16 years Sadoway taught 3.091 Introduction to Solid State Chemistry at MIT, one of the largest classes at MIT. Sadoway's animated teaching style was popular with students, and freshman enrollment in the course steadily increased through 2010. In the fall of 2007, the number of students registering for 3.091 reached 570, over half the freshman class. The largest lecture hall available on campus seats 566 students.
Sadoway much preferred teaching in one of the smaller lecture halls, seating only 450; as such, the institute had to take the unprecedented step of streaming digital video of the lecture into an overflow room to accommodate all the students interested in taking the course. In contrast, most classes at MIT are relatively small, with approximately 60% of classes at MIT having fewer than 20 students. The popularity of this course has reached outside the MIT campus as a result of the MIT OpenCourseWare initiative. This is seen in a comment by Bill Gates, who told the Seattle Post-Intelligencer: "Everybody should watch chemistry lectures -- they're far better than you think. Don Sadoway, MIT -- best chemistry lessons anywhere. Unbelievable". Sadoway's lectures often included the history of science, especially with respect to the Nobel Prize. Sadoway gave out "library assignments" in which he asked students to research Nobel Prize–winning papers. He began his lectures by playing music that had some connection with the lecture's material: for example, for the lecture on hydrogen bonding he played Handel's Water Music, and for one of the lectures on polymers he played Aretha Franklin's "Chain of Fools". He ended his lectures with five minutes on the topic of "chemistry and the world around us". Examples include automotive exhaust catalytic converters (technology), forensic examination of paintings (chemistry in the fine arts), the mistreatment of Rosalind Franklin in the quest to discover the structure of DNA (intellectual dishonesty), the metallurgical failure that sank the Titanic (greed and incompetence), and the clarification of champagne (viticulture).

Media recognition
On 29 February 2012, Sadoway gave a TED talk on his invention of the liquid metal battery for grid-scale storage. The talk is as much about the inventive process as it is about the technology. Sadoway was named one of Time magazine's 100 Most Influential People in the World in 2012 for accomplishments in energy storage as well as his approach to mentoring students (hire the novice instead of the expert). On 22 October 2012, Sadoway appeared as a guest on The Colbert Report to discuss his liquid metal battery technology and his view that electrochemistry is the key to world peace (batteries usher in the electric age, reducing the dependence on petroleum and dropping its price, thereby destabilizing dictatorships). Sadoway appeared in "MIT Gangnam Style".

See also
John F. Elliott – MIT has a chaired professorship named after Elliott. Since 1999, Sadoway has occupied that chair.

References

External links
Donald Sadoway resume
Introduction to Solid State Chemistry: Course description, from OCW.Mit.edu
Don Sadoway Playlist Appearance on WMBR's Dinnertime Sampler (radio show) 2 October 2002.
"Innovation in Energy Storage: What I Learned in 3.091 was All I Needed to Know", lecture by Donald R. Sadoway, 5 June 2010.
"The missing link to renewable energy" (TED2012)

1950 births Living people American materials scientists MIT School of Engineering faculty Canadian emigrants to the United States Scientists from Toronto University of Toronto alumni Fellows of the Minerals, Metals & Materials Society Canadian materials scientists Solid state chemists
Donald Sadoway
[ "Chemistry" ]
1,329
[ "Solid state chemists" ]
6,909,733
https://en.wikipedia.org/wiki/Glucose%20cycle
The glucose cycle (also known as the hepatic futile cycle) occurs primarily in the liver and is the dynamic balance between glucose and glucose 6-phosphate. It is important for maintaining a constant concentration of glucose in the blood stream.

Function
The glucose cycle is required for one of the liver's functions: the homeostasis of glucose in the blood stream. When the blood glucose level is too high, glucose can be stored in the liver as glycogen. When the level is too low, the glycogen can be catabolised and glucose may re-enter the blood stream. The catabolic process occurs at the nonreducing end of glycogen. An inorganic phosphate breaks the bond between C1 of a glucose ring and the oxygen that connects it to the next ring (phosphorolysis), splitting off one glucose unit. Glycogen with n glucose units is thus converted into glucose 1-phosphate (G-1-P, in which a phosphate group attaches to C1 where the oxygen used to be) and glycogen with n−1 glucose units, by the enzyme glycogen phosphorylase. G-1-P is then converted into G-6-P by the enzyme phosphoglucomutase. Finally, a water molecule hydrolyses G-6-P to glucose, catalysed by the enzyme glucose-6-phosphatase; the three steps are summarized in the scheme at the end of this entry.

Cell specificity
When glucose enters a cell it is rapidly converted to glucose 6-phosphate by hexokinase or glucokinase. The glucose cycle can occur in liver cells due to the liver-specific enzyme glucose-6-phosphatase, which catalyses the dephosphorylation of glucose 6-phosphate back to glucose. Glucose 6-phosphate is the product of glycogenolysis or gluconeogenesis, where the goal is to increase free glucose in the blood because the body is in a catabolic state. Other cells, such as muscle and brain cells, do not contain glucose 6-phosphatase. As a result, any glucose 6-phosphate produced in those cells is committed to cellular metabolic pathways, primarily the pentose phosphate pathway or glycolysis.

Regulation of glucose cycle
Flux through the glucose cycle is regulated by several hormones, including insulin and glucagon, as well as by allosteric regulation of both hexokinase and glucose 6-phosphatase.

Diseases associated with glucose cycle
A deficiency in glucose 6-phosphatase that disrupts the liver glucose cycle can lead to von Gierke's disease.

References

Metabolic pathways
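In summary, the three catabolic steps described under Function above are:

$$\text{glycogen}_{(n)} + \mathrm{P_i} \xrightarrow{\text{glycogen phosphorylase}} \text{glycogen}_{(n-1)} + \text{G-1-P}$$
$$\text{G-1-P} \xrightarrow{\text{phosphoglucomutase}} \text{G-6-P}$$
$$\text{G-6-P} + \mathrm{H_2O} \xrightarrow{\text{glucose-6-phosphatase}} \text{glucose} + \mathrm{P_i}$$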
Glucose cycle
[ "Chemistry" ]
536
[ "Metabolic pathways", "Metabolism" ]
6,910,578
https://en.wikipedia.org/wiki/Cytoplasmic%20male%20sterility
Cytoplasmic male sterility is total or partial male sterility in hermaphrodite organisms, as the result of specific nuclear and mitochondrial interactions. Male sterility is the failure to produce functional anthers, pollen, or male gametes. Such male sterility in hermaphrodite populations leads to gynodioecious populations (populations with coexisting fully functioning hermaphrodites and male-sterile hermaphrodites). Cytoplasmic male sterility, as the name indicates, is under extranuclear genetic control (under control of the mitochondrial or plastid genomes). It shows non-Mendelian inheritance, with male sterility inherited maternally. In general, there are two types of cytoplasm: N (normal) and aberrant S (sterile) cytoplasms. These types exhibit reciprocal differences.

History
Joseph Gottlieb Kölreuter was the first to document male sterility in plants. In the 18th century, he reported on anther abortion within species and specific hybrids. Cytoplasmic male sterility (CMS) is mostly found in angiosperms and has been identified in more than 140 angiosperm species. CMS has so far also been identified in one animal species, Physa acuta, a freshwater snail. There is strong evidence that gynodioecy and CMS are a transitional step between hermaphroditism and separated sexes. Male sterility is more prevalent than female sterility. This could be because the male sporophyte and gametophyte are less protected from the environment than the ovule and embryo sac. Male-sterile plants can set seed and propagate; female-sterile plants cannot develop seeds and will not propagate. Manifestation of male sterility in CMS may be controlled either entirely by cytoplasmic factors or by interactions between cytoplasmic and nuclear factors. Male sterility can arise spontaneously via mutations in nuclear genes, in cytoplasmic genes, or in both (cytoplasmic–genetic male sterility). In the cytoplasmic case, the trigger for CMS is in the extranuclear genome (mitochondria or chloroplast), which is only maternally inherited. Natural selection on cytoplasmic genes could also lead to low pollen production or male sterility. Male sterility is easy to detect, because male-fertile plants produce a large number of pollen grains, which can be assayed through staining techniques (carmine, lactophenol or iodine).

Resource relocation
CMS is one case of male sterility, but the condition can also originate from nuclear genes. In the case of nuclear male sterility (when male sterility is caused by a nuclear mutation), the transmission of the male-sterility allele is cut in half, since the entire male reproductive pathway is cancelled. CMS differs from the latter case because most cytoplasmic genetic elements are transmitted only maternally. This means that, for a cytoplasmic genetic element, causing male sterility does not affect its transmission rate, since it is not transmitted via the male reproductive pathway in any case. Inactivation of the male reproductive pathway (sperm production, production of male reproductive organs, etc.) can lead to resource relocation to the female reproductive pathway, increasing the female reproductive capabilities (female fitness); this phenomenon is referred to as female advantage (FA). The female advantage of many gynodioecious species has been quantified (as the ratio between male-sterile plants' female fitness and hermaphrodites' female fitness) and mostly falls between 1 and 2.
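The thresholds stated in the next paragraph follow from a back-of-the-envelope transmission argument (a sketch, under the simplifying assumption that a hermaphrodite transmits a rare nuclear allele about equally through ovules and pollen). Write $f$ for a hermaphrodite's transmission through ovules and $p \approx f$ for its transmission through pollen; a male-sterile plant transmits only through ovules, at rate $\mathrm{FA}\cdot f$:

$$\text{nuclear allele:}\quad \mathrm{FA}\cdot f > f + p = 2f \iff \mathrm{FA} > 2, \qquad \text{cytoplasmic element:}\quad \mathrm{FA}\cdot f > f \iff \mathrm{FA} > 1,$$

since a cytoplasmic element is maternally inherited and therefore never gains from the pollen pathway in either morph.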
In the case of nuclear male sterility, a female advantage of at least 2 is required to make the mutation evolutionarily neutral (FA = 2) or advantageous (FA > 2), since half of the transmission is cut off by the male-sterility allele. Cytoplasmic male sterility requires no female advantage to be evolutionarily neutral (FA = 1), and only a small female advantage to be evolutionarily advantageous (FA > 1). As far as is known, CMS is much more common than nuclear male sterility: a study of 49 gynodioecious plant species found 17 species (35%) exhibiting CMS and only 7 (14%) exhibiting nuclear male sterility (the remaining species have an unknown determinism of male sterility).

Genetic sterility
While CMS is controlled by an extranuclear genome, nuclear genes may have the capability to restore fertility. When nuclear restoration-of-fertility genes are available for a CMS system in a crop, it is called cytoplasmic–genetic male sterility; the sterility is manifested by the influence of both nuclear (with Mendelian inheritance) and cytoplasmic (maternally inherited) genes. There are also restorer-of-fertility (Rf) genes that are distinct from genetic male sterility genes. The Rf genes have no expression of their own unless the sterile cytoplasm is present; they are required to restore fertility in the S cytoplasm that causes sterility. Thus plants with N cytoplasm are fertile, plants with S cytoplasm and an Rf- genotype are fertile, and plants with S cytoplasm and genotype rfrf are male sterile. Another feature of these systems is that Rf mutations (i.e., mutations to rf, giving no fertility restoration) are frequent, so that N cytoplasm with Rfrf is best for stable fertility. Cytoplasmic–genetic male sterility systems are widely exploited in crop plants for hybrid breeding, due to the convenience of controlling sterility expression by manipulating the gene–cytoplasm combinations in any selected genotype. Incorporating these systems for male sterility avoids the need for emasculation in cross-pollinated species, thus encouraging cross-breeding that produces only hybrid seed under natural conditions.

In hybrid breeding
Hybrid production requires a female parent from which no viable male gametes are introduced. This selective exclusion of viable male gametes can be accomplished via different paths. In one path, emasculation is performed to prevent a plant from producing pollen, so that it can serve only as a female parent. Another simple way to establish a female line for hybrid seed production is to identify or create a line that is unable to produce viable pollen. Since a male-sterile line cannot self-pollinate, seed formation is dependent upon pollen from another, male line. Cytoplasmic male sterility is also used in hybrid seed production: in this case, male sterility is maternally transmitted and all progeny will be male sterile. Such CMS lines must be maintained by repeated crossing to a sister line (known as the maintainer line) that is genetically identical except that it possesses normal cytoplasm and is therefore male-fertile. In cytoplasmic–genetic male sterility, restoration of fertility is done using restorer lines carrying nuclear Rf genes; the male-sterile line is maintained by crossing with a maintainer line carrying the same nuclear genome but with normal fertile cytoplasm. For crops such as onions or carrots, where the commodity harvested from the F1 generation is vegetative growth, male sterility is not a problem.

Maize breeding
Cytoplasmic male sterility is an important part of hybrid maize production.
The first commercial cytoplasmic male-sterile line, discovered in Texas, is known as CMS-T. The use of CMS-T, starting in the 1950s, eliminated the need for detasseling. In the early 1970s, plants containing CMS-T genetics were susceptible to southern corn leaf blight and suffered widespread loss of yield; since then, CMS types C and S have been used instead. Unfortunately, these lines are prone to environmentally induced fertility restoration and must be carefully monitored in the field. Environmentally induced restoration, in contrast to genetic restoration, occurs when certain environmental stimuli signal the plant to bypass sterility restrictions and produce pollen anyway. Genome sequencing of the mitochondrial genomes of crop plants has facilitated the identification of promising candidates for CMS-related mitochondrial rearrangements. The systematic sequencing of new plant species in recent years has also uncovered the existence of several novel nuclear restoration-of-fertility (RF) genes and their encoded proteins. A unified nomenclature for the RF genes defines protein families across all plant species and facilitates comparative functional genomics. This nomenclature accommodates functional RF genes and pseudogenes, and offers the flexibility needed to incorporate additional RFs as they become available in the future.

References

External links
Biological approaches to preventing gene flow - Co-extra research project on coexistence and traceability of GM and non-GM supply chains

Plant reproduction
Cytoplasmic male sterility
[ "Biology" ]
1,838
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
6,910,700
https://en.wikipedia.org/wiki/King%20road%20drag
The King road drag (also known as the Missouri road drag and the split-log road drag) was a simple form of road grader used for grading dirt roads. It revolutionized the maintenance of dirt roads in the early 1900s. It was invented by David Ward King, who went by "D. Ward King" and who was a farmer in Holt Township, near Maitland, Missouri. It started out as two parallel split logs, cut sides facing the front, held three feet apart by rigid separators and pulled by a team of two horses. Variations of the two-plank drag design, now pulled by trucks or tractors, are still used today to smooth the dirt infields of baseball diamonds. In this simple design, the first log would remove clods and the second log would smooth the road. The logs were staggered so that dirt would be pushed to the center to create a crown, so that water would run off. This very simple design replaced the old practice of dragging a road with a single log, which left the surface unrepaired and rut-filled. It also made it possible for farmers to improve roads near their homes without having to wait for government graders. D. Ward King of Maitland, Missouri requested a patent for the process in 1907 and received Patent 884,497 in 1908. He widely publicized the process in U.S. Department of Agriculture Farmers' Bulletin No. 321 in 1908, under the title The Use of the Split-Log Drag on Earth Roads. An important component of the grading process was that it had to occur when the road was wet. This invention was the horse-drawn forerunner of the modern-day road grader. It was a sensation in its day: states passed laws requiring its use. The design was so simple that King did not enforce his patent rights. However, he did tour the country explaining how to use it. He also wrote articles, such as one that appeared in the May 7, 1910 issue of the Saturday Evening Post entitled "Good Roads Without Money." King would further enhance his invention with his patent 1,102,671 in 1914, which included four bars and two triangular scrapers. Before the King road drag, dirt roads turned into a quagmire when they were wet, especially in the winter. The widespread use of the King road drag came along during the Good Roads Movement, driven by bicyclists and later by automobile drivers. Automobile drivers benefitted because macadam roads were rapidly destabilized by cars, which sucked the cementing dust out of smooth macadam surfaces. Solid roads meant people could use their automobiles on the roads between cities. Solid rural roads also made possible reliable rural mail delivery, which did much to promote commerce in the United States between city-based businesses and the rural population. For instance, they allowed Sears, Roebuck to start sending out its catalogs to small towns and farms and thereby vastly increase the size of its customer base.

References

Road construction
King road drag
[ "Engineering" ]
596
[ "Construction", "Road construction" ]
6,910,727
https://en.wikipedia.org/wiki/Programmable%20thermostat
A programmable thermostat is a thermostat which is designed to adjust the temperature according to a series of programmed settings that take effect at different times of the day. Programmable thermostats are also known as setback thermostats or clock thermostats.

Benefits
Heating and cooling losses from a building (or any other container) become greater as the difference between inside and outside temperature increases. A programmable thermostat allows these losses to be reduced by permitting a smaller temperature difference at times when the reduced amount of heating or cooling would not be objectionable. For example, during the cooling season, a programmable thermostat used in a home may be set to allow the temperature in the house to rise during the workday when no one is at home, then to turn on the air conditioning before the occupants arrive, allowing the house to be cool on their arrival while still having saved air conditioning energy during the peak outdoor temperatures. The reduced cooling required during the day also decreases the demands placed upon the electrical supply grid. Conversely, during the heating season, the programmable thermostat may be set to allow the temperature in the house to drop when the house is unoccupied during the day, and also at night after all occupants have gone to bed, re-heating the house before the occupants arrive home in the evening or wake up in the morning. Since (as a matter of sleep hygiene) people sleep better when the bedroom is cool, and since the temperature differential between the interior and exterior of a building is greatest on a cold winter night, night setback also reduces energy loss. Similar scenarios are available in commercial buildings, with due consideration of the building's occupancy patterns. According to Consumer Reports magazine, programmable thermostats can reduce energy bills by about $180 a year.

Controversy
While programmable thermostats may be able to save energy when used correctly, little or no average energy saving has been demonstrated in residential field studies. Difficulty with usability in residential environments appears to lead to a lack of persistence of energy savings in homes. According to the US EPA regarding residential programmable thermostats, "Available studies indicate no savings from programmable thermostat (PT) installation. Some studies indicate slight increased consumption." This is supported by studies by Nevius and Pigg, Cross and Judd, and others; Peffer et al. provide a recent review of the topic. In addition to potentially increased energy consumption, digital programmable thermostats have been criticised for their poor usability. Several studies have found that digital programmable thermostats are difficult for users to programme, and older people in particular can struggle to use them (see Combe et al.). It has been noted that the use of programmable thermostats is hampered by misconceptions about the setback feature, which reduces the amount of heating or cooling a building receives for a short time (e.g. at night or when it is unoccupied). The belief is that if the building is allowed to change temperature, its heating or cooling system has to "work harder" to bring it back to a comfortable temperature, counteracting or even exceeding the energy saved during reduced heating or cooling. In fact, if set up correctly, the setback and recovery feature can result in energy savings of five to fifteen percent, as the heat transfer between a structure and its environment is proportional to the temperature difference between the inside and outside of the structure.
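As a rough illustration of that proportionality (the numbers here are illustrative, not from the source), heating demand over a period scales approximately with the accumulated indoor–outdoor temperature difference (degree-hours):

$$Q \propto \int \left(T_{\text{in}} - T_{\text{out}}\right)\,dt$$

With the outdoor temperature at 5 °C, holding 21 °C all day accumulates $16 \times 24 = 384$ degree-hours, while setting back to 16 °C for 8 night hours accumulates $16 \times 16 + 11 \times 8 = 344$ degree-hours: a saving of roughly 10 percent, within the five-to-fifteen-percent range cited above.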
Construction and features

Clock thermostats
The most basic clock thermostats may implement only one program with two periods (a hotter period and a colder period), and the same program is run day after day. More sophisticated clock thermostats may allow four or more hot and cold periods to be set per day. Usually, only two distinct temperatures (a hotter temperature and a colder temperature) can be set, even if multiple periods are permitted. The hotter and colder temperatures are usually established simply by sliding two levers along an analogue temperature scale, much the same as in a conventional (non-clock) thermostat. This design, while simple to manufacture and relatively easy to program, sacrifices comfort on weekends, since the program is repeated on each of the seven days of the week with no variation. To overcome this deficit, a push-button is sometimes provided to allow the user to explicitly switch (once) the current period from a hot period to a cold period or vice versa; the usual use of this button is to override a "set back" that takes place during the workday when the home is normally unoccupied. The clock mechanism is electrical. Two methods have commonly been used to operate it:
1. A separate, continuous source of 24 volts alternating current (24 VAC) is provided to the thermostat.
2. A rechargeable battery in the thermostat operates the clock. This battery charges when the thermostat is not calling for heat and 24 VAC is available to it, and discharges to operate the clock when the thermostat is set for heating or cooling.

Digital thermostats
Digital thermostats may implement the same functions, but most provide more versatility. For example, they commonly allow setting temperatures for two, four, or six periods each day, and rather than being limited to a single "hotter" temperature and a single "colder" temperature, digital thermostats usually allow each period to be set to a unique temperature. The periods are commonly labeled "Morning", "Day", "Evening", and "Night", although nothing constrains the time intervals involved. Digital thermostats usually allow the user to override the programmed temperature for the period, automatically resuming programmed temperatures when the next period begins. A function to "hold" (lock in) the current temperature is usually provided as well; in this case, the override temperature is maintained until the user cancels the hold or a programmed event occurs to resume the normal program. More sophisticated models allow the release of the hold to take place at a set time in the future. As with clock thermostats, basic digital thermostats may have just one cycle that is run every day of the week. More sophisticated thermostats may have a weekday schedule and a separate weekend schedule (a so-called "5-2" setting) or separate Saturday and Sunday schedules (so-called "5-1-1" settings), while other thermostats offer a separate schedule for each day of the week ("7 day" settings). The selection of which days are defined as the "weekend" is arbitrary, depending on the user's heating and cooling schedule requirements.
Often, a manufacturer will sell three similar thermostats offering each of those levels of functionality, with no obvious difference between them other than the factory programming and the price. Most digital thermostats have separate programs for heating and cooling, and may feature a digital or manual switch to turn on the furnace blower for air circulation even when the system is not heating or cooling. More sophisticated models may be programmed to run the circulating fan for a brief 5- to 10-minute period if no heating or cooling cycle has taken place during the previous hour. This is particularly useful in buildings subject to stratification, where without frequent air circulation hot air rises and separates from the cooler air that falls. Digital thermostats may also have a user-programmable air filter change reminder; this counts the accumulated run time of the heating/cooling system and reminds the user when it is time to change the filter. The feature often displays the accumulated run time either as an aggregate of both heating and cooling or as each time separately. Some digital thermostats can be programmed using a touch-tone telephone or over the Internet, such as the Nest Learning Thermostat. Digital thermostats are usually powered in one of three ways:
1. A sophisticated power circuit operates from the 24 VAC supply when the thermostat is not calling, and from the current flowing in the thermostat circuit when the thermostat is calling. A battery provides back-up during power failures.
2. A rechargeable battery operates the thermostat just as in the clock thermostat, charging when the thermostat is not calling and discharging while the thermostat is calling.
3. A non-rechargeable battery always powers the thermostat. To limit the amount of power drawn from the battery, such thermostats use an impulse relay that does not require the continuous application of power to the relay's coil. These thermostats can be used on millivolt circuits as well as conventional 24 VAC circuits. Battery life is typically one to two years.

Digital thermostats with PID controller
More expensive models have a built-in PID controller, so that the thermostat "learns", via a feedback loop, how the overall system (including the room itself) will react to its commands. Programming the morning temperature to be 21 °C at 7:00 a.m., for instance, ensures that the temperature actually is 21 °C at that time, whereas a less sophisticated programmable thermostat would simply start working toward 21 °C at 7:00 a.m. Thus a PID controller sets the time at which the system should be activated in order to reach the desired temperature at the desired time, having processed the room's temperature history by comparing the room's past temperature behaviour with its current temperature to compute an optimal start. A process-control or industrial thermostat also ensures that the temperature is very stable (for instance, by reducing the first overshoot and the fluctuation at the end of the heating cycle), so that the comfort level is increased.
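To make the PID idea concrete, here is a minimal sketch (illustrative only, not any vendor's implementation; the gains and the toy room model are arbitrary assumptions):

```python
# Minimal PID sketch: the controller acts on the error between the setpoint
# and the measured room temperature, so heater output ramps up early enough
# to reach the target instead of only starting work at the target time.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                 # accumulated past error
        derivative = (error - self.prev_error) / self.dt  # trend of the error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=1.0)   # arbitrary gains, 1-minute steps
temp = 16.0                                  # room starts at 16 C
for minute in range(120):
    heat = max(0.0, min(1.0, pid.update(21.0, temp)))  # clamp to 0..100% power
    temp += 0.3 * heat - 0.02 * (temp - 10.0)  # toy model: heating minus losses
print(f"temperature after two hours: {temp:.1f} C")
```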
Commercial thermostats
In commercial applications, the thermostat may not contain any clock mechanism. Instead, another means may be used to select between the "hotter" and "colder" settings. For example, if the thermostat uses pneumatic controls, a change in the air pressure supplied to the thermostat may select between the "hotter" and "colder" settings, with this air pressure determined by a central regulator. With electronic controls, a specific signal may indicate whether to operate at the "hotter" or "colder" setting.

Terminal codes and colors

See also
Smart thermostat (and Wi-Fi thermostat)
OpenTherm

References

External links
Energy Savers, Programmable thermostat (EERE).
Honeywell chronotherm III
"How A Thermostat Tends Your Furnace", 1951 article on the basics of automatic furnace thermostats, with good drawings and illustrations; page 149 shows the first clock thermostats

Temperature control
Programmable thermostat
[ "Technology" ]
2,269
[ "Home automation", "Temperature control" ]
6,910,868
https://en.wikipedia.org/wiki/BAR%20domain
In molecular biology, BAR domains are highly conserved protein dimerisation domains that occur in many proteins involved in membrane dynamics in a cell. The BAR domain is banana-shaped and binds to membranes via its concave face. It is capable of sensing membrane curvature by binding preferentially to curved membranes. BAR domains are named after three proteins in which they are found: Bin, Amphiphysin and Rvs.

BAR domains occur in combinations with other domains
Many BAR family proteins contain alternative lipid-specificity domains that help target these proteins to particular membrane compartments. Some also have SH3 domains that bind to dynamin, and thus proteins like amphiphysin and endophilin are implicated in the orchestration of vesicle scission.

N-BAR domain
Some BAR domain containing proteins have an N-terminal amphipathic helix preceding the BAR domain. This helix inserts into the membrane (as in the epsin ENTH domain) and induces curvature, which is stabilised by the BAR dimer. Amphiphysin, endophilin, BRAP1/bin2 and nadrin are examples of proteins containing such an N-BAR. The Drosophila amphiphysin N-BAR (DA-N-BAR) is an example of a protein with a preference for negatively charged surfaces.

Human proteins containing this domain
AMPH; ARHGAP17; ARHGAP44; BIN1; BIN2; BIN3; SH3BP1; SH3GL1; SH3GL2; SH3GL3; SH3GLB1; SH3GLB2.

F-BAR (EFC) domain
F-BAR domains (for FCH-BAR, or EFC for Extended FCH homology) are BAR domains that are extensions of the already established FCH domain. They are frequently found at the amino terminus of proteins. They can bind lipid membranes and can tubulate lipids in vitro and in vivo, but their exact physiological role is still under investigation. Examples of the F-BAR domain family are CIP4/FBP17/Toca-1, the syndapins (also called PACSINs) and the muniscins. Gene knock-out of syndapin I in mice revealed that this brain-enriched isoform of the syndapin family is crucial for proper size control of synaptic vesicles, and thereby indeed helps to define membrane curvature in a physiological process. Work from the lab of Britta Qualmann also demonstrated that syndapin I is crucial for proper targeting of the large GTPase dynamin to membranes.

Sorting nexins
The sorting nexin family of proteins includes several members that possess a BAR domain, including the well characterized SNX1 and SNX9.

Human proteins containing this domain
AMPH; ARHGAP17; BIN1; BIN2; BIN3; DNMBP; GMIP; RICH2; SH3BP1; SH3GL1; SH3GL2; SH3GL3; SH3GLB1; SH3GLB2;

See also
Arfaptin, includes a BAR-like domain
IMD domain, a BAR-like domain
SNX8 - protein family with a combination of BAR-like and PX domains
Epsin
Membrane curvature

External links
endocytosis.org

References

Further reading
Review.

Peripheral membrane proteins Protein domains
BAR domain
[ "Biology" ]
693
[ "Protein domains", "Protein classification" ]
6,910,971
https://en.wikipedia.org/wiki/ANTH%20domain
The ANTH domain is a membrane-binding domain that shows weak specificity for PtdIns(4,5)P2. It was found in the AP180 (homologous to CALM) endocytotic accessory protein, which has been implicated in the formation of clathrin-coated pits. The domain is involved in phosphatidylinositol 4,5-bisphosphate binding and is a universal adaptor for nucleation of clathrin coats. Its structure is a solenoid of nine helices. The PtdIns(4,5)P2-binding residues are spread over several helices at the tip of the structure. The PtdIns(4,5)P2-binding sequence is Kx9Kx(K/R)(H/Y) (expressed as a regular expression after this entry). An ANTH domain is also found in HIP1 and HIP1R, in which the PtdIns(4,5)P2-binding sequence is conserved.

Human proteins containing this domain
HIP1; HIP1R; PICALM; SNAP91;

References

Further reading

External links
- Calculated spatial position of ANTH domain of CALM protein in membrane

Protein domains Peripheral membrane proteins
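As an aside on the binding-sequence notation used above: read conventionally (with "x" standing for any residue and "x9" for nine arbitrary residues, an assumed interpretation), the Kx9Kx(K/R)(H/Y) motif translates directly into a regular expression. The demo sequence below is made up purely for illustration.

```python
# Sketch: the Kx9Kx(K/R)(H/Y) motif as a regular expression over a
# one-letter amino acid sequence ("." matches any residue).
import re

ANTH_PIP2_MOTIF = re.compile(r"K.{9}K.[KR][HY]")

demo = "MGSKAAAAAAAAAKLRHGG"   # hypothetical fragment, not a real protein
match = ANTH_PIP2_MOTIF.search(demo)
print(match.group(0) if match else "no motif found")   # -> KAAAAAAAAAKLRH
```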
ANTH domain
[ "Biology" ]
244
[ "Protein domains", "Protein classification" ]
6,911,043
https://en.wikipedia.org/wiki/ENTH%20domain
The epsin N-terminal homology (ENTH) domain is a structural domain that is found in proteins involved in endocytosis and cytoskeletal machinery.

Structure
This domain is approximately 150 amino acids in length and is always located at the N-terminus of the protein. The domain forms a compact globular structure, composed of nine alpha-helices connected by loops of varying length. The general topology is determined by three helical hairpins that are stacked consecutively with a right-hand twist. An N-terminal helix folds back, forming a deep basic groove that constitutes the binding pocket for the Ins(1,4,5)P3 ligand. The lipid ligand is coordinated by residues from the surrounding alpha-helices, and all three phosphates are multiply coordinated.

Interactions with the lipid bilayer
Proteins containing this domain have been found to bind PtdIns(4,5)P2 and Ins(1,4,5)P3, suggesting that the domain is a membrane-interacting module. The main function of proteins containing this domain appears to be to act as accessory clathrin adaptors in endocytosis; epsin, for example, is able to recruit and promote clathrin polymerisation on a lipid monolayer, but may have additional roles in signalling and actin regulation. Epsin causes a strong degree of membrane curvature and tubulation, even fragmentation of membranes with a high PtdIns(4,5)P2 content. Epsin binding to membranes facilitates their deformation by insertion of the N-terminal helix into the inner leaflet of the bilayer, pushing the head groups apart. This reduces the energy needed to curve the membrane into a vesicle, making it easier for the clathrin cage to fix and stabilise the curved membrane. This points to a pioneering role for epsin in vesicle budding, as it provides both a driving force and a link between membrane invagination and clathrin polymerisation. In particular, epsin-1 shows specificity for the membrane glycophospholipid phosphatidylinositol 4,5-bisphosphate; however, not all ENTH domains bind to this molecule. Binding causes tubulation of liposomes, and in vivo this membrane-binding function is normally coordinated with clathrin polymerisation. The N-terminal alpha-helix of this domain is hydrophobic and inserts into the membrane like a wedge, helping to drive membrane curvature.

Human proteins containing this domain
CLINT1; ENTHD1; EPN2; EPN3;

References

External links
Endocytosis.org entry on epsin

Further reading

Protein domains Peripheral membrane proteins
ENTH domain
[ "Biology" ]
555
[ "Protein domains", "Protein classification" ]
6,911,288
https://en.wikipedia.org/wiki/List%20of%20common%20astronomy%20symbols
This is a compilation of symbols commonly used in astronomy, particularly professional astronomy.

Age (stellar)
τ - age

Astrometry parameters
Rv - radial velocity
cz - apparent radial velocity
z - redshift
μ - proper motion
π - parallax
J - epoch
α - right ascension
δ - declination
λ - ecliptic longitude
β - ecliptic latitude
l - galactic longitude
b - galactic latitude

Cosmological parameters
h - dimensionless Hubble parameter
H0 - Hubble constant
Λ - cosmological constant
Ω - density parameter
ρ - density
ρc - critical density
z - redshift

Distance description
Distance description for orbital and non-orbital parameters:
d - distance
d - in km = kilometer
d - in mi = mile
d - in AU = astronomical unit
d - in ly = light-year
d - in pc = parsec
d - in kpc = kiloparsec (1000 pc)
DL - luminosity distance; an object's distance inferred from its luminosity and apparent brightness alone

Galaxy comparison
Galaxy type and spectral comparison: see galaxy morphological classification

Luminosity comparison
LS, L☉ - luminosity of the Sun
Luminosity of a certain object:
Lacc - accretion luminosity
Lbol - bolometric luminosity

Mass comparison
ME, M⊕ - mass of Earth
MJ, M♃ - mass of Jupiter
MS, M☉ - mass of the Sun
Mass of a certain object:
M● - mass of a black hole
Macc - mass of an accretion disc

Metallicity comparison
[Fe/H] - ratio of iron to hydrogen. This is not an exact ratio, but rather a logarithmic representation of the ratio of a star's iron abundance compared to that of the Sun. For a given star (★):
$$[\mathrm{Fe}/\mathrm{H}] = \log_{10}\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\star} - \log_{10}\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\odot},$$
where the values N represent the number densities of the given elements.
[M/H] - metallicity ratio
Z - metallicity
Z☉, ZS - metallicity of the Sun

Orbital parameters
Orbital parameters of a cosmic object:
α - RA, right ascension (if the Greek letter does not display, the letter á may appear instead)
δ - Dec, declination (if the Greek letter does not display, the letter ä may appear instead)
P or Porb or T - orbital period
a - semi-major axis
b - semi-minor axis
q - periapsis, the minimum distance
Q - apoapsis, the maximum distance
e - eccentricity
i - inclination
Ω - longitude of ascending node
ω - argument of periapsis
RL - Roche lobe
M - mean anomaly
Mo - mean anomaly at epoch

Radius comparison
RE, R⊕ - radius compared to Earth
RJ, R♃ - radius compared to Jupiter
RS, R☉ - radius compared to the Sun

Spectral comparison
Spectral comparison: see stellar classification
m(object) - apparent magnitude
M(object) - absolute magnitude, for galaxies and stars
H(object) - absolute magnitude, for planets and nonstellar objects

Temperature description
Teff - effective temperature, usually associated with a luminous object
Tmax - maximum temperature, usually associated with a non-luminous object
Tavg - average temperature, usually associated with a non-luminous object
Tmin - minimum temperature, usually associated with a non-luminous object
K - kelvin

See also
List of astronomy acronyms
Astronomical symbols
Stellar classification
Galaxy morphological classification
List of astronomical catalogues
Glossary of astronomy

References

Symbols Astronomy
List of common astronomy symbols
[ "Astronomy", "Mathematics" ]
686
[ "Astronomy-related lists", "Symbols", "Lists of symbols" ]
6,911,994
https://en.wikipedia.org/wiki/S1%20Core
S1 Core (codename Sirocco) is an open source hardware microprocessor design developed by Simply RISC. Based on Sun Microsystems' UltraSPARC T1, the S1 Core is licensed under the GNU General Public License, which is the license Sun chose for the OpenSPARC project. The main goal of the project is to keep the S1 Core as simple as possible to encourage developers. The major differences between T1 and S1 include: S1 Core has only one 64-bit SPARC core (supporting one to four independent threads of execution) instead of eight cores; S1 Core adds a Wishbone bridge, a reset controller and a basic interrupt controller; the S1 Core environment can be run using only free tools on a common x86 Linux machine.

See also
LEON
OpenRISC

External links
Simply RISC - S1 Core (archive.org link; as of 2018-11-05 the original URL redirects to OpenPiton)
OpenPiton (the Simply RISC site redirects here as of 2018-11-05; it is unclear whether the projects are related)
S1 Core page on OpenCores
S1 Core page on SunSource

Open microprocessors
SPARC microprocessors
S1 Core
[ "Technology" ]
260
[ "Computing stubs", "Computer hardware stubs" ]
6,912,505
https://en.wikipedia.org/wiki/VNI
VNI Software Company is a developer of various education, entertainment, office, and utility software packages. It is known for developing an encoding (VNI Encoding) and a popular input method (VNI Input) for Vietnamese on computers. VNI is often available on computer systems for typing Vietnamese, alongside the TELEX input method. The most common pairing is the use of VNI on physical keyboards and computers, whilst TELEX is more common on phones and touchscreens.

History
The VNI company is a family-owned company based in Westminster, California. It was founded in 1987 by Hồ Thành Việt to develop software that eases Vietnamese language use on computers. Among its products were the VNI Encoding and the VNI Input Method. The VNI Input Method has since grown to become one of the two most popular input methods for Vietnamese, alongside TELEX, which is more advantageous for phones and touchscreens, whilst VNI has found more use on keyboard-based computer systems.

VNI vs. Microsoft
In the 1990s, Microsoft recognized the potential of VNI's products and incorporated the VNI Input Method into Windows 95 Vietnamese Edition and MSDN, in use worldwide. Because of Microsoft's unauthorized use of these technologies, VNI took Microsoft to court over the matter. Microsoft settled the case out of court, withdrew the input method from their entire product line, and developed their own input method. Though virtually unknown, it has appeared in every Windows release since Windows 98. Starting with Windows 10 version 1903, the VNI Input Method (as "Vietnamese Number Key-based"), along with the Telex input method, are now natively supported.

Unicode
Despite the growing popularity of Unicode in computing, the VNI Encoding (see below) is still in wide use by Vietnamese speakers both in Vietnam and abroad. All professional printing facilities in the Little Saigon neighborhood of Orange County, California continue to use the VNI Encoding when processing Vietnamese text. For this reason, print jobs submitted using the VNI Character Set are compatible with local printers.

Input methods
VNI invented, popularized, and commercialized an input method and an encoding, the VNI Character Set, to assist computer users entering Vietnamese on their computers. The user can type using only ASCII characters found on standard computer keyboard layouts. Because the Vietnamese alphabet uses a complex system of diacritics for tones and other letters of the Vietnamese alphabet, a keyboard would otherwise need 133 alphanumeric keys and a Shift key to cover all possible characters.

VNI Input Method
Originally, VNI's input method utilized function keys (F1, F2, ...) to enter the tone marks, which later turned out to be problematic, as the operating system used those keys for other purposes. VNI then turned to the numerical keys along the top of the keyboard (as opposed to the numpad) for entering tone marks. This arrangement survives today, but users also have the option of customizing the keys used for tone marks. With VNI Tan Ky mode on, the user can type in diacritical marks anywhere within a word, and the marks will appear at their proper locations. For example, the word trường, which means 'school', can be typed in the following ways:
truong-7-2 → trường (most conventional way)
72truong → trường
t72ruong → trường
tr72uong → trường
tru7o72ng → trường
truo72ng → trường
truo7ng2 → trường
The first way is the conventional method, following handwriting and spelling convention, where the base letters (truong) are written first and the tone marks are then added one by one.
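To make the keystroke-to-character mapping above concrete, here is a minimal, hypothetical sketch of the conventional typing style, where digit keys entered after the base letters apply diacritics (7 adds a horn to u/o, 2 adds the grave tone). It is not VNI's actual engine: it only covers this article's "truong" example and assumes UTF-8 source and output encodings.

#include <iostream>
#include <string>

// Toy illustration of the conventional VNI typing style described above:
// base letters first, then digit keys for diacritics (7 = horn, 2 = grave).
// A hypothetical sketch, not VNI's real engine; handles only "truong72".
int main() {
    std::string word = "truong"; // base letters as typed
    std::string keys = "72";     // diacritic keys typed afterwards
    for (char k : keys) {
        if (k == '7') {
            // horn: the "uo" pair becomes "ươ" (u7 -> ư, o7 -> ơ)
            std::string::size_type p = word.find("uo");
            if (p != std::string::npos) word.replace(p, 2, "ươ");
        } else if (k == '2') {
            // grave tone on the main vowel: "ơ" becomes "ờ"
            std::string::size_type p = word.find("ơ");
            if (p != std::string::npos) word.replace(p, std::string("ơ").size(), "ờ");
        }
    }
    std::cout << word << "\n"; // prints "trường" on a UTF-8 terminal
    return 0;
}

Note that the replacements operate on raw UTF-8 byte sequences inside a std::string, which is sufficient here because the search keys are unambiguous substrings; a real input method would work on code points and handle tone placement rules generally.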
VNI Tan Ky
With the release of VNI Tan Ky 4 in the 1990s, VNI freed users from having to remember where to correctly insert tone marks within a word, because, as long as the user enters all the required characters and tone marks, the software will group them correctly. This feature is especially useful for newcomers to the language.

VNI Auto Accent
VNI Auto Accent is the company's most recent software release (2006), with the purpose of alleviating repetitive strain injury (RSI) caused by prolonged use of computer keyboards. Auto Accent helps reduce the number of keystrokes needed to type each word by automatically adding diacritical marks for the user. The user must still enter every base letter in the word.

Character encodings

VNI Encoding (Windows/Unix)
The VNI Encoding uses up to two bytes to represent one Vietnamese vowel character, with the second byte supplying additional diacritical marks. This removes the need to replace control characters with Vietnamese characters, a problematic approach found in TCVN1 (VSCII-1) and in VISCII, or to use two different fonts, as is sometimes employed for TCVN3 (VSCII-3), one containing lowercase characters and the other uppercase characters. A similar approach is taken by Windows-1258 and VSCII-2. This solution is more portable between different versions of Windows and between different platforms. However, the presence of multiple bytes in a file to represent one written character increases the file size. The increased file size can usually be offset by compressing the data into a file format such as ZIP. The VNI encoding was used extensively in the south of Vietnam, and sometimes overseas, while TCVN 5712 was dominant in the north. Points 0x00 through 0x7F follow ASCII.

VNI Encoding for Macintosh
A version intended for use on Macintosh systems, with a different arrangement (corresponding to the different arrangement between Windows-1252 and Mac OS Roman).

VNI Encoding for DOS
The VNI encoding for use on DOS does not use separate characters for diacritics, instead replacing certain ASCII punctuation characters with tone-marked uppercase letters (compare ISO 646).

VIQR and VNI-Internet Mail
The use of Vietnamese Quoted-Readable (VIQR), a convention for writing Vietnamese using ASCII characters, began during the Vietnam War, when typewriters were the main tool for word processing. VIQR was invented because the U.S. military required a way to represent Vietnamese script accurately on official documents. Due to its longstanding use, VIQR was a natural choice for computer word processing prior to the appearance of VNI, VPSKeys, VSCII, VISCII, and Unicode. It is still widely used for information exchange on computers, but is not desirable for design and layout, due to its cryptic appearance. VIQR's main issue was the difficulty of reading VIQR text, especially for inexperienced computer users. VNI created and released a free font called VNI-Internet Mail, which utilized a variant of the VIQR notation and VNI's combining character technique to give VIQR text a more natural appearance by replacing certain ASCII punctuation with combining characters. The following table compares VNI-Internet Mail to other codified VIQR or VIQR-like conventions.

See also
Telex (input method)
Vietnamese Quoted-Readable (VIQR)
VISCII
VPSKeys
VNLabs
Guide to inputting Vietnamese text at the Vietnamese Wikipedia
Vietnamese language and computers

References

External links
VNI Software Co.
VietUni Converter VNI products VNI Auto Accent VNI XP & Dai Tu Dien VNI Tan Viet VNI Tan Ky 4 VNI Dai Tu Dien Learn English by Phonic Learn English by Pictures VNI An Sao VNI-Internet Mail Software companies established in 1987 1987 establishments in California Character encoding Companies based in Orange County, California Companies based in Westminster, California Educational software Vietnamese character input
VNI
[ "Technology" ]
1,585
[ "Natural language and computing", "Character encoding" ]
6,912,901
https://en.wikipedia.org/wiki/Agaricus%20impudicus
Agaricus impudicus, also known as the tufted wood mushroom, is a mushroom of Agaricus, a genus with many edible species.

Description
As with all Agaricus species, the gills are free, their colour progresses with age from pale pink to chocolate brown, and the spores are dark brown. The stipe has a clear annulus (ring). The cap is 4–15 cm wide and appears brownish due to numerous brownish scales on a white background. The stipe is white, 6–12 cm tall and 0.8–2 cm thick, cylindrical and wider towards the bottom, or ending in a bulb. It is distinguished from similar forest-growing Agaricus mushrooms by its widening stipe and in that it does not bruise yellowish or reddish when cut, except at the attachment of stalk and cap, which may turn slightly pink. The taste is mild and earthy, and the mushroom is sometimes regarded as edible; however, other authors treat it as inedible in practice if not in theory, because it has a nauseating smell resembling rotten radish, which persists during cooking.

Habitat
Known to occur in Western and Southern Europe and New Zealand, this uncommon mushroom is found in deciduous or coniferous forest in autumn.

Taxonomy
This species is known under a number of synonyms, all of which refer to the same species:
Agaricus brunnoleus (J. Lange) Pilát
Agaricus koelerionensis (Bon) Bon 1980
Agaricus reae Bon 1981
Agaricus variegans F.H. Møller 1952
Agaricus variegatus (F.H. Møller) Pilát 1951
Psalliota impudica Rea 1932
Psalliota variegata F.H. Møller 1950
Psalliota variegata var. koelerionis Bon 1972

See also
List of Agaricus species

External links
"Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990.
Agaricus impudicus entry at Global Biodiversity Information Facility.

References

impudicus
Fungi described in 1932
Fungi of Europe
Fungi of New Zealand
Fungus species
Agaricus impudicus
[ "Biology" ]
471
[ "Fungi", "Fungus species" ]
6,913,929
https://en.wikipedia.org/wiki/Iproclozide
Iproclozide (trade names Sursum, Sinderesin) is an irreversible and selective monoamine oxidase inhibitor (MAOI) of the hydrazine chemical class that was used as an antidepressant, but has since been discontinued. It has been known to cause fulminant hepatitis and there have been at least three reported fatalities due to administration of the drug. See also Hydrazine (antidepressant) References Monoamine oxidase inhibitors Withdrawn drugs Hepatotoxins Hydrazides Phenol ethers 4-Chlorophenyl compounds Isopropylamino compounds
Iproclozide
[ "Chemistry" ]
132
[ "Drug safety", "Withdrawn drugs" ]
6,914,161
https://en.wikipedia.org/wiki/Faculty%20of%20Electrical%20Engineering%20and%20Computing%2C%20University%20of%20Zagreb
The Faculty of Electrical Engineering and Computing (Croatian: Fakultet elektrotehnike i računarstva, abbr. FER) is a faculty of the University of Zagreb. It is the largest technical faculty and the leading educational facility for research and development in the fields of electrical engineering and computing in Croatia. FER owns four buildings situated in the Zagreb neighbourhood of Martinovka, Trnje. The total area of the site is . , the Faculty employs more than 160 professors and 210 teaching and research assistants. In the academic year 2010/2011, the total number of students was about 3,800 at the undergraduate and graduate level, and about 450 in the PhD program. As of the academic year 2004/2005, when the implementation of the Bologna process started at the University of Zagreb, the faculty has two baccalaureus programmes (each lasting 3 years):
Electrical engineering and information technology
Computing
After receiving a bachelor's degree, students can take part in one of three master's programmes:
Electrical engineering and information technology, with the following profiles:
Audio Technologies and Electroacoustics
Electrical Power Engineering
Electronic and Computer Engineering
Electronics
Electric Machines, Drives and Automation
Information and communication technology, with the following profiles:
Control System and Robotics
Information and Communication Engineering
Communication and Space Technologies
Computing, with the following profiles:
Software Engineering and Information Systems
Computer Engineering
Computational Modelling in Engineering
Computer Science
Network Science
Data Science

Organisation
The Faculty comprises 12 academic departments:
Applied Physics
Applied Computing
Applied Mathematics
Fundamentals of Electrical Engineering and Measurements
Electric Machines, Drives and Automation
Energy and Power Systems
Telecommunications
Electronic Systems and Information Processing
Control and Computer Engineering in Automation
Electroacoustics
Electronics, Microelectronics, Computer and Intelligent Systems
Communication and Space Technologies

History
The Faculty of Electrical Engineering (Croatian: Elektrotehnički fakultet, abbr. ETF) was formed on 1 July 1956 when the College of Engineering of the University of Zagreb was divided into ETF and three other new faculties. The faculty existed under this name until 7 February 1995, when it was renamed to its current name. In 1956, the first curriculum was formed, offering students a programme called "Study of Electrical Engineering". The faculty was divided into two departments, one for weak current (Odjel za slabu struju) and another for strong current (Odjel za jaku struju). This was later referred to as the ETF-1 programme. The Faculty changed its curriculum in 1967, when the ETF-2 curriculum introduced a division of studies into electrical power systems, electronics, electrical machinery and automation. In 1970, the ETF-3 curriculum introduced further specializations, such as nuclear power systems and computing. There was also an ETF-4 curriculum later. In 1994 the name of the faculty changed, and the curriculum was changed from ETF-4 to FER-1. A separate study called "Study of Computing" was formed, so the faculty from then on offered two different degrees - the existing diplomirani inženjer elektrotehnike, or graduate engineer of electrical engineering, and the new diplomirani inženjer računarstva, or graduate engineer of computing. In 2004 FER-1 was transformed to FER-2, to conform to the Bologna process.
This involved, among other things, changing the length of the essential course set from four semesters to two semesters, renaming the first study programme to include the term information technology, and reworking the programme subdivisions so that each includes five specialized modules. Starting with the academic year 2018/2019, the curriculum was changed from FER-2 to FER-3, which is mandatory for new students.

Deans
Anton Dolenc (1956–1957)
Danilo Blanuša (1957–1958)
Božidar Stefanini (1958–1959)
Vatroslav Lopašić (1959–1960)
Hrvoje Požar (1960–1962)
Vladimir Matković (1962–1964)
Radenko Wolf (1964–1966)
Vladimir Muljević (1966–1968)
Hrvoje Požar (1968–1970)
Vojislav Bego (1970–1972)
Zlatko Smrkić (1972–1974)
Zvonimir Sirotić (1974–1976)
Uroš Peruško (1976–1978)
Ante Šantić (1978–1980)
Berislav Jurković (1980–1982)
Milan Šodan (1982–1984)
Nedžat Pašalić (1984–1986)
Leo Budin (1986–1988)
Vladimir Naglić (1988–1990)
Ivan Ilić (1990–1992)
Danilo Feretić (1992–1994)
Stanko Tonković (1994–1996)
Stanko Tonković (1996–1998)
Slavko Krajcar (1998–2000)
Slavko Krajcar (2000–2002)
Mladen Kos (2002–2004)
Mladen Kos (2004–2006)
Vedran Mornar (2006–2010)
Nedjeljko Perić (2010–2014)
Mislav Grgić (2014–2018)
Gordan Gledec (2018–2022)
Vedran Bilas (2022–present)

Notable alumni
Ante Marković, last prime minister of SFRJ
Branko Jeren, Croatian minister of Science and Technology 1993–1995
Damir Boras, Rector of the University of Zagreb since 2014
Vedran Mornar, Croatian minister of Science, Education and Sport 2013–2015

Notable professors
Danilo Blanuša, a mathematician, discoverer of the second and third known snarks (dean of the faculty 1957–1958)

KSET
The Electrical Engineering Student Club (Croatian: Klub studenata elektrotehnike, abbr. KSET) is a student association founded by students of the Croatian Faculty of Electrical Engineering, and plays an active role in the social life of the University of Zagreb and Zagreb in general. The club is part of a larger building complex of its native faculty.

References

External links
Homepage
Homepage
History and organization of ETF

Electrical Engineering
Engineering universities and colleges in Croatia
Computer science departments
Science and technology in Croatia
Universities and colleges established in 1956
1950s establishments in Croatia
1956 establishments in Yugoslavia
University and college buildings completed in 1956
Modernist architecture in Croatia
Electrical engineering departments
Electrical and computer engineering departments
Faculty of Electrical Engineering and Computing, University of Zagreb
[ "Engineering" ]
1,277
[ "Electrical and computer engineering departments", "Engineering universities and colleges", "Electrical engineering departments", "Electrical and computer engineering", "Electrical engineering organizations" ]
6,914,199
https://en.wikipedia.org/wiki/Hinode%20%28satellite%29
Hinode (Sunrise), formerly Solar-B, is a Japan Aerospace Exploration Agency solar mission with United States and United Kingdom collaboration. It is the follow-up to the Yohkoh (Solar-A) mission, and it was launched on the final flight of the M-V rocket from Uchinoura Space Center, Japan on 22 September 2006 at 21:36 UTC (23 September, 06:36 JST). The initial orbit had a perigee height of 280 km, an apogee height of 686 km, and an inclination of 98.3 degrees. The satellite then maneuvered to a quasi-circular Sun-synchronous orbit over the day/night terminator, which allows near-continuous observation of the Sun. On 28 October 2006, the probe's instruments captured their first images. Data from Hinode are downloaded to the Norwegian terrestrial Svalsat station, operated by Kongsberg a few kilometres west of Longyearbyen, Svalbard. From there, data are transmitted by Telenor through a fibre-optic network to mainland Norway at Harstad, and on to data users in North America, Europe and Japan.

Mission
Hinode was planned as a three-year mission to explore the magnetic fields of the Sun. It consists of a coordinated set of optical, extreme ultraviolet (EUV), and X-ray instruments to investigate the interaction between the Sun's magnetic field and its corona. The result will be an improved understanding of the mechanisms that power the solar atmosphere and drive solar eruptions. The EUV imaging spectrometer (EIS) was built by a consortium led by the Mullard Space Science Laboratory (MSSL) in the UK. NASA, the space agency of the United States, was involved with three science instrument components: the Focal Plane Package (FPP), the X-Ray Telescope (XRT), and the Extreme Ultraviolet Imaging Spectrometer (EIS), and shares operations support for science planning and instrument command generation. , the operation is planned to continue until 2033.

Instruments
Hinode carries three main instruments to study the Sun.

SOT (Solar Optical Telescope)
A 0.5 meter Gregorian optical telescope with an angular resolution of about 0.2 arcsecond over a field of view of about 400 × 400 arcsec. At the SOT focal plane, the Focal Plane Package (FPP) built by the Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto, California consists of three optical instruments: the Broadband Filter Imager (BFI), which produces images of the solar photosphere and chromosphere in six wide-band interference filters; the Narrowband Filter Imager (NFI), which is a tunable Lyot-type birefringent filter capable of producing magnetogram and dopplergram images of the solar surface; and the Spectropolarimeter (SP), which produces the most sensitive vector magnetograph maps of the photosphere to date. The FPP also includes a Correlation Tracker (CT), which locks onto solar granulation to stabilize the SOT images to a fraction of an arcsecond. The spatial resolution of the SOT is a factor of 5 improvement over previous space-based solar telescopes (e.g., the MDI instrument on SOHO).

XRT (X-ray Telescope)
A modified Wolter I telescope design that uses grazing incidence optics to image the solar corona's hottest components (0.5 to 10 million K) with an angular resolution consistent with 1 arcsec pixels at the CCD. The telescope has an imaging field of view of 34 arcminutes. It is capable of capturing an image of the full Sun when pointed at the center of the solar disk.
The telescope was designed and built by the Smithsonian Astrophysical Observatory (SAO), which, with the Harvard College Observatory (HCO), forms the Harvard-Smithsonian Center for Astrophysics (CfA). The camera was developed by NAOJ and JAXA.

EIS (Extreme-Ultraviolet Imaging Spectrometer)
A normal-incidence extreme ultraviolet (EUV) spectrometer that obtains spatially resolved spectra in two wavelength bands: 17.0–21.2 and 24.6–29.2 nm. Spatial resolution is around 2 arcsec, and the field of view is up to 560 × 512 arcsec². The emission lines in the EIS wavelength bands are emitted at temperatures ranging from 50,000 K to 20 million K. EIS is used to identify the physical processes involved in heating the solar corona.

See also
Sunrise – balloon-borne solar telescope
SOLAR-C – planned follow-up to Hinode

References

External links
NASA Mission Site for Hinode
JAXA overview of mission
Mission overview QuickTime, preparation for launch QuickTime Windows Media, launch QuickTime Windows Media (in Japanese)
Solar-B Mission Profile by NASA's Solar System Exploration
Solar-B project page of National Astronomical Observatory of Japan
Solar-B project page of Lockheed Martin Solar and Astrophysics Laboratory
Solar-B project page of Mullard Space Science Laboratory
Solar-B project page of PPARC
Solar-B project page of NASA MSFC
HINODE (SOLAR-B) SOT-FPP
Education/Public Outreach at Chabot Space and Science Center

Satellites orbiting Earth
Missions to the Sun
Space telescopes
X-ray telescopes
Satellites of Japan
Solar telescopes
Spacecraft launched in 2006
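As a rough cross-check on the initial orbit quoted above (280 km × 686 km), a minimal sketch applying Kepler's third law (my own illustration, not from the mission documentation; constants are approximate) gives an orbital period of roughly 94 minutes, typical for such a low Earth orbit:

#include <cmath>
#include <cstdio>

// Estimate the orbital period of Hinode's initial 280 km x 686 km orbit
// from Kepler's third law, T = 2*pi*sqrt(a^3 / mu). Approximate constants.
int main() {
    const double pi = std::acos(-1.0);
    const double mu = 3.986004418e14;                  // Earth's GM [m^3/s^2]
    const double Re = 6371.0e3;                        // mean Earth radius [m]
    const double perigee = 280.0e3, apogee = 686.0e3;  // altitudes [m]
    double a = Re + (perigee + apogee) / 2.0;          // semi-major axis [m]
    double T = 2.0 * pi * std::sqrt(a * a * a / mu);   // period [s]
    std::printf("semi-major axis = %.0f km, period = %.1f min\n",
                a / 1e3, T / 60.0);                    // ~6854 km, ~94 min
    return 0;
}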
Hinode (satellite)
[ "Astronomy" ]
1,105
[ "Space telescopes" ]
6,914,308
https://en.wikipedia.org/wiki/Structuralism%20%28biology%29
Biological or process structuralism is a school of biological thought that objects to an exclusively Darwinian or adaptationist explanation of natural selection such as is described in the 20th century's modern synthesis. It proposes instead that evolution is guided differently, by physical forces which shape the development of an animal's body, and sometimes implies that these forces supersede selection altogether. Structuralists have proposed different mechanisms that might have guided the formation of body plans. Before Darwin, Étienne Geoffroy Saint-Hilaire argued that animals shared homologous parts, and that if one was enlarged, the others would be reduced in compensation. After Darwin, D'Arcy Thompson hinted at vitalism and offered geometric explanations in his classic 1917 book On Growth and Form. Adolf Seilacher suggested mechanical inflation for "pneu" structures in Ediacaran biota fossils such as Dickinsonia. Günter P. Wagner argued for developmental bias, structural constraints on embryonic development. Stuart Kauffman favoured self-organisation, the idea that complex structure emerges holistically and spontaneously from the dynamic interaction of all parts of an organism. Michael Denton argued for laws of form by which Platonic universals or "Types" are self-organised. Stephen J. Gould and Richard Lewontin proposed biological "spandrels", features created as a byproduct of the adaptation of nearby structures. Gerd B. Müller and Stuart A. Newman argued that the appearance in the fossil record of most of the phyla in the Cambrian explosion was "pre-Mendelian" evolution caused by physical factors. Brian Goodwin, described by Wagner as part of "a fringe movement in evolutionary biology", denies that biological complexity can be reduced to natural selection, and argues that pattern formation is driven by morphogenetic fields. Darwinian biologists have criticised structuralism, emphasising that there is plentiful evidence both that natural selection is effective and, from deep homology, that genes have been involved in shaping organisms throughout evolutionary history. They accept that some structures such as the cell membrane self-assemble, but deny the ability of self-organisation to drive large-scale evolution. History Geoffroy's law of compensation In 1830, Étienne Geoffroy Saint-Hilaire argued a structuralist case against the functionalist (teleological) position of Georges Cuvier. Geoffroy believed that homologies of structure between animals indicated that they shared an ideal pattern; these did not imply evolution but a unity of plan, a law of nature. He further believed that if one part was more developed within a structure, the other parts would necessarily be reduced in compensation, as nature always used the same materials: if more of them were used for one feature, less was available for the others. D'Arcy Thompson's morphology In his "eccentric, beautiful" 1917 book On Growth and Form, D'Arcy Wentworth Thompson revisited the old idea of "universal laws of form" to explain the observed forms of living organisms. The science writer Philip Ball states that Thompson "presents mathematical principles as a shaping agency that may supersede natural selection, showing how the structures of the living world often echo those in inorganic nature", and notes his "frustration at the 'Just So' explanations of morphology offered by Darwinians." Instead, Ball writes, Thompson elaborates on how not heredity but physical forces govern biological form. 
The philosopher of biology Michael Ruse similarly wrote that Thompson "had little time for natural selection", certainly preferring "mechanical explanations" and possibly straying into vitalism. Seilacher's pneu structures Like Thompson, the palaeontologist Adolf Seilacher emphasised fabricational constraints on form. He interpreted fossils such as Dickinsonia in the Ediacaran biota as "pneu" structures determined by mechanical inflation like a quilted air mattress, rather than having been driven by natural selection. Wagner's constraints on development In his 2014 book Homology, Genes, and Evolutionary Innovation, the evolutionary biologist Günter P. Wagner argues for "the study of novelty as distinct from adaptation." He defines novelty as occurring when some part of the body develops an individual and quasi-independent existence, in other words as a distinct and recognisable structure, which he implies might occur before natural selection begins to adapt the structure for some function. He forms a structuralist picture of evolutionary developmental biology, using empirical evidence, arguing that homology and biological novelty are key aspects requiring explanation, and that developmental bias (i.e. structural constraints on embryonic development) is a key explanation for these. Kauffman's self-organisation The mathematical biologist Stuart Kauffman suggested in 1993 that self-organization may play a role alongside natural selection in three areas of evolutionary biology, namely population dynamics, molecular evolution, and morphogenesis. With respect to molecular biology, Kauffman has been criticised for ignoring the role of energy in driving biochemical reactions in cells, which can fairly be called self-catalysing but which do not simply self-organise. Denton's 'Types' The biochemist Michael Denton has argued a structuralist case for self-organization. In a 2013 paper, he claimed that "the basic forms of the natural world—the Types—are immanent in nature, and determined by a set of special natural biological laws, the so called 'laws of form'." He asserts that these "recurring patterns and forms" are "genuine universals". Form is in this view not shaped by natural selection, but by "self-organizing properties of particular categories of matter" and by "cosmic fine-tuning of the laws of nature". Denton has been criticised by the biochemist Laurence A. Moran as anti-Darwinian and favouring creationism. Gould and Lewontin's spandrels In 1979, influenced by Seilacher among others, the paleontologist Stephen J. Gould and the population geneticist Richard Lewontin wrote what Wagner called "the most influential structuralist manifesto", "The Spandrels of San Marco and the Panglossian Paradigm". They pointed out that biological features (like architectural spandrels) did not necessarily have adaptation as their direct cause. Instead, architects couldn't help creating small triangular areas between arches and pillars, as arches need (evolve) to be curved, and pillars need to be vertical. The resulting spandrels are exaptations, consequences of other evolutionary changes. Evolution, they argued, did not select for a protruding human chin: instead, reducing the length of the tooth row left the jaw protruding. Müller and Newman's pre-Mendelian evolution Extreme structuralists like Gerd B. Müller and Stuart A. 
Newman, inheriting the viewpoint of D'Arcy Thompson, have proposed that physical laws of structure, not genetics, govern major diversifications such as the Cambrian explosion, followed later by co-opted genetic mechanisms. They argued further that there was a "pre-Mendelian" phase of the evolution of animals, involving physical forces, before genes took over. Darwinian biologists freely admit that physical factors such as surface tension can cause self-assembly, but insist that genes play a crucial role. They note for example that deep homologies between widely separated groups of organisms, such as the signalling pathways and transcription factors of choanoflagellates and metazoans, demonstrate that genes have been involved throughout evolutionary history. Goodwin's morphogenetic fields What Wagner calls "a fringe movement in evolutionary biology", the form of structuralism exemplified by Brian Goodwin, effectively denies that natural selection is important, or at least that biological complexity could be reduced to natural selection. This led to conflict with Darwinists such as Richard Dawkins. Goodwin related the old concept of a morphogenetic field to the spatial distribution of chemical signals in a developing embryo. He demonstrated with a mathematical model that a variety of patterns could be formed by choosing parameter values to set up either static geometric patterns or dynamic oscillations, implying that the signalling system involved was somehow an alternative to natural selection. Dawkins commented "He thinks he's anti-Darwinian, although he can't be, because he has no alternative explanation." Criticism While agreeing that pattern formation mechanisms such as those described by Goodwin exist, the biologists Richard Dawkins, Stephen J. Gould, Lynn Margulis, and Steve Jones have criticised Goodwin for suggesting that chemical signalling forms an alternative to natural selection. Moran, a "skeptical biochemist", comments that 'structuralism' is a "new buzzword ... guaranteed to impress the creationist crowd because nobody understands what it means but it sounds very 'sciency' and philosophical." The philosopher of science Paul E. Griffiths writes that structuralists "view this structuring of the space of biological possibility as part of the fundamental physical structure of nature. But the phenomena of phylogenetic inertia and developmental constraint do not support this interpretation. These phenomena show that the evolutionary pathways available to an organism are a function of the developmental structure of the organism." Moran summarizes: "There's nothing in science that supports the views of the structuralists. We have perfectly good explanations for why bumblebees are different than mushrooms and why all vertebrates have vertebrae and not exoskeletons. There's no evidence to support the idea that if you replay the tape of life it will come out looking anything like what we see today. You can be confident that when you visit another planet you will not find vertebrates." The evolutionary developmental biologist Lewis Held wrote that "The notion that aspects of anatomy can be explained by physical forces (like expansion cracking) was advocated ~ 100 years earlier in D'Arcy Thompson's 1917 On Growth and Form and in Theodore Cook's 1914 book The Curves of Life. Over the intervening century, various traits have been proposed to arise mechanically rather than genetically: brain convolutions, cartilage condensations, flower corrugations, tooth cusps, and fish otoliths. 
To this kooky list we can now add the crooked smile of the crocodile, or at least the cracked skin that surrounds it." See also Alternatives to Darwinism Eclipse of Darwinism Extended evolutionary synthesis Orthogenesis Notes References Non-Darwinian evolution Philosophy of biology Evolutionary biology
Structuralism (biology)
[ "Biology" ]
2,115
[ "Evolutionary biology", "Non-Darwinian evolution", "Biology theories" ]
6,915,019
https://en.wikipedia.org/wiki/Acridine%20yellow
Acridine yellow, also known as acridine yellow G, acridine yellow H107, basic yellow K, and 3,6-diamino-2,7-dimethylacridine, is a yellow dye with strong bluish-green fluorescence. It is a derivative of acridine. In histology, it is used as a fluorescent stain, and as a fluorescent probe for non-invasive measurements of cytoplasmic pH changes in whole cells. It is also used as a topical antiseptic. It is usually available as a hydrochloride salt. Acridine yellow damages DNA and is used as a mutagen in microbiology. Acridine yellow is similar to acridine orange. According to a publication by Karl Drechsler, a student of Guido Goldschmiedt at the Imperial and Royal University of Vienna, Moriz Freund discovered the substance in 1896 during experiments at the University of Prague. Drechsler was then able to produce the substance in larger quantities and subsequently also examine it more closely.

References

External links
Acridine yellow absorption and emission spectra

Acridine dyes
Staining dyes
Antiseptics
Substances discovered in the 19th century
Acridine yellow
[ "Chemistry" ]
253
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
6,915,658
https://en.wikipedia.org/wiki/C%2B%2B%20string%20handling
The C++ programming language has support for string handling, mostly implemented in its standard library. The language standard specifies several string types, some inherited from C, some designed to make use of the language's features, such as classes and RAII. The most-used of these is std::string. Since the initial versions of C++ had only the "low-level" C string handling functionality and conventions, multiple incompatible designs for string handling classes have been designed over the years and are still used instead of std::string, and C++ programmers may need to handle multiple conventions in a single application.

History
The std::string type has been the main string datatype in standard C++ since 1998, but it was not always part of C++. From C, C++ inherited the convention of using null-terminated strings that are handled by a pointer to their first element, and a library of functions that manipulate such strings. In modern standard C++, a string literal still denotes a NUL-terminated array of characters. Using C++ classes to implement a string type offers several benefits, such as automated memory management, a reduced risk of out-of-bounds accesses, and more intuitive syntax for string comparison and concatenation. Therefore, it was strongly tempting to create such a class. Over the years, C++ application, library and framework developers produced their own, incompatible string representations, such as the one in AT&T's Standard Components library (the first such implementation, 1983) or the CString type in Microsoft's MFC. While C++98 standardized strings, legacy applications still commonly contain such custom string types and libraries may expect C-style strings, making it "virtually impossible" to avoid using multiple string types in C++ programs and requiring programmers to decide on the desired string representation ahead of starting a project. In a 1991 retrospective on the history of C++, its inventor Bjarne Stroustrup called the lack of a standard string type (and some other standard types) in C++ 1.0 the worst mistake he made in its development; "the absence of those led to everybody re-inventing the wheel and to an unnecessary diversity in the most fundamental classes".

Implementation issues
The various vendors' string types have different implementation strategies and performance characteristics. In particular, some string types use a copy-on-write strategy, where an operation such as

string a = "hello!";
string b = a; // Copy constructor

does not actually copy the content of a to b; instead, both strings share their contents and a reference count on the content is incremented. The actual copying is postponed until a mutating operation, such as appending a character to either string, makes the strings' contents differ. Copy-on-write can make major performance changes to code using strings (making some operations much faster and some much slower). Though std::string no longer uses it, many (perhaps most) alternative string libraries still implement copy-on-write strings. Some string implementations store 16-bit or 32-bit code points instead of bytes; this was intended to facilitate processing of Unicode text. However, it means that conversion to these types from std::string or from arrays of bytes is dependent on the "locale" and can throw exceptions. Any processing advantages of 16-bit code units vanished when the variable-width UTF-16 encoding was introduced (though there are still advantages if you must communicate with a 16-bit API such as Windows). Qt's QString is an example.
Third-party string implementations also differed considerably in the syntax to extract or compare substrings, or to perform searches in the text.

Standard string types
The std::string class has been the standard representation for a text string since C++98. The class provides some typical string operations like comparison, concatenation, find and replace, and a function for obtaining substrings. An std::string can be constructed from a C-style string, and a C-style string can also be obtained from one. The individual units making up the string are of type char, at least (and almost always) 8 bits each. In modern usage these are often not "characters", but parts of a multibyte character encoding such as UTF-8. The copy-on-write strategy was deliberately allowed by the initial C++ Standard for std::string because it was deemed a useful optimization, and it was used by nearly all implementations. However, there were mistakes; in particular, operator[] returned a non-const reference in order to make it easy to port C in-place string manipulations (such code often assumed one byte per character, and thus this may not have been a good idea!). This allowed the following code, which shows that a copy must be made even though operator[] is almost always used only to examine the string and not modify it:

std::string original("aaaaaaa");
std::string string_copy = original; // make a copy
char* pointer = &string_copy[3]; // some tried to make operator[] return a "trick" class but this makes it complex
arbitrary_code_here(); // no optimizations can fix this
*pointer = 'b'; // if operator[] did not copy, this would change original unexpectedly

This caused some implementations to abandon copy-on-write. It was also discovered that the overhead in multi-threaded applications due to the locking needed to examine or change the reference count was greater than the overhead of copying small strings on modern processors (especially for strings smaller than the size of a pointer). The optimization was finally disallowed in C++11, with the result that even passing an std::string as an argument to a function, viz.

void print(std::string s) { std::cout << s; }

must be expected to perform a full copy of the string into newly allocated memory. The common idiom to avoid such copying is to pass a const reference:

void print(const std::string& s) { std::cout << s; }

C++17 added a new class, std::string_view, that is only a pointer and length to read-only data; it makes passing string arguments far faster than either of the above examples:

void print(std::string_view s) { std::cout << s; }
...
std::string x = ...;
print(x); // does not copy x.data()
print("this is a literal string"); // also does not copy the characters!
...

Example usage

#include <iostream>
#include <iomanip>
#include <string>

int main() {
    std::string foo = "fighters";
    std::string bar = "stool";
    if (foo != bar)
        std::cout << "The strings are different!\n";
    std::cout << "foo = " << std::quoted(foo) << " while bar = " << std::quoted(bar);
}

Related classes
std::string is a typedef for a particular instantiation of the std::basic_string template class. Its definition is found in the <string> header:

using string = std::basic_string<char>;

Thus std::string provides functionality for strings having elements of type char. There is a similar class, std::wstring, which consists of wchar_t and is most often used to store UTF-16 text on Windows and UTF-32 on most Unix-like platforms. The C++ standard, however, does not impose any interpretation as Unicode code points or code units on these types and does not even guarantee that a wchar_t holds more bits than a char.
To resolve some of the incompatibilities resulting from wchar_t's properties, C++11 added two new classes: std::u16string and std::u32string (made up of the new types char16_t and char32_t), which hold the given number of bits per code unit on all platforms. C++11 also added new string literals of 16-bit and 32-bit "characters" and syntax for putting Unicode code points into null-terminated (C-style) strings. A std::basic_string is guaranteed to be specializable for any type with a std::char_traits struct to accompany it. As of C++11, only the char, wchar_t, char16_t, and char32_t specializations are required to be implemented. A std::basic_string is also a Standard Library container, and thus the Standard Library algorithms can be applied to the code units in strings.

Critiques
The design of std::string has been held up as an example of monolithic design by Herb Sutter, who reckons that of the 103 member functions on the class in C++98, 71 could have been decoupled without loss of implementation efficiency.

References

C++ C++ Standard Library C++ Articles with example C++ code
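As an illustrative sketch of the C++11 types and literal prefixes discussed above (the word chosen here is arbitrary):

#include <iostream>
#include <string>

// A minimal sketch of the C++11 Unicode string types; "café" contains one
// non-ASCII code point (é, U+00E9), so the unit counts differ per encoding.
int main() {
    std::u16string s16 = u"caf\u00E9"; // one char16_t per UTF-16 code unit
    std::u32string s32 = U"caf\u00E9"; // one char32_t per code point
    std::string s8 = u8"caf\u00E9";    // UTF-8 bytes (valid up to C++17;
                                       // C++20 changes u8 literals to char8_t)
    std::cout << s16.size() << ' ' << s32.size() << ' ' << s8.size() << '\n';
    // prints "4 4 5": é is one UTF-16 unit, one UTF-32 unit, two UTF-8 bytes
    return 0;
}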
C++ string handling
[ "Mathematics", "Technology" ]
1,816
[ "Sequences and series", "Computer science", "Mathematical structures", "String (computer science)" ]
6,916,227
https://en.wikipedia.org/wiki/Miniature%20pioneering
Miniature pioneering or model pioneering is an art form featuring miniaturized versions of pioneering construction. The technique was originally used by Boy Scouts to create models for campsite planning. Models are a convenient way to plan a construction project: they require the same techniques as full-scale construction, allow accurate equipment lists to be developed, and help identify difficulties in sequencing the construction. Scout troops in Malaysia have also developed it into a new form of art through competitions. While real pioneering combines wooden spars and ropes, in miniature pioneering these materials are replaced by wooden sticks and white thread. Although design and complexity play a major part in judging the value of a model, lashing quality plays a major role and is evaluated on three criteria: tightness, tidiness, and cleanliness. As with all handmade models, making a miniature pioneering model is good training in patience and perfection.

See also
Scale model
Scoutcraft

Scale modeling
Miniature pioneering
[ "Physics" ]
204
[ "Scale modeling" ]
6,916,598
https://en.wikipedia.org/wiki/Overwintering
Overwintering is the process by which some organisms pass through or wait out the winter season, or pass through that period of the year when "winter" conditions (cold or sub-zero temperatures, ice, snow, limited food supplies) make normal activity or even survival difficult or near impossible. In some cases "winter" is characterized not necessarily by cold but by dry conditions; passing through such periods could likewise be called overwintering. Hibernation and migration are the two major ways in which overwintering is accomplished. Animals may also go into a state of reduced physiological activity known as torpor. Overwintering occurs in several classes of lifeform.

Insects
In entomology, overwintering is how an insect passes the winter season. Many insects overwinter as adults, pupae, or eggs. This can be done inside buildings, under tree bark, or beneath fallen leaves or other plant matter on the ground, among other places. All such overwintering sites shield the insect from adverse conditions associated with winter. Activity almost completely ceases until conditions become more favourable. One example is the mourning cloak butterfly, which gains an advantage from overwintering by being one of the first butterflies to emerge after a cold winter. Another example is the forest tent caterpillar moth, whose eggs overwinter tightly packed on tree branches. Other insects, such as the monarch butterfly, migrate and overwinter in warmer areas. Additionally, the ghost moth overwinters as a larva. The common brimstone, found across a broad geographic range, overwinters for 7 months to wait for the development of its larval host plants. Another butterfly, the large white, overwinters only in southern Eurasia; it is not seen overwintering elsewhere. Some species of parasitic conopid flies, such as P. tibialis, are known to overwinter inside the corpse of their bee or wasp host before emerging in the spring. The queens of the yellow-faced bumblebee (Bombus vosnesenskii) overwinter, and then emerge early in the flight season to obtain the best available subterranean nests. Lastly, many species of Lasioglossum, including L. hemichalceum (a common sweat bee), overwinter in underground nests before emerging in the spring to start new colonies.

Birds
Many birds migrate and then overwinter in regions where temperatures are warmer or food is more readily available, in Europe for example the common crane and the white stork. Some birds, however, such as black-capped chickadees, golden-crowned kinglets, woodpeckers, and corvids, instead remain in colder areas throughout the winter, often staying in groups for warmth.

Plants
Plants are sometimes said to overwinter. At such times, growth of vegetative tissues and reproductive structures becomes minimal or ceases completely. For plants, overwintering often involves restricted water supplies and reduced light exposure. In the spring following overwintering, many plants will enter their flowering stage. Farmers and gardeners use a process of "overwintering" to achieve early spring harvests of some crops by planting annual or biennial species in fall, often under the protection of high or low tunnels. In plant pathology, overwintering is where a plant pathogen survives the winter, during which its normal crop host species is not growing, by transferring to an alternative host, living freely in the soil, or surviving on plant refuse such as discarded potatoes.
People People are also described from time to time as overwintering. This was especially true in the past during the exploration of the planet when people had to pass the winter in places not ideally suited for winter survival, and even today in the polar regions. Today people may be said to overwinter when they temporarily move to warmer areas during the months of prevailing cold weather in northern latitudes, such as people from various parts of North America staying in Florida, Arizona, or New Mexico (among other places) for parts of November to March. References Physiology Winter phenomena
Overwintering
[ "Biology" ]
835
[ "Physiology" ]
6,916,694
https://en.wikipedia.org/wiki/Benjamin%20scale
The Sex Orientation Scale (SOS) was Harry Benjamin's attempt to classify and understand various forms and subtypes of transvestism and transsexualism in people assigned male at birth, published in 1966. It was a seven-point scale (with three types of transvestism, three types of transsexualism, and one category for typical males); it was analogous to the Kinsey Scale as it relates to sexual orientation, which also had seven categories. Much like Kinsey's understanding of sexual orientation, Benjamin understood the nature of gender identity and gender expression not as a discrete scale, but as a spectrum, a continuum with many variations. However, the Benjamin scale does not reflect a modern understanding of gender identity, and is not useful as a contemporary diagnostic tool, especially due to its conflation of gender identity with sexual orientation. Benjamin feared legal consequences for surgeons who performed sex reassignment surgery, and, in deciding whether to recommend someone for an operation, he focused on whether the patient would be able to pass and would be unlikely to regret the decision, in addition to possessing an unchanging gender identity.

Sex Orientation Scale (S.O.S.)
Sex and Gender Role Disorientation and Indecision (Males)
Benjamin noted, "It must be emphasized again that the remaining six types are not and never can be sharply separated." Benjamin added a caveat:
Benjamin's Scale references and uses Alfred Kinsey's sexual orientation scale to distinguish between "true transsexualism" and "transvestism".

Modern views
Contemporary views on gender identity and classification differ markedly from Harry Benjamin's original opinions. Sexual orientation is no longer regarded as a criterion for diagnosis, or for distinction between transsexuality, transvestism and other forms of gender-variant behavior and expression. Modern views also exclude fetishistic transvestism from the spectrum of transsexual identity and classification; this type of transvestism is not related to gender expression or identity but is a distinctly sexual phenomenon, most commonly practised by people who are neither transsexual nor homosexual. Benjamin's scale was designed for use with heterosexual trans women, and trans men's identities do not align with these categories.

See also
Classification of transgender people
Harry Benjamin International Gender Dysphoria Association
Transgender

References

External links
Harry Benjamin's Gender Scale
The Scale in Harry Benjamin's book "The Transsexual Phenomenon"

Transgender studies
Scales
Sexology
Cross-dressing
Benjamin scale
[ "Biology" ]
502
[ "Behavioural sciences", "Behavior", "Sexology" ]
6,916,708
https://en.wikipedia.org/wiki/Modal%20companion
In logic, a modal companion of a superintuitionistic (intermediate) logic L is a normal modal logic that interprets L by a certain canonical translation, described below. Modal companions share various properties of the original intermediate logic, which makes it possible to study intermediate logics using tools developed for modal logic.

Gödel–McKinsey–Tarski translation
Let A be a propositional intuitionistic formula. A modal formula T(A) is defined by induction on the complexity of A:
$T(p) = \Box p$ for any propositional variable $p$,
$T(\bot) = \bot$,
$T(A \land B) = T(A) \land T(B)$,
$T(A \lor B) = T(A) \lor T(B)$,
$T(A \to B) = \Box(T(A) \to T(B))$.
As negation is in intuitionistic logic defined by $\neg A = (A \to \bot)$, we also have
$T(\neg A) = \Box\neg T(A)$.
T is called the Gödel translation or Gödel–McKinsey–Tarski translation. The translation is sometimes presented in slightly different ways: for example, one may insert $\Box$ before every subformula. All such variants are provably equivalent in S4. (A worked example of the translation is given below.)

Modal companions
For any normal modal logic M that extends S4, we define its si-fragment ρM as
$\rho M = \{ A \mid T(A) \in M \}.$
The si-fragment of any normal extension of S4 is a superintuitionistic logic. A modal logic M is a modal companion of a superintuitionistic logic L if $\rho M = L$. Every superintuitionistic logic has modal companions. The smallest modal companion of L is
$\tau L = \mathbf{S4} \oplus T(L),$
where $\oplus$ denotes normal closure. It can be shown that every superintuitionistic logic also has a largest modal companion, which is denoted by σL. A modal logic M is a companion of L if and only if $\tau L \subseteq M \subseteq \sigma L$. For example, S4 itself is the smallest modal companion of intuitionistic logic (IPC). The largest modal companion of IPC is the Grzegorczyk logic Grz, axiomatized by the axiom
$\Box(\Box(p \to \Box p) \to p) \to p$
over K. The smallest modal companion of classical logic (CPC) is Lewis' S5, whereas its largest modal companion is the logic Triv of a single reflexive point. More examples: the smallest and largest modal companions of the Gödel–Dummett logic LC are S4.3 and Grz.3, and those of the logic KC of the weak excluded middle are S4.2 and Grz.2.

Blok–Esakia isomorphism
The set of extensions of a superintuitionistic logic L ordered by inclusion forms a complete lattice, denoted ExtL. Similarly, the set of normal extensions of a modal logic M is a complete lattice NExtM. The companion operators ρM, τL, and σL can be considered as mappings between the lattices ExtIPC and NExtS4:
$\rho \colon \mathrm{NExt}\,\mathbf{S4} \to \mathrm{Ext}\,\mathbf{IPC}, \qquad \tau, \sigma \colon \mathrm{Ext}\,\mathbf{IPC} \to \mathrm{NExt}\,\mathbf{S4}.$
It is easy to see that all three are monotone, and that the composition $\rho \circ \tau$ is the identity function on ExtIPC. L. Maksimova and V. Rybakov have shown that ρ, τ, and σ are actually complete, join-complete and meet-complete lattice homomorphisms respectively. The cornerstone of the theory of modal companions is the Blok–Esakia theorem, proved independently by Wim Blok and Leo Esakia. It states:
The mappings ρ and σ are mutually inverse lattice isomorphisms of ExtIPC and NExtGrz.
Accordingly, σ and the restriction of ρ to NExtGrz are called the Blok–Esakia isomorphism. An important corollary to the Blok–Esakia theorem is a simple syntactic description of largest modal companions: for every superintuitionistic logic L,
$\sigma L = \mathbf{Grz} \oplus T(L).$

Semantic description
The Gödel translation has a frame-theoretic counterpart. Let $F = \langle F, R, V\rangle$ be a transitive and reflexive modal general frame. The preorder R induces the equivalence relation
$x \sim y \iff x\,R\,y \text{ and } y\,R\,x$
on F, which identifies points belonging to the same cluster. Let $\le$ be the induced quotient partial order (i.e., ρF is the set of equivalence classes of $\sim$), and put
$\rho V = \{ A/{\sim} \mid A \in V,\ A = \Box A \}.$
Then $\rho F = \langle \rho F, \le, \rho V\rangle$ is an intuitionistic general frame, called the skeleton of F. The point of the skeleton construction is that it preserves validity modulo the Gödel translation: for any intuitionistic formula A, A is valid in ρF if and only if T(A) is valid in F. Therefore, the si-fragment of a modal logic M can be defined semantically: if M is complete with respect to a class C of transitive reflexive general frames, then ρM is complete with respect to the class $\{\rho F \mid F \in C\}$.
The largest modal companions also have a semantic description. For any intuitionistic general frame $F = \langle F, \le, V\rangle$, let σV be the closure of V under Boolean operations (binary intersection and complement). It can be shown that σV is closed under $\Box$, thus $\sigma F = \langle F, \le, \sigma V\rangle$ is a general modal frame. The skeleton of σF is isomorphic to F. If L is a superintuitionistic logic complete with respect to a class C of general frames, then its largest modal companion σL is complete with respect to $\{\sigma F \mid F \in C\}$. The skeleton of a Kripke frame is itself a Kripke frame. On the other hand, σF is never a Kripke frame if F is a Kripke frame of infinite depth.

Preservation theorems
The value of modal companions and the Blok–Esakia theorem as a tool for investigation of intermediate logics comes from the fact that many interesting properties of logics are preserved by some or all of the mappings ρ, σ, and τ. For example:
decidability is preserved by ρ, τ, and σ,
the finite model property is preserved by ρ, τ, and σ,
tabularity is preserved by ρ and σ,
Kripke completeness is preserved by ρ and τ,
first-order definability on Kripke frames is preserved by ρ and τ.

Other properties
Every intermediate logic L has an infinite number of modal companions, and moreover, the set of modal companions of L contains an infinite descending chain. For example, the set of modal companions of CPC consists of S5 and the logics $L(C_n)$ for every positive integer n, where $C_n$ is the n-element cluster. The set of modal companions of any L is either countable, or it has the cardinality of the continuum. Rybakov has shown that the lattice ExtL can be embedded in the set of modal companions of L; in particular, a logic has a continuum of modal companions if it has a continuum of extensions (this holds, for instance, for all intermediate logics below KC). It is unknown whether the converse is also true.

The Gödel translation can be applied to rules as well as formulas: the translation of a rule
$R = \dfrac{A_1, \dots, A_n}{B}$
is the rule
$T(R) = \dfrac{T(A_1), \dots, T(A_n)}{T(B)}.$
A rule R is admissible in a logic L if the set of theorems of L is closed under R. It is easy to see that R is admissible in a superintuitionistic logic L whenever T(R) is admissible in a modal companion of L. The converse is not true in general, but it holds for the largest modal companion of L.

References
Alexander Chagrov and Michael Zakharyaschev, Modal Logic, vol. 35 of Oxford Logic Guides, Oxford University Press, 1997.
Vladimir V. Rybakov, Admissibility of Logical Inference Rules, vol. 136 of Studies in Logic and the Foundations of Mathematics, Elsevier, 1997.

Companion
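As the worked example promised above (my own illustration of the Gödel translation): translating the law of the excluded middle, which is not intuitionistically valid, gives

$$T(p \lor \neg p) = T(p) \lor T(\neg p) = \Box p \lor \Box\neg\Box p.$$

This modal formula is not a theorem of S4 (it fails on a two-element reflexive chain), matching the fact that $p \lor \neg p \notin$ IPC and ρS4 = IPC; but it is a theorem of S5, since $\neg\Box p \to \Box\neg\Box p$ is an instance of the S5 axiom, consistent with S5 being a modal companion of CPC.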
Modal companion
[ "Mathematics" ]
1,400
[ "Mathematical logic", "Modal logic" ]
6,916,771
https://en.wikipedia.org/wiki/MVDS
MVDS is an acronym for terrestrial "Multipoint Video Distribution System". MVDS is currently part of the broader MWS (Multimedia Wireless System) standards. In the European Union, MWS operates in the 10.7–13.5 and 40.5–43.5 GHz frequency bands. Research on the 42 GHz band has been carried out under the European Commission EMBRACE (Efficient Millimetre Broadband Radio Access for Convergence and Evolution) initiative. Standards ETSI EN 300 748 EN 301 215-3 EN 301 997-2 UK Standards MPT 1550 (obsolete) MPT 1560 (obsolete) CEPT ERC/DEC/(99)15 ECC/REC/(01)04 Manufacturers of MVDS equipment MDS America Inc Newtec EF Data BluWan Philips Broadband Network Hughes Network systems Thales Group (Thomson) Trophy electronics Technosystem Digital Network S.p.A. (TDN) Marconi Technology Centres (GMTT) United Monolithic Semiconductors (UMS) DOK Ltd (Elvalink) Q-par Angus Ltd ROKS Mobile technology
MVDS
[ "Technology" ]
221
[ "nan" ]
6,916,790
https://en.wikipedia.org/wiki/Aeroacoustic%20analogy
Acoustic analogies are applied mostly in numerical aeroacoustics to reduce aeroacoustic sound sources to simple emitter types. They are therefore often also referred to as aeroacoustic analogies. In general, aeroacoustic analogies are derived from the compressible Navier–Stokes equations (NSE). The compressible NSE are rearranged into various forms of the inhomogeneous acoustic wave equation. Within these equations, source terms describe the acoustic sources. They consist of pressure and speed fluctuation as well as stress tensor and force terms. Approximations are introduced to make the source terms independent of the acoustic variables. In this way, linearized equations are derived which describe the propagation of the acoustic waves in a homogeneous, resting medium. The latter is excited by the acoustic source terms, which are determined from the turbulent fluctuations. Since the aeroacoustics are described by the equations of classical acoustics, the methods are called aeroacoustic analogies. Types The Lighthill analogy considers a free flow, as for example with an engine jet. The nonstationary fluctuations of the stream are represented by a distribution of quadrupole sources in the same volume. The Curle analogy is a formal solution of the Lighthill analogy, which takes hard surfaces into consideration. The Ffowcs Williams–Hawkings analogy is valid for aeroacoustic sources in relative motion with respect to a hard surface, as is the case in many technical applications for example in the automotive industry or in air travel. The calculation involves quadrupole, dipole and monopole terms. References Further reading Blumrich, R.: Berechnungsmethoden für die Aeroakustik von Fahrzeugen. Tagungsband der ATZ/MTZ-Konferenz Akustik 2006, Stuttgart, 17–18.5.2006.. Contribution of the Technical University of Dresden to the modeling of flow sound sources with elementary emitters. Contribution of the Technical University of Dresden to the history of aeroacoustics. Computational fluid dynamics Fluid mechanics Acoustics Analogy
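For orientation, the prototype of these rearrangements is Lighthill's equation, quoted below in its standard textbook form rather than taken from the article itself: $\rho'$ and $p'$ are the density and pressure fluctuations, $c_0$ the speed of sound in the medium at rest, $u_i$ the flow velocity, and $\tau_{ij}$ the viscous stress tensor; the double divergence of the Lighthill stress tensor $T_{ij}$ acts as the quadrupole source distribution mentioned above.

```latex
\[
  \frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
    = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
  \qquad
  T_{ij} = \rho\, u_i u_j + \bigl(p' - c_0^2\,\rho'\bigr)\,\delta_{ij} - \tau_{ij}.
\]
```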
Aeroacoustic analogy
[ "Physics", "Chemistry", "Engineering" ]
434
[ "Computational fluid dynamics", "Classical mechanics", "Acoustics", "Computational physics", "Civil engineering", "Fluid mechanics", "Fluid dynamics stubs", "Fluid dynamics" ]
6,917,139
https://en.wikipedia.org/wiki/Band%20diagram
In solid-state physics of semiconductors, a band diagram is a diagram plotting various key electron energy levels (Fermi level and nearby energy band edges) as a function of some spatial dimension, which is often denoted x. These diagrams help to explain the operation of many kinds of semiconductor devices and to visualize how bands change with position (band bending). The bands may be coloured to distinguish level filling. A band diagram should not be confused with a band structure plot. In both a band diagram and a band structure plot, the vertical axis corresponds to the energy of an electron. The difference is that in a band structure plot the horizontal axis represents the wave vector of an electron in an infinitely large, homogeneous material (a crystal or vacuum), whereas in a band diagram the horizontal axis represents position in space, usually passing through multiple materials. Because a band diagram shows the changes in the band structure from place to place, the resolution of a band diagram is limited by the Heisenberg uncertainty principle: the band structure relies on momentum, which is only precisely defined for large length scales. For this reason, the band diagram can only accurately depict evolution of band structures over long length scales, and has difficulty in showing the microscopic picture of sharp, atomic scale interfaces between different materials (or between a material and vacuum). Typically, an interface must be depicted as a "black box", though its long-distance effects can be shown in the band diagram as asymptotic band bending. Anatomy The vertical axis of the band diagram represents the energy of an electron, which includes both kinetic and potential energy. The horizontal axis represents position, often not being drawn to scale. Note that the Heisenberg uncertainty principle prevents the band diagram from being drawn with a high positional resolution, since the band diagram shows energy bands (as resulting from a momentum-dependent band structure). While a basic band diagram only shows electron energy levels, often a band diagram will be decorated with further features. It is common to see cartoon depictions of the motion in energy and position of an electron (or electron hole) as it drifts, is excited by a light source, or relaxes from an excited state. The band diagram may be shown connected to a circuit diagram showing how bias voltages are applied, how charges flow, etc. The bands may be colored to indicate filling of energy levels, or sometimes the band gaps will be colored instead. Energy levels Depending on the material and the degree of detail desired, a variety of energy levels will be plotted against position: EF or μ: Although it is not a band quantity, the Fermi level (total chemical potential of electrons) is a crucial level in the band diagram. The Fermi level is set by the device's electrodes. For a device at equilibrium, the Fermi level is a constant and thus will be shown in the band diagram as a flat line. Out of equilibrium (e.g., when voltage differences are applied), the Fermi level will not be flat. Furthermore, in semiconductors out of equilibrium it may be necessary to indicate multiple quasi-Fermi levels for different energy bands, whereas in an out-of-equilibrium insulator or vacuum it may not be possible to give a quasi-equilibrium description, and no Fermi level can be defined. 
EC: The conduction band edge should be indicated in situations where electrons might be transported at the bottom of the conduction band, such as in an n-type semiconductor. The conduction band edge may also be indicated in an insulator, simply to demonstrate band bending effects. EV: The valence band edge likewise should be indicated in situations where electrons (or holes) are transported through the top of the valence band such as in a p-type semiconductor. Ei: The intrinsic Fermi level may be included in a semiconductor, to show where the Fermi level would have to be for the material to be neutrally doped (i.e., an equal number of mobile electrons and holes). Eimp: Impurity energy level. Many defects and dopants add states inside the band gap of a semiconductor or insulator. It can be useful to plot their energy level to see whether they are ionized or not. Evac: In a vacuum, the vacuum level shows the energy $E_{\text{vac}} = -e\phi$, where $\phi$ is the electrostatic potential. The vacuum can be considered as a sort of insulator, with Evac playing the role of the conduction band edge. At a vacuum-material interface, the vacuum energy level is fixed by the sum of work function and Fermi level of the material. Electron affinity level: Occasionally, a "vacuum level" is plotted even inside materials, at a fixed height above the conduction band, determined by the electron affinity. This "vacuum level" does not correspond to any actual energy band and is poorly defined (electron affinity strictly speaking is a surface, not bulk, property); however, it may be a helpful guide in the use of approximations such as Anderson's rule or the Schottky–Mott rule. Band bending When looking at a band diagram, the electron energy states (bands) in a material can curve up or down near a junction. This effect is known as band bending. It does not correspond to any physical (spatial) bending. Rather, band bending refers to the local changes in electronic structure, in the energy offset of a semiconductor's band structure near a junction, due to space charge effects. The primary principle underlying band bending inside a semiconductor is space charge: a local imbalance in charge neutrality. Poisson's equation gives a curvature to the bands wherever there is an imbalance in charge neutrality. The reason for the charge imbalance is that, although a homogeneous material is charge neutral everywhere (since it must be charge neutral on average), there is no such requirement for interfaces. Practically all types of interface develop a charge imbalance, though for different reasons: At the junction of two different types of the same semiconductor (e.g., p-n junction) the bands vary continuously since the dopants are sparsely distributed and only perturb the system. At the junction of two different semiconductors there is a sharp shift in band energies from one material to the other; the band alignment at the junction (e.g., the difference in conduction band energies) is fixed. At the junction of a semiconductor and metal, the bands of the semiconductor are pinned to the metal's Fermi level. At the junction of a conductor and vacuum, the vacuum level (from vacuum electrostatic potential) is set by the material's work function and Fermi level. This also (usually) applies for the junction of a conductor to an insulator. Knowing how bands will bend when two different types of materials are brought in contact is key to understanding whether the junction will be rectifying (Schottky) or ohmic.
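As a worked illustration of the band bending just described, the following Python sketch evaluates the textbook depletion-approximation formulas for an abrupt silicon p-n junction. The doping densities are invented for the example; the formulas for the built-in potential and depletion width are the standard ones, not anything specific to this article.

```python
# A hedged sketch: built-in potential and total depletion width of an
# abrupt silicon p-n junction in the depletion approximation. The doping
# densities are illustrative; the formulas are the standard textbook ones.
import math

q   = 1.602e-19           # elementary charge, C
kT  = 0.0259              # thermal energy at 300 K, eV (so kT/q = 0.0259 V)
eps = 11.7 * 8.854e-12    # static permittivity of silicon, F/m
ni  = 1.0e16              # intrinsic carrier density of Si, m^-3 (~1e10 cm^-3)
NA, ND = 1e23, 1e22       # acceptor / donor densities, m^-3 (illustrative)

# Built-in potential: the total band bending across the junction, in volts.
Vbi = kT * math.log(NA * ND / ni**2)

# Total depletion width from solving Poisson's equation on both sides.
W = math.sqrt(2 * eps * Vbi / q * (1 / NA + 1 / ND))

print(f"V_bi = {Vbi:.2f} V, depletion width = {W * 1e9:.0f} nm")
```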
The degree of band bending depends on the relative Fermi levels and carrier concentrations of the materials forming the junction. In an n-type semiconductor the band bends upward, while in p-type the band bends downward. Note that band bending is due neither to magnetic field nor temperature gradient. Rather, it only arises in conjunction with the force of the electric field. See also Anderson's rule – approximate rule for band alignment of heterojunctions based on vacuum electron affinity Schottky–Mott rule – approximate rule for band alignment of metal–semiconductor junctions based on vacuum electron affinity and work function Field effect (semiconductor) – band bending induced by an electric field at the vacuum (or insulator) surface of a semiconductor Thomas–Fermi screening – rudimentary theory of the band bending that occurs around a charged defect Quantum capacitance – special case of band bending in field effect, for a material system containing a two-dimensional electron gas References James D. Livingston, Electronic Properties of Engineering Materials, Wiley (December 21, 1999). Electronic band structures Semiconductor structures
Band diagram
[ "Physics", "Chemistry", "Materials_science" ]
1,624
[ "Electron", "Electronic band structures", "Condensed matter physics" ]
6,917,217
https://en.wikipedia.org/wiki/Autoinoculation
Autoinoculation combines the Greek root "autos" ("self") with the Latin-derived "inoculate", and means "self-implanting", "self-infection", or "implanting something from oneself". Autoinoculation can refer to both beneficial medical procedures (e.g. vaccination) as well as non-beneficial or harmful natural processes (e.g. infection or disease). One beneficial autoinoculation medical procedure is when cells are removed from a person's body, medically altered, then reinserted ("implanted" or "infected") into the same organism or person again to achieve some diagnostic or treatment aim. For example, stem cell treatments involve the harvesting of stem cells from one's own bone marrow and reintroduction (autoinoculation) of those cells at a later date, sometimes after altering those stem cells. Autoinoculation may also be used for the transplantation of a patient's own healthy bone marrow after recovering from a condition afflicting the tissue. Autoinoculation can also refer to the process by which viruses reproduce themselves within an organism by implanting themselves in an organism's cells, altering the metabolism, DNA repair, and replication processes of those cells, using those processes to reproduce and transmit itself throughout the organism. For example, warts and molluscum contagiosum can be spread by this method if wart tissue cells (skin cells altered by a papillomavirus) are mechanically transported to another part of the body. This transmission or autoinoculation of the wart can occur by mechanical touching of one part of the organism to another, friction that removes a portion of the infected cells to an external surface (or another organism) and then reintroduces those cells upon contact with the body elsewhere, or when wart cells or tissue are transported through the bloodstream of an organism. References External links US National Library of Medicine Diagram Medical tests Virology
Autoinoculation
[ "Biology" ]
401
[ "Virus stubs", "Viruses" ]
6,917,569
https://en.wikipedia.org/wiki/Ceragon
Ceragon Networks Ltd. is a networking equipment vendor, focused on wireless point-to-point connectivity, mostly used for wireless backhaul by mobile operators and wireless service providers as well as private businesses. History Ceragon was established in 1996 under the name Giganet. It was listed on the NASDAQ on September 6, 2000 (symbol: CRNT). Ceragon designs and manufactures high-capacity communication systems for wireless backhaul, mid-haul, and front-haul. It addresses the segment of the cellular market that connects a typical cell site to an operator's core network (backhaul) and different cell site functions that reside in separate geographical locations (mid-haul and front-haul). Ceragon provides wireless equipment with capacities of up to 20 Gbit/s and plans to add products based on higher frequency bands, to support up to 100 Gbit/s. Ceragon markets its products under the IP-20 and IP-50 brands. Ceragon has sales offices located throughout North and South America, EMEA, and Asia that handle direct sales. Partnerships with distributors, VARs, and system integrators around the world provide an active indirect channel. Its US headquarters was opened in 1999 and its European headquarters in 2000. Ceragon reported worldwide revenue of $290.8 million for 2021. Ceragon's products include Short-Haul and Long-Haul wireless point-to-point systems in licensed microwave spectrum (4–42 GHz) and millimeter-wave (57–88 GHz and, in the future, up to 170 GHz) spectrum range. Ceragon is also a provider of 5G wireless transport, enabling it to connect broadband sites to the core network in a wireless manner. This is a common way of connection when using an optic fiber connection is not an option. References Technology companies of Israel Computer hardware companies Companies listed on the Nasdaq Telecommunications companies established in 1996 1996 establishments in Israel Manufacturing companies based in Tel Aviv Networking companies Networking hardware companies Information technology companies of Israel Electronics companies of Israel Companies listed on the Tel Aviv Stock Exchange
Ceragon
[ "Technology" ]
436
[ "Computer hardware companies", "Computers" ]
6,918,599
https://en.wikipedia.org/wiki/Tom%20Bethell
Tom Bethell (July 17, 1936 – February 12, 2021) was an American journalist who wrote mainly on economic and scientific issues. Life and career Bethell was born and raised in London, England. He was educated at Downside School and Trinity College, Oxford. A resident of the District of Columbia, he also lived in Virginia, Louisiana, and California. From 1962 to 1965 he taught math at Woodberry Forest School, Virginia. He was married to Donna R. Fitzpatrick of Washington, D.C. He was a senior editor of The American Spectator and was for 25 years a media fellow of the Hoover Institution. He was Washington editor of Harper's, and an editor of the Washington Monthly. In 1980, he received a Gerald Loeb Award Honorable Mention for Columns/Editorial for "Fooling With the Budget." Jim Garrison investigation Bethell was hired as a researcher by New Orleans district attorney Jim Garrison to assist with his prosecution of Clay Shaw for conspiracy to assassinate John F. Kennedy. Bethell gave no credence to Garrison's charges that Shaw was involved. Shaw was acquitted after the jury deliberated for about an hour. Controversy In 1976, Bethell wrote a controversial article for Harper's Magazine titled "Darwin's Mistake". According to Bethell, there is no independent criterion of fitness and natural selection is a tautology. Bethell also stated that Darwin's theory was on "the verge of collapse" and natural selection had been "quietly abandoned" by his supporters. These claims were disputed by biologists. The paleontologist Stephen Jay Gould wrote a rebuttal to Bethell's arguments. Bethell was a member of the Group for the Scientific Reappraisal of the HIV-AIDS Hypothesis, which denies that HIV causes AIDS. In The Politically Incorrect Guide to Science (2005), he promoted denial of the existence of man-made global warming, AIDS denialism, and denial of evolution (which Bethell denied was "real science"). Bethell endorsed the intelligent design documentary-style film Expelled: No Intelligence Allowed. Bethell died from complications of Parkinson's disease at his home in Washington, D.C. in February 2021, aged 84. Selected publications Articles "Darwin's Mistake." Harper's Magazine, Vol. 252, No. 1509, February 1976, pp. 70-75. "Against Bilingual Education." Harper's Magazine, Vol. 258, February 1979, pp. 30-33. "The Longshoreman Philosopher." Hoover Digest, No. 1, January 2003. Archived from the original. "Arnold Beichman, 1913-2010: an oral history and remembrance of a great adventurer and friend." The American Spectator, Vol. 43, No. 4, May 2010. Archived from the original. Books George Lewis: A Jazzman From New Orleans. Berkeley: University of California Press, 1977. The Electric Windmill: An Inadvertent Autobiography. Washington, D.C.: Regnery Gateway, 1988. Noblest Triumph: Property and Prosperity through the Ages. New York: St. Martin's Press, 1998. The Politically Incorrect Guide to Science. Washington, D.C.: Regnery Press, 2005. Questioning Einstein: Is Relativity Necessary? Vales Lake Publishing, 2009. Eric Hoffer: The Longshoreman Philosopher. Stanford: Hoover Institution Press, 2012. Darwin's House of Cards. Seattle: Discovery Institute Press, 2017. Audiobook available. Book contributions "Mises And Gorbachev: Why Socialism Still Doesn't Work." pp. 226-230. The Free Market Reader, edited by Lew Rockwell. Auburn: The Ludwig von Mises Institute, 1988. "Bilingual Education in the Eighties: One Hispanic's Perspective." pp. 153-162.
American Education: Essays in the Economics of Liberty, edited by Robert Emmet Long. New York: H. W. Wilson, 1984. . References External links 1936 births 2021 deaths American male journalists Gerald Loeb Award winners for Columns, Commentary, and Editorials HIV/AIDS denialists Intelligent design advocates Relativity critics The American Spectator people Hoover Institution people English emigrants to the United States Alumni of Trinity College, Oxford
Tom Bethell
[ "Physics" ]
859
[ "Relativity critics", "Theory of relativity" ]
9,185,833
https://en.wikipedia.org/wiki/Energy%20descent
Energy descent is a process whereby a society either voluntarily or involuntarily reduces its total energy consumption. Energy descent can be understood in relation to peak oil, in which case there is a theoretical post-peak-oil transitional phase characterized by a descending use of energy. The peak oil energy descent model has focused mainly on resource scarcity leading to an involuntary contraction of energy use. The phrase "energy descent" has also become increasingly associated with the voluntary and deliberate choice of a society to reduce energy consumption in response to the global climate crisis. The basic premise of energy descent in this latter context is that a simple replacement of fossil fuels with renewable and cleaner energy sources will not be feasible in the time frame required by an effective response to the global climate crisis. That is, those who call for a voluntary energy descent doubt that clean and renewable energy sources can simply replace the total quantity of energy currently in use while also reducing greenhouse gas emissions. Summary Energy descent refers to the contraction of oil use after peak oil availability, or to voluntary energy use reductions in response to the global climate crisis. Planning and preparing for the peak oil energy descent period has recently been promoted by David Holmgren, Rob Hopkins of the Transition Towns movement, and Richard Heinberg in the 2004 book Powerdown. Many who have planned and prepared for peak oil now see the climate crisis as an equally important—or greater—near term concern as compared with energy resource scarcity brought about by peak oil. It is now more widely acknowledged that oil reserves are dwindling, especially after the International Energy Agency released the 2008 World Energy Outlook report. Between 2007 and 2008 the IEA raised its projected rate of decline in world energy supply from 3.7% a year (2007) to 6.7% a year (2008), with a peak in oil supplies projected for 2020. In 2008 several major companies including Arup, Yahoo, and Virgin created the UK Industry Taskforce on Peak Oil and Energy Security (ITPOES) and released a report, The Oil Crunch, which calls for 'collaborative contingency planning' by government and industry in the face of dwindling oil reserves. An Energy Descent Action Plan (EDAP) is a local plan for planning and preparing for energy descent. It goes well beyond issues of energy supply, to look at across-the-board creative adaptations in the realms of health, education, economy and much more. Energy Descent Planning is a process developed by the Transition Towns Movement. Criticism Some techno-optimists, such as Julian Simon, have disputed energy projections such as this, arguing that as oil becomes more expensive, humanity will tend to diversify its energy sources away from a reliance on oil, thus avoiding undesired global reductions in energy usage. See also The Carbon War: Global Warming and the End of the Oil Era (book) Malthusian catastrophe Societal collapse Transition town Notes and references Further reading The End of Energy Obesity (book) De Young, R. (2014). Some behavioral aspects of energy descent: How a biophysical psychology might help people transition through the lean times ahead. Frontiers in Psychology, 5, 1255. Peak oil Energy economics Energy and the environment
Energy descent
[ "Environmental_science" ]
651
[ "Energy economics", "Environmental social science" ]
9,186,994
https://en.wikipedia.org/wiki/Belize%20Barrier%20Reef
The Belize Barrier Reef is a series of coral reefs straddling the coast of Belize, roughly offshore in the north and in the south within the country limits. The Belize Barrier Reef is a long section of the Mesoamerican Barrier Reef System, which is continuous from Cancún on the north-eastern tip of the Yucatán Peninsula through the Riviera Maya and down to Honduras, making it the second largest coral reef system in the world after the Great Barrier Reef in Australia. It is Belize's top tourist destination, popular for scuba diving and snorkeling and attracting almost half of its 260,000 visitors. It is also vital to the country's fishing industry. Charles Darwin described it as "the most remarkable reef in the West Indies" in 1842. In addition to its barrier reef, it also boasts three distinct Caribbean atolls: Turneffe Atoll, Glover's Reef and Lighthouse Reef. Lighthouse Reef is the most easterly diving area in Belize, it is home to the Great Blue Hole, made famous by Jacques Cousteau in 1970; Turneffe Atoll lies directly to the east of Belize City and is the nearest of the atolls to that city. These different reefs provide diverse scuba diving opportunities that include walls, pinnacles and reef flats that are located throughout an enormous area of sea. Species The Belize Barrier Reef is home to a large diversity of plants and animals: 70 hard coral species 36 soft coral species 500 species of fish hundreds of invertebrate species With 90% of the reef still needing to be researched, it is estimated that only 10% of all species have been discovered. Environmental protection A large portion of the reef is protected by the Belize Barrier Reef Reserve System, which includes seven marine reserves, 450 cayes, and three atolls. It totals in area, including: Glover's Reef Marine Reserve Great Blue Hole South Water Caye Marine Reserve Half Moon Caye Natural Monument Hol Chan Marine Reserve Cayes include: Ambergris Caye, Caye Caulker, Caye Chapel, Carrie Bow Caye, St. George's Caye, English Caye, Rendezvous Caye, Gladden Caye, Ranguana Caye, Long Caye, Moho Caye, Blackbird Caye, Three Corner Caye, Northern Caye, Tobacco Caye, and Sandbore Caye. In 1996 the reserve system was designated a World Heritage Site due to its vulnerability and the fact that it contains the most important and significant natural habitats for in-situ conservation of biological diversity (according to criteria VII, IX, and X). Belize became the first country in the world to completely ban bottom trawling in December 2010. In December 2015, Belize banned offshore oil drilling within 1 km of the Barrier Reef. Despite these protective measures, the reef remains under threat from oceanic pollution as well as uncontrolled tourism, shipping, and fishing. Other threats include hurricanes, along with global warming and the resulting increase in ocean temperatures, which causes coral bleaching. It is claimed by scientists that over 40% of Belize's coral reef has been damaged since 1998. The Belize Barrier Reef has been affected by mass-bleaching events. The first mass bleaching occurred in 1995, with an estimated mortality of 10 percent of coral colonies, according to a report by the Coastal Zone Management Institute in Belize. A second mass-bleaching event occurred, when Hurricane Mitch struck in 1998. Biologists observed a 48 percent reduction in live coral cover across the Belize reef system. 
Usually, it is hard to distinguish whether the reason for coral bleaching is human activities or natural reasons such as storms or bacterial fluctuations. In the case of the Belize Barrier Reef, many factors which make the distinction difficult do not apply. Human population in this area is much more sparse than the corresponding areas near other coral reefs, so the human activity and pollution are much lower compared to other coral reefs and the Belize reef system is in a much more enclosed area. When coral bleaching occurs, a large part of the coral dies, and the remaining part of the ecosystem begins the process of repairing the damage. But the chances of recovery are low, as corals that are bleached become much more vulnerable to disease. Disease often kills more corals than the bleaching event itself. With continuous bleaching, the coral reef will have little to no chance of recovery. Gallery See also List of reefs World Heritage Sites in Danger References External links UNESCO World Heritage website Barrier Reef Barrier Reef Coral reefs Barrier Reef Barrier Reef Mesoamerican Barrier Reef System
Belize Barrier Reef
[ "Biology" ]
936
[ "Biogeomorphology", "Coral reefs" ]
9,188,536
https://en.wikipedia.org/wiki/Aircraft%20dope
Aircraft dope is a plasticised lacquer that is applied to fabric-covered aircraft. It tightens and stiffens fabric stretched over airframes, which renders them airtight and weatherproof, increasing their durability and lifespan. The technique has been commonly applied to both full-size and flying models of aircraft. Attributes Doping techniques have been employed in aircraft construction since the dawn of heavier-than-air flight; the fabric of the ground-breaking Wright Flyer had benefitted from doping, as did many of the aircraft that soon followed. Without the application of dope, fabric coverings lacked durability while being highly flammable, both factors rendering them far less viable. By the 1910s, a wide variety of doping agents had entered widespread use while entirely original formulas were being regularly introduced in the industry. Typical doping agents include nitrocellulose, cellulose acetate and cellulose acetate butyrate. Liquid dopes are often highly flammable; nitrocellulose, for instance, is also known as the explosive propellant "guncotton". Dopes often have colouring pigments added to facilitate even application, and are available in a wide range of colours. Dope has been applied to various aircraft fabrics, such as madapollam; in more recent decades, it has also been applied to polyester and other fabrics with similar fine weave and absorbent qualities. Reportedly, polyester fabric coverings have become an industry-wide standard; the use of both cotton and linen fabrics have effectively been eliminated. In addition to changes in the materials that dope is applied to, the methods of application have also been refined to reduce shrinking, improve adherence and increase lifespan. By the 1910s, it was recognised that, while the practice was highly beneficial, certain types of doping agents posed a risk to workers' health. While acetate and nitrate-based dopes were believed to pose little risk by themselves, the volatile compounds to dissolve them prior to application were poisonous. The medical profession across several nations became aware of this threat just prior to the First World War, and promoted the need for adequate workplace ventilation as a mitigating measure in factories where doping was performed. In the United Kingdom specifically, studies were performed into the potential health impacts of various dopes, concluding that those produced to Royal Aircraft Factory specifications rendered them less liable to result in illness than several others. Investigations into health concerns surrounding dope were also conducted during the Second World War. Due to more powerful engines and advanced aerodynamic techniques, aluminium (and subsequently composites) supplanted fabric as the primary material used in the aviation industry by the latter half of the 20th century. Various light aircraft, including gliders, home-built kits, and light sport aircraft, have continued to use fabrics. Thus doping techniques continue to be employed, albeit to a lesser degree than at the dawn of aviation. There are several covering methods that do not use dope coating processes, as alternative treatment methods have been devised. Identical materials and techniques must be used during maintenance as had been employed in construction; thus, traditionally built aircraft continue to use doping techniques throughout their operating lives. Accidents Numerous accidents have occurred as a result of incorrect use of doping techniques. 
Examples of common mistakes include mixing dope with other chemicals, using it on the wrong fabrics, or applying it to contaminated or improperly prepared surfaces. During the investigation into the 1930 R101 airship disaster, it was determined that improper doping practices had resulted in the fabric of the airship having become brittle and easy to damage. Among the hypotheses for the 1937 Hindenburg airship disaster, the Incendiary Paint Theory, presented by Addison Bain, holds that a spark between inadequately grounded fabric cover segments of the Hindenburg started the fire, and that the spark ignited the "highly flammable" outer skin, which had been doped with iron oxide and aluminum-impregnated cellulose acetate butyrate, materials that remain potentially reactive even after fully setting. The hypothesis has been disputed. On 27 April 1995, the 91-year-old aircraft designer and builder Steve Wittman, a significant figure in the homebuilt aircraft movement, and his wife Paula Muir were killed when their Wittman O&O Special broke up in flight due to delamination and separation of the wing fabric, resulting in wing aeroelastic flutter. The US National Transportation Safety Board investigation determined that the layers and types of doping that had been used on the aircraft did not have "the best adhesive qualities" and referred to "the Poly-Fiber Covering and Painting Manual" for proper processes to use. References External links Doping Techniques ~ 1943 US Navy Training Film – Instructional film Further reading Dope Coatings
Aircraft dope
[ "Chemistry" ]
966
[ "Coatings" ]
9,188,741
https://en.wikipedia.org/wiki/Ena/Vasp%20homology%20proteins
ENA/VASP homology proteins or EVH proteins are a family of closely related proteins involved in cell motility in vertebrate and invertebrate animals. EVH proteins are modular proteins that are involved in actin polymerization, as well as interactions with other proteins. Within the cell, ENA/VASP proteins are found at the leading edge of lamellipodia and at the tips of filopodia. ENA, the founding member of the family, was discovered in a fruit fly genetic screen for mutations that act as dominant suppressors of the Abl non-receptor tyrosine kinase. Invertebrate animals have one Ena homologue, whereas mammals have three, named Mena, VASP, and Evl. ENA/VASP proteins promote the spatially regulated actin polymerization required for efficient chemotaxis in response to attractive and repulsive guidance cues. Mice lacking functional copies of all three family members display pleiotropic phenotypes including exencephaly, edema, failures in neurite formation, and embryonic lethality. A sub-domain of EVH is the EVH1 domain. VASP Vasodilator-stimulated phosphoprotein (VASP) contains a 45-residue-long tetramerisation domain; VASP regulates actin dynamics in the cytoskeleton, which is vital for processes such as cell adhesion and cell migration. Function Ena/VASP proteins are actin cytoskeletal regulatory proteins. Ena/VASP proteins are often found in dynamic actin structures like filopodia and lamellipodia, but their precise function in the formation of these structures is controversial. Ena/VASP proteins remain processively bound to the growing barbed (+) ends of actin filaments. They promote actin filament elongation both by delivering monomeric actin to the barbed (+) ends and by protecting these ends from F-actin capping protein. Structure The tetramerisation domain has a right-handed alpha helical coiled-coil structure. References EVH domain Protein domains Proteins Cell movement Cytoskeleton
Ena/Vasp homology proteins
[ "Chemistry", "Biology" ]
451
[ "Biomolecules by chemical classification", "Molecular and cellular biology stubs", "Protein classification", "Biochemistry stubs", "Protein domains", "Molecular biology", "Proteins" ]
9,189,659
https://en.wikipedia.org/wiki/Alpha%20Muscae
Alpha Muscae, Latinized from α Muscae, is a star in the southern circumpolar constellation of Musca. With an apparent visual magnitude of +2.7, it is the brightest star in the constellation. The distance to this star has been estimated from parallax measurements. With a stellar classification of B2 IV-V, this star appears to be in the process of evolving away from the main sequence of stars like the Sun and turning into a subgiant star, as the supply of hydrogen at its core becomes exhausted. It is larger than the Sun, with nearly nine times the mass and almost five times the radius. This star is radiating around 4,000 times the luminosity of the Sun from its outer atmosphere at an effective temperature of 21,400 K, giving it the blue-white hue of a B-type star. Alpha Muscae appears to be a Beta Cephei variable star. Telting and colleagues report it as a Beta Cephei with a high degree of confidence, as they found regular pulsations in its spectrum in a high-resolution spectroscopy study published in 2006, although Stankov and Handler (2005) listed it as a poor or rejected candidate in their Catalog of Galactic β Cephei Stars. The International Variable Star Index lists it as a Beta Cephei variable which varies in brightness from magnitude 2.68 to 2.73, with a period of 2.17 hours. Alpha Muscae is a rapidly rotating star with an estimated age of about 18 million years. This star is a proper motion member of the Lower Centaurus–Crux sub-group in the Scorpius–Centaurus OB association, the nearest such association of co-moving massive stars to the Sun. Alpha Muscae has a peculiar velocity of 10 km/s, which, while high, is not enough for it to be considered a runaway star. References B-type main-sequence stars B-type subgiants Beta Cephei variables Lower Centaurus Crux Musca Muscae, Alpha CD-68 01104 109668 061585 4798
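The quoted luminosity can be sanity-checked against the radius and effective temperature through the Stefan–Boltzmann law, $L \propto R^2 T^4$ in solar units. In the sketch below, 4.8 solar radii is an assumed reading of "almost five times the radius" rather than a sourced figure; the solar effective temperature is a standard constant.

```python
# Consistency check of the quoted luminosity via the Stefan–Boltzmann law,
# L/L_sun = (R/R_sun)^2 * (T/T_sun)^4. The 4.8 R_sun figure is an assumed
# reading of "almost five times the radius", not a sourced value.
T_sun = 5772.0            # solar effective temperature, K (IAU nominal value)
R, T = 4.8, 21400.0       # radius in solar radii; effective temperature, K

L = R**2 * (T / T_sun)**4
print(f"L = {L:,.0f} L_sun")   # ~4,400 L_sun, consistent with "around 4,000"
```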
Alpha Muscae
[ "Astronomy" ]
450
[ "Musca", "Constellations" ]
9,189,674
https://en.wikipedia.org/wiki/Cuspate%20foreland
Cuspate forelands, also known as cuspate barriers or nesses in Britain, are geographical features found on coastlines and lakeshores that are created primarily by longshore drift. Formed by accretion and progradation of sand and shingle, they extend outwards from the shoreline in a triangular shape. Some cuspate forelands may be stabilised by vegetation, while others may migrate down the shoreline. Because some cuspate forelands provide an important habitat for flora and fauna, effective management is required to reduce the impacts from both human activities and physical factors such as climate change and sea level rise. Formation The debate over how cuspate forelands form is ongoing. However, the most widely accepted process of formation involves longshore drift. Where longshore drift occurs in opposite directions, two spits merge into a triangular protrusion along a coastline or lakeshore. Their formation is also dependent on dominant and prevailing winds working in opposite directions. Formation can also occur when waves are diffracted around a barrier. Cuspate forelands can form both along coastlines and along lakeshores. Those formed along coastlines can be in the lee of an offshore island, along a coastline that has no islands in the vicinity, or at a stream mouth where deposition occurs. Formation in narrow straits or on open coastlines A cuspate foreland can form in a strait or along a coastline that has no islands or shoals in the area. In this case, longshore drift as well as prevailing wind and waves bring sediment together from opposite directions. If there is a large angle between the waves and the shoreline, the sediment converges, accumulates, and forms beach ridges. Over time, a cuspate foreland forms as a result of continued accretion and progradation. An example of this type of cuspate foreland is the one found at Dungeness along the southern coast of Britain. This cuspate foreland has formed as a result of the merging of SW waves from the English Channel, and waves from the east from the Strait of Dover. Another example is the cuspate foreland found between Awatere River and White Bluffs in Marlborough, New Zealand. This foreland has ridges on the eastern and northern sides which face the prominent waves. In other circumstances, spits are formed when longshore drift moves beach material down the beach until the coastline makes an abrupt change in direction, leading to the beach material 'spilling over' the corner to create a protrusion. This normally occurs across a river mouth. In the case of a cuspate foreland, the prevailing wind and a powerful secondary wind in the opposite direction move shingle down the coastline from both directions to a place where the coastline changes, causing a foreland to develop. The majority of cuspate forelands are formed over a coastline that juts out into the sea at enough of an angle to allow the drifting beach material to 'spill over' as a result of longshore drift in both directions. Formation in the lee of an island A cuspate foreland can form in the lee of an island. In this case, oncoming waves are diffracted around the island, protecting the coastline from the oncoming wave fronts. Sediments brought along the shoreline via longshore drift are then able to settle and accumulate in the lee of the island where there is less wave energy. This type of foreland has formed on the west shore of the North Island of New Zealand, in the lee of Kapiti Island. 
Waves refract around Kapiti Island, forming an area of low wave energy where sediment from the Waikanae River is able to settle. There is uncertainty whether the cuspate foreland has formed as a result of sediments coming from the north via longshore drift, or whether it has formed as a result of a complex cycle of sediments moving out to the continental shelf and then back again. Formation along lakeshores As well as forming along coastlines, cuspate forelands can also form along lake shores, although less is known about this type of cuspate foreland. This type of cuspate foreland includes Point Pelee along the shoreline of Lake Erie, and those found along the shoreline of Lake Victoria in Australia. There are two theories with regard to the formation of Point Pelee. Firstly, it is thought that Point Pelee has formed from depositional processes. Alternatively, it is suggested that Point Pelee is a relic of a past feature that has eroded over time. This gap in knowledge provides the opportunity for further research. It is likely that Point Pelee is migrating westwards since accretion is occurring on the western side, and erosion is occurring on the eastern side. Lake Victoria in Australia also has a number of cuspate forelands. Point Scott is a cuspate foreland along this lakeshore that has formed from the gradual accumulation of sand and gravel. Features Cuspate forelands can be separated into three distinct areas: the central nose or apex, and two marginal wings. The apex usually has ridges that run parallel to the converging shorelines. Cuspate forelands can extend up to 5 km from the shoreline, and an underwater shoal may extend much further, up to 15 km from the exposed apex. Located between the mainland and the foreland are often lagoons or marshy areas. In some areas, such as along the North Carolina coastline, a series of cuspate forelands may form at least 100 km apart. In areas that have a large amount of shingle, such as the cuspate foreland at Dungeness, it is also common for a fresh water table to be present. Movement Once formed, cuspate forelands can remain where they are and continue to develop as sediment accumulates, or alternatively they may migrate down the coast as one side of the foreland erodes and the other side accretes. Cuspate Forelands that move are typical of those that are formed on open coastlines. The direction of migration is often indicated by a series of successive beach ridges on the advancing side of the foreland where there is less wave energy. The movement of cuspate forelands is commonly explained by longshore drift acting as the main process. However, there have been observed cases where two cuspate forelands on the same shoreline have migrated in opposite directions, showing that longshore drift does not always provide a sufficient explanation for their migration. If there is an offshore sandbank present, the position of the cuspate foreland is usually related to its position. If there is a change in the position of the sandbank, the position of the cuspate foreland typically follows. Not only does the sandbank act like an island since it causes waves to refract around it, but it also provides a source of sediment. As sand erodes from the sandbank, it is pushed towards the coastline, contributing to the formation of the cuspate foreland as the sandbank migrates along the coast. This often occurs in the opposite direction to longshore drift. 
In the case of a cuspate foreland that has formed close to an island, it is possible for it to extend right up to the island, forming a tombolo. Depending on the physical conditions such as storms, the feature can alternate between a cuspate foreland and a tombolo. Gabo Island in Victoria, Australia is an example of where this occurs. Succession After the formation of the cuspate foreland into its distinctive triangular shape, it will start to be colonised by pioneer species that are hardy and tough enough to survive in the environment. These pioneer species secure the cuspate foreland and allow a greater amount of sediment to further secure it. Colonization and succession of vegetation is dependent on a number of factors. Firstly, if the shingle is too coarse, the amount of fine sediment that can remain between the spaces is reduced, and the likelihood that seeds will germinate and grow upwards is low. Seeds will also fail to germinate and grow if there is insufficient retention of fresh water. Stable cuspate forelands that are composed of shingle often have vegetation above the high tide line. As vegetation is established, mites and collembolans break down plant matter such as roots, resulting in the accumulation of organic matter. Plants also cause the soil to develop and water retention to increase, therefore providing a habitat where more plants can grow. Biological habitat Cuspate forelands provide a habitat for various flora and fauna. If a foreland is relatively stable and experiences low wave impact, it may be possible for vegetation to grow. In the United Kingdom, 11 taxa of invertebrates are found on shingle habitats. Shingle beaches also provide a habitat for birds to breed, nest, and rest en route while migrating. Impacts and management There are different management issues with regard to cuspate forelands depending on their formation. If a cuspate foreland has formed from deposition, it may be vulnerable if human interference alters the transport of sediments from the shoreline. However, if the cuspate foreland is a relic of a past feature that has eroded, human interference with longshore sediment movement will not have a significant impact on the cuspate foreland. For a cuspate foreland to be maintained, the input of sediment must be greater than output of sediment. Activities such as coastal development or engineering must be regulated for sediment to continue moving towards the foreland where it can be deposited. Development along cuspate forelands is risky due to erosion and the vulnerability to storms and sea level rise. As sea levels rise, cuspate forelands are likely to be at risk as they could move inland. At Point Pelee, approximately 1,900 hectares of former agricultural land on the cuspate foreland is now under water as a result of wind erosion and compaction of organic soils on the foreland. This foreland is particularly vulnerable to erosion when high lake levels are combined with spring and autumn cyclonic activity. Erosion can also occur as spring storms cause ice to scour the lake bottom at the edge of the foreland. Because there is uncertainty about its formation, there is uncertainty with regard to management, although Parks Canada realises the importance of including Point Pelee National Park in management plans. When there is an aquifer present under a cuspate foreland, regulation of water removal is required. 
At Dungeness, water restrictions have been put in place to maintain the aquifer level. The management of coastlines needs to take into account the natural processes that occur on cuspate forelands since many provide a habitat for birds. Alternative ways of managing coastal erosion are needed, such as the use of 'soft' defences instead of high impact defences such as sea walls. Some cuspate forelands naturally do not contain any vegetation due to a high level of disturbance from physical factors such as wave action. However, with the increased frequency of storms arising from climate change, the effects on forelands and their associated vegetation need to be effectively managed. See also Integrated coastal zone management Beach evolution Longshore transport Point Pelee National Park References External links Dungeness, Romney Marsh Point Pelee National Park, Parks Canada Dungeness National Nature Reserve, Romney Marsh Countryside Project Cuspate Forelands at Lakes Entrance, Department of Primary Industries Coastal geography Geological processes Physical oceanography Coastal and oceanic landforms
Cuspate foreland
[ "Physics" ]
2,356
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
9,189,771
https://en.wikipedia.org/wiki/Inter-server
In computer network protocol design, inter-server communication is an extension of the client–server model in which data are exchanged directly between servers. In some fields, server-to-server (S2S) is used as an alternative term, and the term inter-domain can in some cases be used interchangeably. Protocols Protocols that have inter-server functions as well as the regular client–server communications include the following: IPsec, secure network protocol that can be used to secure a host-to-host connection; The domain name system (DNS), which uses an inter-server protocol for zone transfers; The Dynamic Host Configuration Protocol (DHCP); FXP, allowing file transfer directly between FTP servers; The Inter-Asterisk eXchange (IAX); InterMUD; IRC, an Internet chat system with an inter-server protocol allowing clients to be distributed across many servers; The Network News Transfer Protocol (NNTP); The Protocol for SYnchronous Conferencing (PSYC); SIP, a signaling protocol commonly used for Voice over IP; SILC, a secure Internet conferencing protocol; The Extensible Messaging and Presence Protocol (XMPP, formerly named Jabber); ActivityPub, a client/server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content; SMTP, which accepts both MUA-to-MTA and MTA-to-MTA traffic, although it is usually recommended that different ports be used for these two roles. Some of these protocols employ multicast strategies to efficiently deliver information to multiple servers at once. See also Overlay network IRC Network protocols References
Inter-server
[ "Technology" ]
359
[ "Computing stubs", "Computer network stubs" ]
9,190,231
https://en.wikipedia.org/wiki/Quad%20%28unit%29
A quad is a unit of energy equal to $10^{15}$ (a short-scale quadrillion) BTU, or $1.055\times10^{18}$ J (1.055 exajoules or EJ) in SI units. The unit is used by the U.S. Department of Energy in discussing world and national energy budgets. The global primary energy production in 2022 was 637.8 quad, i.e., 672.9 EJ. Conversion Some common quantities of energy carriers approximately equal to 1 quad are: 8,007,000,000 gallons (US) of gasoline 293,071,000,000 kWh 293.07 terawatt-hours (TWh) 33.434 gigawatt-years (GWy) 36,000,000 tonnes of coal 970,434,000,000 cubic feet of natural gas 5,996,000,000 UK gallons of diesel oil 25,200,000 tonnes of oil 252,000,000 tonnes of TNT or five times the energy of the Tsar Bomba nuclear test 12.69 tonnes of uranium-235 (with 83.14 TJ/kg) 6 seconds of sunlight reaching Earth See also Units of energy Orders of magnitude (energy) References Units of energy
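A quick Python sketch of the definition reproduces several of the equivalents listed above; the BTU-to-joule factor is the standard International Table value, and everything else follows by arithmetic.

```python
# Sketch of the quad's definition and a few of the equivalents listed above.
BTU_J = 1055.05585262                  # one International Table BTU, in joules

quad_J = 1e15 * BTU_J                  # 1 quad = 10^15 BTU ~ 1.055e18 J (1.055 EJ)
GW_YEAR_J = 1e9 * 365.25 * 24 * 3600   # one gigawatt-year, in joules

print(f"1 quad = {quad_J:.4g} J")                     # ~1.055e+18 J
print(f"       = {quad_J / 3.6e6:.4g} kWh")           # ~2.931e+11 kWh
print(f"       = {quad_J / 3.6e15:.4g} TWh")          # ~293.1 TWh
print(f"       = {quad_J / GW_YEAR_J:.4g} GW-years")  # ~33.43, matching 33.434
```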
Quad (unit)
[ "Mathematics" ]
256
[ "Quantity", "Units of energy", "Units of measurement" ]
9,190,532
https://en.wikipedia.org/wiki/Mark%20Norris%20%28technology%20writer%29
Mark Norris is a British consultant in the field of software engineering and telecommunications, noted for his writing on technology-related subjects. He gained a doctorate from the University of Glasgow in 1979 and has since worked in Australia, Europe, the Middle East and Japan in the telecommunications industry (working for some years for BT Group) and in academia (holding a position of visiting professor at the University of Ulster). Norris is a Fellow of the Institution of Engineering and Technology (formerly known as the Institution of Electrical Engineers). References Bibliography Norris, Mark; Rigby, Peter, Software Engineering Explained, John Wiley & Sons Ltd, Chichester, 1992. Norris, Rigby, Payne, The Healthy Software Project: A Guide to Successful Development and Management, John Wiley & Sons Ltd, Chichester, 1993. Norris, Mark, Survival in the Software Jungle, Artech House, 1995. Norris, Mark; Winton, Neil, Energize the Network: Distributed Computing Explained, Addison-Wesley, 1997. Norris, Mark; Frost, Andrew, Exploiting the Internet: Understanding and Exploiting an Investment in the Internet, John Wiley & Sons Ltd, Chichester, 1997. West, Steve; Norris, Mark, Media Engineering: A Guide to Developing Information Products, John Wiley & Sons Ltd, Chichester, 1997. Atkins, John; Norris, M, Total Area Networking, 2nd Edition, John Wiley & Sons Ltd, Chichester, 1999. Norris, Mark; Pretty, Steve, Designing the Total Area Network: Intranets, VPN's and Enterprise Networks Explained, John Wiley & Sons Ltd, Chichester, 1999. Bustard, Dave; Kawalek, Peter; Norris, Mark (Editors), Systems Modeling for Business Process Improvement, Artech House, 2000. Norris, M, Communications Technology Explained, John Wiley & Sons Ltd, Chichester, 2000. Norris, Mark; West, Steve, eBusiness essentials: technology and network requirements for mobile and online markets, 2nd Edition, John Wiley & Sons Ltd, Chichester, 2001. Norris, Mark, Mobile IP Technology for M-Business, Artech House, 2001. Norris, Mark, Gigabit Ethernet Technology and Applications, Artech House, 2002. British technology writers Norris, Mark (technology writer) Fellows of the Institution of Engineering and Technology Alumni of the University of Glasgow British Telecom people Year of birth missing (living people)
Mark Norris (technology writer)
[ "Engineering" ]
495
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
9,190,726
https://en.wikipedia.org/wiki/N%C3%A9ron%20model
In algebraic geometry, the Néron model (or Néron minimal model, or minimal model) for an abelian variety AK defined over the field of fractions K of a Dedekind domain R is the "push-forward" of AK from Spec(K) to Spec(R), in other words the "best possible" group scheme AR defined over R corresponding to AK. They were introduced by André Néron for abelian varieties over the quotient field of a Dedekind domain R with perfect residue fields, and the construction was later extended to semiabelian varieties over all Dedekind domains. Definition Suppose that R is a Dedekind domain with field of fractions K, and suppose that AK is a smooth separated scheme over K (such as an abelian variety). Then a Néron model of AK is defined to be a smooth separated scheme AR over R with generic fiber AK that is universal in the following sense. If X is a smooth separated scheme over R then any K-morphism from XK to AK can be extended to a unique R-morphism from X to AR (Néron mapping property). In particular, the canonical map $A_R(R) \to A_K(K)$ is an isomorphism. If a Néron model exists then it is unique up to unique isomorphism. In terms of sheaves, any scheme A over Spec(K) represents a sheaf on the category of schemes smooth over Spec(K) with the smooth Grothendieck topology, and this has a pushforward by the injection map from Spec(K) to Spec(R), which is a sheaf over Spec(R). If this pushforward is representable by a scheme, then this scheme is the Néron model of A. In general the scheme AK need not have any Néron model. For abelian varieties AK Néron models exist and are unique (up to unique isomorphism) and are commutative quasi-projective group schemes over R. The fiber of a Néron model over a closed point of Spec(R) is a smooth commutative algebraic group, but need not be an abelian variety: for example, it may be disconnected or a torus. Néron models exist as well for certain commutative groups other than abelian varieties such as tori, but these are only locally of finite type. Néron models do not exist for the additive group. Properties The formation of Néron models commutes with products. The formation of Néron models commutes with étale base change. An abelian scheme AR is the Néron model of its generic fibre. The Néron model of an elliptic curve The Néron model of an elliptic curve AK over K can be constructed as follows. First form the minimal model over R in the sense of algebraic (or arithmetic) surfaces. This is a regular proper surface over R but is not in general smooth over R or a group scheme over R. Its subscheme of smooth points over R is the Néron model, which is a smooth group scheme over R but not necessarily proper over R. The fibers in general may have several irreducible components, and to form the Néron model one discards all multiple components, all points where two components intersect, and all singular points of the components. Tate's algorithm calculates the special fiber of the Néron model of an elliptic curve, or more precisely the fibers of the minimal surface containing the Néron model. See also Minimal model program References W. Stein, What are Néron models? (2003) Algebraic geometry Number theory
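Written out, the Néron mapping property says that restriction to the generic fibre is a bijection on morphism sets; the display below is the standard formulation, with notation as in the definition above.

```latex
% Néron mapping property: restriction to the generic fibre is a bijection
% for every scheme X smooth and separated over R.
\[
  \operatorname{Hom}_R(X, A_R) \;\xrightarrow{\ \sim\ }\; \operatorname{Hom}_K(X_K, A_K).
\]
% Taking X = Spec(R) yields the canonical isomorphism A_R(R) -> A_K(K).
```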
Néron model
[ "Mathematics" ]
722
[ "Fields of abstract algebra", "Number theory", "Discrete mathematics", "Algebraic geometry" ]
9,191,733
https://en.wikipedia.org/wiki/GT%20Nexus
Infor Nexus (formerly known as GT Nexus) is an independent business unit of Infor LLC offering a multi-enterprise supply chain network. The on-demand global supply chain management platform and integrated applications are used worldwide by businesses to manage global direct procurement, supplier networks, global logistics and global trade processes. Founded in 1998 in Oakland, California, it merged with TradeCard in 2013, and in September 2015 GT Nexus was acquired by Infor. Infor Nexus operates in the Americas, Europe, and Asia with a focus on retail/apparel and industrial manufacturing. Customers include companies in pharmaceuticals, high-tech, automotive, CPG, apparel and footwear. Logistics service providers, financial service providers, and suppliers are also part of the Infor Nexus network. Its customers include Brooks Brothers, Sears, Adidas, Procter & Gamble, Del Monte Foods, Caterpillar Inc., Koch Industries, Abercrombie & Fitch, and Home Depot. History 1998 – Founded in Alameda, CA as Tradiant. 2001 – Renamed GT Nexus from Tradiant. 2008 – Acquired Metaship, a provider of logistics management technology. 2013 – Merged with TradeCard. The joint company employed about 1,000 people and served about 20,000 businesses in manufacturing, retail, and logistics. 2014 – Acquired Clear Abacus, a cloud-based solution that optimizes multimodal transportation planning. 2015 – Acquired by Infor, a technology company delivering industry-specific cloud suites. The deal, valued at $675 million, closed on September 21, 2015. 2018 – GT Nexus launched a new global trade management platform. 2019 – GT Nexus was relaunched as Infor Nexus. Products Infor Nexus products are used by importers, exporters, logistics providers, and financial institutions to manage the flow of inventory, transactions, and information related to global trade. All capabilities are delivered in the cloud with a subscription pricing model. The platform includes: Supply Chain Visibility Supply Chain Intelligence Factory Management Transportation Management Inventory Management Supply Collaboration Procure-to-pay Supply Chain Finance Competitors include SAP, Descartes, Oracle, and IBM. See also Shipping portal Supply-chain management Supply chain management software Supply chain network Transportation management system Vendor relationship management References External links Official Site Supply chain software companies Software companies based in California Companies based in Oakland, California Software companies established in 1998 ERP software companies Service-oriented (business computing) Cloud platforms Business software companies Defunct software companies of the United States
GT Nexus
[ "Technology" ]
513
[ "Cloud platforms", "Computing platforms" ]
9,192,883
https://en.wikipedia.org/wiki/Compound%20heterozygosity
In medical genetics, compound heterozygosity is the condition of having two or more heterogeneous recessive alleles at a particular locus that can cause genetic disease in a heterozygous state; that is, an organism is a compound heterozygote when it has two recessive alleles for the same gene, but with those two alleles being different from each other (for example, both alleles might be mutated but at different locations). Compound heterozygosity reflects the diversity of the mutation base for many autosomal recessive genetic disorders; mutations in most disease-causing genes have arisen many times. This means that many cases of disease arise in individuals who have two unrelated alleles, who technically are heterozygotes, but both the alleles are defective. These disorders are often best known in some classic form, such as the homozygous recessive case of a particular mutation that is widespread in some population. In its compound heterozygous forms, the disease may have lower penetrance, because the mutations involved are often less deleterious in combination than for a homozygous individual with the classic symptoms of the disease. As a result, compound heterozygotes often become ill later in life, with less severe symptoms. Although compound heterozygosity as a cause of genetic disease had been suspected much earlier, widespread confirmation of the phenomenon was not feasible until the 1980s, when polymerase chain reaction techniques for amplification of DNA made it cost-effective to sequence genes and identify polymorphic alleles. Cause Compound heterozygosity is one of the causes of variation in genetic disease. The diagnosis and nomenclature for such disorders sometimes reflects history, because most diseases were first observed and classified based on biochemistry and pathophysiology before genetic diagnosis was available. Some genetic disorders are really a family of related disorders that occur in the same metabolic pathway, or in related pathways. Naming conventions for the disease became established before precise molecular diagnosis was possible. For example, hemochromatosis is the name given to several different heritable diseases with the same outcome, excess absorption of iron. These variants all reflect a failure in a metabolic pathway associated with iron metabolism, however mutations that cause hemochromatosis can occur at different gene loci. Mutations have occurred at each locus many times, and a few such mutations have become widespread in some population. The fact that multiple loci are involved is the primary cause for the variant forms of hemochromatosis and its outcome. This variation is caused not by compound heterozygosity, but rather by the fact that several different enzyme defects can cause the disease. Clinically, most cases of hemochromatosis are found in homozygotes for the most common mutation in the HFE gene. But at each gene locus associated with the disease, there is the possibility of compound heterozygosity, often caused by inheritance of two unrelated alleles, of which one is a common or classic mutation, while the other is a rare or even novel one. For some genetic diseases, environmental cofactors are an important determinant of variation and outcome. In the case of hemochromatosis, penetrance is incomplete, even for the classic HFE mutation, and is affected by gender, diet, and behaviors such as alcohol consumption. Compound heterozygotes are often observed only through subclinical symptoms such as excess iron. 
Disease is rarely observed in such compound heterozygotes unless other causal factors (such as alcoholism) are present. As a result, compound heterozygosity for hemochromatosis may be more common than diagnosis based on pathology would suggest. Some genetic diseases are named more precisely, and represent a single point of failure on a metabolic pathway. For example, Tay–Sachs disease, GM2-gangliosidosis, AB variant, and Sandhoff disease might easily have been defined together as a single disease, because the three disorders are associated with failure of the same enzyme and have the same outcome. However, the three were discovered and named separately, and each represents a distinct molecular point of failure in a subunit that is required for activation of the enzyme. For all three disorders, compound heterozygosity is responsible for variant forms. For example, both TSD and Sandhoff disease have a more common infantile form and several late-onset variants. Post-infantile forms, which are rare, are generally caused by the inheritance of two unrelated alleles, of which one is usually a classic mutation, while the other is a rare or even novel one. Examples Phenylketonuria. Because phenylketonuria was the first genetic disorder for which mass post-natal genetic screening was available, beginning in the early 1960s, atypical cases were detected almost immediately. Molecular analysis of the genome was not yet possible, but protein sequencing revealed cases caused by compound heterozygosity. As molecular genomic techniques became available in the 1980s and 1990s, it became possible to explain a range of disorders in heterozygotes carrying one copy of one of the classic mutations for phenylketonuria. Tay–Sachs disease. In addition to its classic infantile form, Tay–Sachs disease may present in juvenile or adult onset forms, often as the result of compound heterozygosity between two alleles, one that causes the classic infantile disease in homozygotes and another that allows some residual HEXA enzyme activity. Sickle cell syndromes. A variety of sickle cell disorders result from inheritance of the sickle cell gene in a compound heterozygous manner with other mutant beta globin genes. These disorders include sickle cell-beta thalassemia. An individual with one allele for hemoglobin S and one allele for hemoglobin C still develops a sickle cell disorder, despite carrying only one copy of each variant, because the two different mutant alleles of the same beta globin gene make the individual a compound heterozygote. References Genetics Autosomal recessive disorders
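The definition can be illustrated with a short sketch. This is a minimal, hypothetical classifier for a two-allele genotype at a single locus; the allele labels and the `classify_genotype` helper are illustrative inventions, not part of any clinical pipeline:

```python
# Hypothetical illustration of compound heterozygosity: a genotype at one
# locus is a pair of alleles; "REF" denotes the normal (wild-type) allele,
# anything else a pathogenic variant. Labels below are illustrative only.

def classify_genotype(allele1: str, allele2: str) -> str:
    """Classify a two-allele genotype at a single locus."""
    variants = [a for a in (allele1, allele2) if a != "REF"]
    if len(variants) == 0:
        return "homozygous wild type"
    if len(variants) == 1:
        return "simple heterozygote (carrier)"
    if allele1 == allele2:
        return "homozygote for one variant (classic form)"
    # Two *different* pathogenic alleles of the same gene:
    return "compound heterozygote"

print(classify_genotype("REF", "variantA"))       # carrier
print(classify_genotype("variantA", "variantA"))  # classic homozygous form
print(classify_genotype("variantA", "variantB"))  # compound heterozygote
```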
Compound heterozygosity
[ "Biology" ]
1,245
[ "Genetics" ]
9,192,926
https://en.wikipedia.org/wiki/3Dconnexion
3Dconnexion is a German manufacturer of human interface devices for manipulating and navigating computer-generated 3D imagery. These devices are often referred to as 3D motion controllers, 3D navigation devices, 6DOF devices (six degrees of freedom) or a 3D mouse. Commonly utilized in CAD applications, 3D modeling, animation, 3D visualization and product visualization, these devices let users manipulate a pressure-sensitive handle (historically referred to as either a cap, ball, mouse or knob) to fly through 3D environments or manipulate 3D models within an application. The appeal of these devices over a mouse and keyboard is the ability to pan, zoom and rotate 3D imagery simultaneously, without stopping to change directions using keyboard shortcuts or a software interface. 3Dconnexion devices are compatible with over 300 applications including Autodesk Inventor, Autodesk Fusion 360, AutoCAD, Siemens NX, CATIA, SOLIDWORKS, PTC Creo, Solid Edge, Blender, Rhinoceros, Revit, SketchUp, Unreal Engine, Unity, Cinema 4D, 3ds Max, Maya, Google Earth, Second Life, NASA World Wind, Virtual Earth 3D, Geomagic, T-FLEX CAD, Photoshop, and more. Products CadMouse Pro Wireless, CadMouse Pro Wireless Left, CadMouse Pro CadMouse Compact Wireless, CadMouse Compact SpaceMouse Enterprise SpaceMouse Pro Wireless, SpaceMouse Pro SpaceMouse Wireless, SpaceMouse Compact Keyboard Pro with Numpad Discontinued products: SpaceNavigator, SpaceNavigator for Notebook SpaceExplorer SpacePilot, SpacePilot Pro SpaceTraveler SpaceBall Magellan/SpaceMouse Classic/Plus/XT serial or USB History 3Dconnexion was formed in September 2001 by Logitech, combining LogiCAD3D, based in Europe, and Labtec's 3D peripheral business, based in the United States. At the time, the two companies combined had over 20 years of experience in 3D input devices. LogiCAD3D's product, the Magellan controller, was used in fields such as automotive design and aerospace. A NASA project used a Magellan product to control a robot in space. The SpaceBall also had a history in space, having been used to remotely drive the Sojourner robot on Mars. References External links 3Dconnexion.com Free software driver and SDK for Linux 3Dconnexion section on Spacemice.org Human–computer interaction Companies based in San Jose, California Video game control methods Logitech
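The simultaneous pan/zoom/rotate idea can be sketched in a few lines. This is a generic illustration of mapping six axis values onto a camera each frame; the camera representation and axis values are made up, and real 3Dconnexion devices are read through the vendor SDK or OS HID drivers, not this code:

```python
# Sketch: applying one frame of 6DOF input to a simple camera state.
# axes = (tx, ty, tz, rx, ry, rz), each normalized to [-1, 1].
import math

def apply_6dof(camera: dict, axes: tuple, dt: float) -> None:
    tx, ty, tz, rx, ry, rz = axes
    pan_speed, zoom_speed, rot_speed = 2.0, 3.0, math.radians(90)
    camera["x"] += tx * pan_speed * dt      # pan left/right
    camera["y"] += ty * pan_speed * dt      # pan up/down
    camera["z"] += tz * zoom_speed * dt     # zoom in/out
    camera["pitch"] += rx * rot_speed * dt  # all six applied together:
    camera["yaw"]   += ry * rot_speed * dt  # no mode switching between
    camera["roll"]  += rz * rot_speed * dt  # pan, zoom and rotate

camera = {"x": 0.0, "y": 0.0, "z": 10.0, "pitch": 0.0, "yaw": 0.0, "roll": 0.0}
apply_6dof(camera, (0.1, 0.0, -0.5, 0.0, 0.2, 0.0), dt=1 / 60)
print(camera)
```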
3Dconnexion
[ "Engineering" ]
547
[ "Human–computer interaction", "Human–machine interaction" ]
9,193,086
https://en.wikipedia.org/wiki/Skip%20counting
Skip counting is a mathematics technique taught as a kind of multiplication in reform mathematics textbooks such as TERC. In older textbooks, this technique is called counting by twos (threes, fours, etc.). In skip counting by twos, a person can count to 10 by naming only every second number: 2, 4, 6, 8, 10. Combining the base (two, in this example) with the number of groups (five, in this example) produces the standard multiplication equation: two multiplied by five equals ten. References Mathematics education Multiplication
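A minimal sketch of the link to multiplication described above (the function name is an arbitrary choice):

```python
# Skip counting by a base: the k-th number named is base * k, so the
# last number named after `groups` steps is base * groups.
def skip_count(base: int, groups: int) -> list[int]:
    return [base * k for k in range(1, groups + 1)]

print(skip_count(2, 5))                 # [2, 4, 6, 8, 10]
assert skip_count(2, 5)[-1] == 2 * 5    # two multiplied by five equals ten
```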
Skip counting
[ "Mathematics" ]
115
[ "Mathematical objects", "Numbers", "Number stubs" ]
9,193,185
https://en.wikipedia.org/wiki/Countryman%20line
In mathematics, a Countryman line (named after Roger Simmons Countryman Jr.) is an uncountable linear ordering whose square is the union of countably many chains. The existence of Countryman lines was first proven by Shelah. Shelah also conjectured that, assuming PFA, every Aronszajn line contains a Countryman line. This conjecture, which remained open for three decades, was proven by Justin Moore. References Roger S. Countryman, Jr. Spaces having a -monotone base. Preprint, 1970. Order theory Infinity Set theory
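Stated symbolically, as a direct restatement of the definition above (with the square ordered coordinatewise):

```latex
% A Countryman line is an uncountable linear order (C, <) whose square,
% ordered coordinatewise, is a countable union of chains:
\[
  C^{2} = \bigcup_{n \in \omega} K_{n}, \qquad
  K_{n} \text{ a chain under } (a,b) \le (c,d) \iff a \le c \text{ and } b \le d.
\]
```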
Countryman line
[ "Mathematics" ]
118
[ "Set theory", "Mathematical logic", "Mathematical objects", "Infinity", "Mathematical logic stubs", "Order theory" ]
9,193,588
https://en.wikipedia.org/wiki/Castor%20and%20Pollux%20%28Prado%29
The Castor and Pollux group (also known as the San Ildefonso Group, after San Ildefonso in Segovia, Spain, the location of the palace of La Granja at which it was kept until 1839) is an ancient Roman sculptural group of the 1st century AD, now in the Museo del Prado, Madrid. Drawing on 5th- and 4th-century BC Greek sculptures in the Praxitelean tradition, such as the Apollo Sauroctonos and the "Westmacott Ephebe", and without copying any single known Greek sculpture, it shows two idealised nude youths, both wearing laurel wreaths. The young men lean against each other, and to their left on an altar is a small female figure, usually interpreted as a statue of a female divinity. She holds a sphere, variously interpreted as an egg or pomegranate. The group is 161 cm high and is now accepted as portraying Castor and Pollux. Identification The left-hand figure was originally headless but was restored in the 17th century, the heyday of interpretive restorations, by Ippolito Buzzi, when the sculpture was in the collection of Cardinal Ludovico Ludovisi, using a Hadrianic-era (ca. 130) bust of Antinous of the Apollo-Antinous type from another statue. The identification of the figures inspired many choices of male pairs during the 17th and 18th centuries. During the 19th century, it became known as "Antinous and Hadrian's genius", to get over the problem of their both being youths, whereas historically it was an important feature of Antinous' relationship with Hadrian that Antinous was a youthful eromenos and Hadrian an elder erastes. Alternatively "Antinous and a sacrificial daemon" was suggested (in reference to the myth that Antinous had killed himself as a sacrifice to lengthen Hadrian's life), or simply Antinous and Hadrian pledging their fidelity to one another. Other alternative identifications in the past have included: Hypnos and Thanatos, interpreting the sphere as a pomegranate, symbol of death Corydon and Alexis Winckelmann's suggestion of Orestes and Pylades offering a sacrifice to the statue of the goddess Artemis, which they wanted to seize, or in front of the tomb of the murdered Agamemnon. Winckelmann was the first to publish the sculpture, in Monumenti Antichi Inediti 1767, pp. xxi–xxii. All these identifications are now thought to be erroneous and simply due to the figure's restoration as Antinous: the group is now accepted as Castor and Pollux, offering a sacrifice to Persephone. Such an identification is based on the right-hand figure, who holds two torches, one downturned (on a flower-wreathed altar) and one upturned (behind his back), and on identifying the woman's sphere as an egg (like that from which the Dioscuri were born). The interpretation was supported by Goethe, who owned a cast of the group. Some scholars assert that the statue group was originally created by the ancient sculptor Pasiteles. Style The work is an outstanding example of neo-Attic eclecticism frequent at the end of the Roman Republic and during the first decades of the Roman Empire, around the Augustan period, combining two different aesthetic streams: whilst the right-hand youth is Polyclitean, the left-hand one is in a softer, more sensual and Praxitelean style. History Its find site is unknown, but by 1623 it was in the Ludovisi collection at the Villa Ludovisi in Rome, where the Ludovisi restorer, the sculptor Ippolito Buzzi (1562–1634), restored it that year. Nicolas Poussin saw it in the Ludovisi collection or in that of Cardinal Camillo Massimo, who owned it later.
Poussin's sketch was not intended as a faithful representation of the sculpture, but to be stored and referred to, as part of his visual repertory of antiquities, which was extensive and which made its presence felt in most of his paintings. In his sketch of the San Ildefonso group Poussin made minor adjustments to the poses, but his major change was in transforming the lithe adolescents into more muscular athletes or heroes. Its reputation soon spread and shortly after 1664 it was acquired by Queen Christina of Sweden to join the large art collection that she gathered during her stay in Rome. The ancient sculptures in that collection were transferred to the Odescalchi who, in 1724, offered this group to Philip V of Spain. Philip's second wife Isabella Farnese (from the Farnese of Parma, which had a history of sculpture collecting) acquired it at above-market price for him and had it sent to the Palace of La Granja de San Ildefonso (Segovia). From there it came into the Prado (catalogue number E.28). Copies The erroneous identification with Antinous generated high interest in the sculpture, with large numbers of copies being produced, largely made in Italy and Northern Europe and based on plaster casts rather than made in Spain and based on the original there. These inevitably stoked this interest by obscuring the fact that the Antinous head was in fact a restoration, instead smoothing the two into a meaningful whole (as did the casts on which they were based). Notes References Main site about copies of the Ildefonso (or Castor & Pollux) group: http://www.antinoos.info/copies1.htm Copies Caroline Vout, Antinous: The Face of the Antique (Henry Moore Sculpture Trust, 2006), p. 83 Jules David Prown, 'Benjamin West and the Use of Antiquity' (American Art, Vol. 10, No. 2 – Summer, 1996), pp. 28–49 Bryn Mawr Classical Review 2006.02.36 – Stephan F. Schröder, Katalog der antiken Skulpturen des Museo del Prado in Madrid. Vol. 2: Idealplastik. Mainz am Rhein: von Zabern, 2004. Pp. xii, 537. Francis Haskell and Nicholas Penny, Taste and the Antique (Yale University Press) 1981. Viktor Rydberg, Romerska dagar (Roman Days, 1877) John Addington Symonds, Excerpts from "Antinous," in 'Sketches and Studies in Italy and Greece' – "could we but understand [the group's] meaning clearly, the mystery of Antinous would be solved" External links Prado link (English) Copies Image Copy by Joseph Nollekens at the Victoria and Albert Museum Another view (large) (Flickr) Back view (large) (Flickr) (The Flickr images were taken after the statue was moved to the British Galleries; previously it had stood against a wall, preventing its back from being photographed) Re-created using 3D computer generated characters as part of 'Classics' by Beverley Hood, 2001 1st-century Roman sculptures 1623 archaeological discoveries Sculptures in the Museo del Prado Sculptures of men in Spain Sculptures of Greek mythology Neo-Attic sculptures Statues mistaken for Antinous Castor and Pollux
Castor and Pollux (Prado)
[ "Astronomy" ]
1,529
[ "Castor and Pollux", "Astronomical myths" ]
9,194,082
https://en.wikipedia.org/wiki/Collybia%20personata
Collybia personata (also recognised as Lepista personata, Lepista saeva, Clitocybe saeva and Tricholoma personatum, and commonly known as the field blewit and blue-leg) is a species of edible fungus commonly found growing in grassy areas across Europe and is morphologically similar to the wood blewit Collybia nuda (formerly Lepista nuda). This mushroom was moved to the genus Collybia in 2023. Taxonomy This species was originally described by Elias Fries in 1818, as Agaricus personatus. In 1871 Cooke proposed another name that remained in use until recently, Lepista personata. Other names were to follow, namely Lepista saeva by P.D. Orton in 1960 and Clitocybe saeva by H.E. Bigelow & A.H. Smith in 1969, the latter placing the fungus in the larger genus Clitocybe. In Latin, the specific epithet sævus is an adjective meaning either fierce, outrageous, angry or strong. Likewise, personatus is a participle meaning disguised, pretended or false. Description The fruiting body of the mushroom resembles an agaric. The cap is at first hemispherical or convex, becoming almost flat with maturity, up to 16 cm in diameter. The cap cuticle is colored cream to light brown with a smooth texture to the touch, and is often seen glistening when fresh. Along the periphery, the cap ends in a thick incurved margin which may unfold as the mushroom expands. The white to pallid flesh is thick, firm and delicate upon slicing. The underside of the cap bears crowded pinkish, cream to light brown gills, which are free or emarginate in relation to the stem. The stem itself is cylindrical with a bulbous or sometimes tapering base, and does not bear a ring. The stem is covered by a striking lavender or lilac-coloured fibrous skin which fades in older individuals, and has a thick, firm flesh concolorous with that of the cap. It is up to 6–7 cm tall and 2.5–3 cm in diameter. Under a light microscope, the spores appear hyaline to pink, ellipsoid in shape, and with fine warts. The spore dimensions are 6–8 by 4–5 μm. C. personata produces a pale pink spore print. Distribution and habitat Collybia personata is found fruiting in open grasslands, parks, pastures, forest clearings, and in the vicinity of forest edges, unlike Collybia nuda which is commonly found in woodland. Collybia personata fruits gregariously, forming distinctive fairy rings. Its fruiting season extends from summer to the beginning of winter, and the species is widespread in Europe. In the UK, the season extends from September through to December. It has also reportedly been found in California in North America. The California field blewit has also been described as Clitocybe tarda. Edibility Field blewits are edible. Blewits can be eaten in a cream sauce or sautéed in butter; they can also be cooked like tripe or as an omelette filling. Field blewits are often infested with fly larvae and do not store well; they should therefore be used soon after picking. They are also very porous, so they are best picked on a dry day. References External links "Mushroom-Collecting.com - The Blewit" All that Rain Promises and More - Blewit Edible fungi personata Tricholomataceae Fungi of Europe Fungi found in fairy rings Taxa named by Elias Magnus Fries Fungus species
Collybia personata
[ "Biology" ]
758
[ "Fungi", "Fungus species" ]
9,194,453
https://en.wikipedia.org/wiki/Danish%20oil
Danish oil is a wood finishing oil, often made of tung oil or polymerized linseed oil. Because there is no defined formulation, its composition varies among manufacturers. Danish oil is a hard drying oil, meaning it can polymerize into a solid form when it reacts with oxygen in the atmosphere. It can provide a hard-wearing, often water-resistant satin finish, or serve as a primer on bare wood before applying paint or varnish. It is a "long oil" finish, a mixture of oil and varnish, typically around one-third varnish and the rest oil. Uses When applied in coats over wood, Danish oil cures to a hard satin finish that resists liquid well. As the finished coating is not glossy or slippery, it is a suitable finish for items such as food utensils or tool handles, giving some additional water resistance and leaving a darker finish on the wood. Special dyed grades are available if wood staining is also needed. Application Compared to varnish, it is simple to apply: usually a course of three coats by brush or cloth, with any excess wiped off shortly after application. The finish is left to dry for around 4–24 hours between coats, depending on the mixture being used and the wood being treated. Danish oil provides a coverage of approximately 12.5 m2/L (600 sq. ft./gallon). Spontaneous combustion Rags used for Danish oil, like those used for linseed oil, have some potential risk of spontaneous combustion and starting fires from exothermic oxidation, so it is best to dry rags flat before disposing of them, or else to soak them in water. See also Tung oil References Varnishes Oils Painting materials Vegetable oils Wood finishing materials
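A quick worked example using the coverage figure quoted above (roughly 12.5 m² per litre per coat); the surface area and coat count below are made-up inputs, not manufacturer guidance:

```python
# Estimate how much Danish oil a job needs from the quoted coverage figure.
COVERAGE_M2_PER_L = 12.5  # approximate, per coat, as quoted above

def litres_needed(area_m2: float, coats: int = 3) -> float:
    return area_m2 * coats / COVERAGE_M2_PER_L

print(round(litres_needed(10.0), 2))  # 10 m^2, three coats -> 2.4 L
```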
Danish oil
[ "Chemistry" ]
354
[ "Varnishes", "Carbohydrates", "Coatings", "Oils" ]
9,194,777
https://en.wikipedia.org/wiki/Open%20Base%20Station%20Architecture%20Initiative
The Open Base Station Architecture Initiative (OBSAI) was a trade association created by Hyundai, LG Electronics, Nokia, Samsung and ZTE in September 2002 with the aim of creating an open market for cellular network base stations. The hope was that an open market would reduce the development effort and costs traditionally associated with creating base station products. Goal The OBSAI specifications provided the architecture, function descriptions and minimum requirements for integration of a set of common modules into a base transceiver station (BTS). It: defined an internal modular structure of wireless base stations. defined a set of standard BTS modules with specified form, fit and function such that BTS vendors could acquire and integrate modules from multiple vendors in an OEM fashion. defined internal digital interfaces between BTS modules to ensure interoperability and compatibility. supported different access technologies such as GSM, Enhanced Data Rates for GSM Evolution (EDGE), CDMA2000, WCDMA or IEEE 802.16 (marketed as WiMAX). This was intended to provide the BTS integrator with flexibility. A version 2.0 system reference document was published in 2006. BTS structure The OBSAI Reference Architecture defines four functional blocks, interfaces between them, and requirements for external interfaces. Functional blocks A base transceiver station (BTS) has four main blocks or logical entities: Radio Frequency (RF) block, Baseband block, Control and Clock block, and Transport block. The Radio Frequency Block sends and receives signals to/from portable devices (via the air interface) and converts between digital data and antenna signal. Some of the main functions are D/A and A/D conversion, up/down conversion, carrier selection, linear power amplification, diversity transmit and receive, RF combining and RF filtering. The Baseband Block processes the baseband signal. The functions include encoding/decoding, ciphering/deciphering, frequency hopping (GSM), spreading and Rake receiver (WCDMA), MAC (WiMAX), protocol frame processing, MIMO etc. The Transport Block interfaces to the external network, and provides functions such as QoS, security functions and synchronization. Coordination between these three blocks is maintained by the Control and Clock Block. Internal interfaces Internal interfaces between the functional blocks are called reference points (RP). RP1 is the interface that allows communication between the control block and the other three blocks. It includes control and clock signals. The RP1 specification also defines UDPCP, a UDP-based reliable communication protocol. A version 2.1 of the reference point 1 interface was published in 2008. RP2 provides a link between the transport and baseband blocks. Version 2.1 of the reference point 2 interface was published in 2008. RP3 is the interface between the baseband block and the RF block. RP3-01 is an (alternate) interface between the Local Converter and the Remote RF block. Version 4.2 of the reference point 3 interface was published in 2010. RP4 provides the DC power interface between the internal modules and DC power sources. Version 1.1 of the reference point 4 interface was published in 2010. Most of the industry at the time revolved around achieving lower cost RF modules and power amplifiers (PA), as these two components usually accounted for nearly 50 percent of the BTS cost. Consequently, OBSAI worked to define reference point 3 (RP3) before the other reference points, to promote more competitive sources in the RF module and PA market.
External interfaces The Transport Block provides the external network interface to the operator network. Examples are: Iub to the Radio Network Controller (RNC) for 3GPP systems, R6 to the Access Services Network Gateway (centralized Gateway) or R3 to the Connectivity Services Network (CSN) for WiMAX systems. The RF Block provides the external radio interface to subscriber devices. Examples are Uu or Um to the user equipment (UE) for 3GPP systems or R1 for WiMAX. See also Common Public Radio Interface (CPRI), an alternative, competing standard. References Mobile telecommunications Network access
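The block-and-reference-point structure described above can be summarized as a simple mapping. This is only a restatement of the text for orientation, not part of any OBSAI software interface:

```python
# Summary of OBSAI reference points and the endpoints they connect,
# as described in the architecture above.
REFERENCE_POINTS = {
    "RP1": ("Control and Clock block", "RF / Baseband / Transport blocks"),
    "RP2": ("Transport block", "Baseband block"),
    "RP3": ("Baseband block", "RF block"),
    "RP3-01": ("Local Converter", "Remote RF block"),  # alternate interface
    "RP4": ("DC power sources", "internal modules"),   # DC power interface
}

for rp, (a, b) in REFERENCE_POINTS.items():
    print(f"{rp}: {a} <-> {b}")
```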
Open Base Station Architecture Initiative
[ "Technology", "Engineering" ]
837
[ "Mobile telecommunications", "Electronic engineering", "Network access" ]
9,194,858
https://en.wikipedia.org/wiki/Area%20of%20a%20triangle
In geometry, calculating the area of a triangle is an elementary problem encountered often in many different situations. The best known and simplest formula is $T = \tfrac{1}{2}bh$, where b is the length of the base of the triangle, and h is the height or altitude of the triangle. The term "base" denotes any side, and "height" denotes the length of a perpendicular from the vertex opposite the base onto the line containing the base. Euclid proved that the area of a triangle is half that of a parallelogram with the same base and height in his book Elements in 300 BCE. In 499 CE Aryabhata used this method in the Aryabhatiya (section 2.6). Although simple, this formula is only useful if the height can be readily found, which is not always the case. For example, the land surveyor of a triangular field might find it relatively easy to measure the length of each side, but relatively difficult to construct a 'height'. Various methods may be used in practice, depending on what is known about the triangle. Other frequently used formulas for the area of a triangle use trigonometry, side lengths (Heron's formula), vectors, coordinates, line integrals, Pick's theorem, or other properties. History Heron of Alexandria found what is known as Heron's formula for the area of a triangle in terms of its sides, and a proof can be found in his book, Metrica, written around 60 CE. It has been suggested that Archimedes knew the formula over two centuries earlier, and since Metrica is a collection of the mathematical knowledge available in the ancient world, it is possible that the formula predates the reference given in that work. In 300 BCE Greek mathematician Euclid proved that the area of a triangle is half that of a parallelogram with the same base and height in his book Elements of Geometry. In 499 Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, expressed the area of a triangle as one-half the base times the height in the Aryabhatiya. A formula equivalent to Heron's was discovered by the Chinese independently of the Greeks. It was published in 1247 in Shushu Jiuzhang ("Mathematical Treatise in Nine Sections"), written by Qin Jiushao. Using trigonometry The area of a triangle can be found through the application of trigonometry. Knowing SAS (side-angle-side) Labelling the vertices A, B and C, with opposite sides a, b and c respectively, the height on base b is $h = a \sin \gamma$. Substituting this in the area formula derived above, the area of the triangle can be expressed as: $T = \tfrac{1}{2} a b \sin \gamma$. Where: a is the line BC, b is the line AC, c is the line AB; and: α is the interior angle at A, β is the interior angle at B, γ is the interior angle at C. Furthermore, since sin α = sin (π − α) = sin (β + γ), and similarly for the other two angles: $T = \tfrac{1}{2} a b \sin \gamma = \tfrac{1}{2} b c \sin \alpha = \tfrac{1}{2} c a \sin \beta$. Knowing AAS (angle-angle-side) $T = \frac{b^{2} \sin \alpha \sin \gamma}{2 \sin \beta}$, and analogously if the known side is a or c. Knowing ASA (angle-side-angle) $T = \frac{a^{2}}{2(\cot \beta + \cot \gamma)} = \frac{a^{2} \sin \beta \sin \gamma}{2 \sin(\beta + \gamma)}$, and analogously if the known side is b or c. Using side lengths (Heron's formula) A triangle's shape is uniquely determined by the lengths of the sides, so its metrical properties, including area, can be described in terms of those lengths. By Heron's formula, $T = \sqrt{s(s-a)(s-b)(s-c)}$, where $s = \tfrac{1}{2}(a+b+c)$ is the semiperimeter, or half of the triangle's perimeter. Three other equivalent ways of writing Heron's formula are $T = \tfrac{1}{4}\sqrt{(a^{2}+b^{2}+c^{2})^{2} - 2(a^{4}+b^{4}+c^{4})}$, $T = \tfrac{1}{4}\sqrt{2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2}) - (a^{4}+b^{4}+c^{4})}$, and $T = \tfrac{1}{4}\sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}$. Formulas resembling Heron's formula Three formulas have the same structure as Heron's formula but are expressed in terms of different variables.
First, denoting the medians from sides a, b, and c respectively as $m_a$, $m_b$, and $m_c$ and their semi-sum $\tfrac{1}{2}(m_a + m_b + m_c)$ as σ, we have $T = \tfrac{4}{3}\sqrt{\sigma(\sigma - m_a)(\sigma - m_b)(\sigma - m_c)}$. Next, denoting the altitudes from sides a, b, and c respectively as $h_a$, $h_b$, and $h_c$, and denoting the semi-sum of the reciprocals of the altitudes as $H = \tfrac{1}{2}(h_a^{-1} + h_b^{-1} + h_c^{-1})$, we have $T^{-1} = 4\sqrt{H(H - h_a^{-1})(H - h_b^{-1})(H - h_c^{-1})}$. And denoting the semi-sum of the angles' sines as $S = \tfrac{1}{2}(\sin\alpha + \sin\beta + \sin\gamma)$, we have $T = D^{2}\sqrt{S(S - \sin\alpha)(S - \sin\beta)(S - \sin\gamma)}$, where D is the diameter of the circumcircle: $D = \tfrac{a}{\sin\alpha} = \tfrac{b}{\sin\beta} = \tfrac{c}{\sin\gamma}$. Using vectors The area of triangle ABC is half of the area of a parallelogram: $T = \tfrac{1}{2}\,|(\mathbf{b} - \mathbf{a}) \wedge (\mathbf{c} - \mathbf{a})|$, where $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ are vectors to the triangle's vertices from any arbitrary origin point, so that $\mathbf{b} - \mathbf{a}$ and $\mathbf{c} - \mathbf{a}$ are the translation vectors from vertex $\mathbf{a}$ to each of the others, and $\wedge$ is the wedge product. If vertex $\mathbf{a}$ is taken to be the origin, this simplifies to $\tfrac{1}{2}|\mathbf{b} \wedge \mathbf{c}|$. The oriented relative area of a parallelogram in any affine space, a type of bivector, is defined as $\mathbf{u} \wedge \mathbf{v}$, where $\mathbf{u}$ and $\mathbf{v}$ are translation vectors from one vertex of the parallelogram to each of the two adjacent vertices. In Euclidean space, the magnitude of this bivector is a well-defined scalar number representing the area of the parallelogram. (For vectors in three-dimensional space, the bivector-valued wedge product has the same magnitude as the vector-valued cross product, but unlike the cross product, which is only defined in three-dimensional Euclidean space, the wedge product is well-defined in an affine space of any dimension.) The area of triangle ABC can also be expressed in terms of dot products. Taking vertex $\mathbf{a}$ to be the origin and calling the translation vectors to the other vertices $\mathbf{b}$ and $\mathbf{c}$, $T = \tfrac{1}{2}\sqrt{\lVert\mathbf{b}\rVert^{2}\,\lVert\mathbf{c}\rVert^{2} - (\mathbf{b} \cdot \mathbf{c})^{2}}$, where $\lVert\mathbf{v}\rVert^{2} = \mathbf{v} \cdot \mathbf{v}$ for any Euclidean vector $\mathbf{v}$. This area formula can be derived from the previous one using the elementary vector identity $|\mathbf{u} \wedge \mathbf{v}|^{2} = \lVert\mathbf{u}\rVert^{2}\lVert\mathbf{v}\rVert^{2} - (\mathbf{u} \cdot \mathbf{v})^{2}$. In two-dimensional Euclidean space, for a vector $\mathbf{u}$ with coordinates $(u_1, u_2)$ and a vector $\mathbf{v}$ with coordinates $(v_1, v_2)$, the magnitude of the wedge product is $|u_1 v_2 - u_2 v_1|$. (See the following section.) Using coordinates If vertex A is located at the origin (0, 0) of a Cartesian coordinate system and the coordinates of the other two vertices are given by $B = (x_B, y_B)$ and $C = (x_C, y_C)$, then the area can be computed as $\tfrac{1}{2}$ times the absolute value of the determinant $\begin{vmatrix} x_B & y_B \\ x_C & y_C \end{vmatrix} = x_B y_C - x_C y_B$. For three general vertices, the equation is: $T = \tfrac{1}{2}\left| \begin{vmatrix} x_A - x_C & y_A - y_C \\ x_B - x_C & y_B - y_C \end{vmatrix} \right|$, which can be written as $T = \tfrac{1}{2}\,|x_A(y_B - y_C) + x_B(y_C - y_A) + x_C(y_A - y_B)|$. If the points are labeled sequentially in the counterclockwise direction, the above determinant expressions are positive and the absolute value signs can be omitted. The above formula is known as the shoelace formula or the surveyor's formula. If we locate the vertices in the complex plane and denote them in counterclockwise sequence as $a$, $b$, and $c$, and denote their complex conjugates as $\bar{a}$, $\bar{b}$, and $\bar{c}$, then the resulting determinant formula is equivalent to the shoelace formula. In three dimensions, the area of a general triangle with vertices $A = (x_A, y_A, z_A)$, $B = (x_B, y_B, z_B)$ and $C = (x_C, y_C, z_C)$ is the Pythagorean sum of the areas of the respective projections on the three principal planes (i.e. x = 0, y = 0 and z = 0): $T = \sqrt{T_{x=0}^{2} + T_{y=0}^{2} + T_{z=0}^{2}}$, where each term on the right is the area of the triangle's projection onto the named plane. Using line integrals The area within any closed curve, such as a triangle, is given by the line integral around the curve of the algebraic or signed distance of a point on the curve from an arbitrary oriented straight line L. Points to the right of L as oriented are taken to be at negative distance from L, while the weight for the integral is taken to be the component of arc length parallel to L rather than arc length itself. This method is well suited to computation of the area of an arbitrary polygon. Taking L to be the x-axis, the line integral between consecutive vertices $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ is given by the base times the mean height, namely $(x_{i+1} - x_i) \cdot \tfrac{1}{2}(y_i + y_{i+1})$. The sign of the area is an overall indicator of the direction of traversal, with negative area indicating counterclockwise traversal.
The area of a triangle then falls out as the case of a polygon with three sides. While the line integral method has in common with other coordinate-based methods the arbitrary choice of a coordinate system, unlike the others it makes no arbitrary choice of vertex of the triangle as origin or of side as base. Furthermore, the choice of coordinate system defined by L commits to only two degrees of freedom rather than the usual three, since the weight is a local distance (e.g. $x_{i+1} - x_i$ in the above), whence the method does not require choosing an axis normal to L. When working in polar coordinates it is not necessary to convert to Cartesian coordinates to use line integration, since the line integral between consecutive vertices $(r_i, \theta_i)$ and $(r_{i+1}, \theta_{i+1})$ of a polygon is given directly by $\tfrac{1}{2} r_i r_{i+1} \sin(\theta_{i+1} - \theta_i)$. This is valid for all values of θ, with some decrease in numerical accuracy when |θ| is many orders of magnitude greater than π. With this formulation negative area indicates clockwise traversal, which should be kept in mind when mixing polar and cartesian coordinates. Just as the choice of y-axis ($x = 0$) is immaterial for line integration in cartesian coordinates, so is the choice of zero heading ($\theta = 0$) immaterial here. Using Pick's theorem See Pick's theorem for a technique for finding the area of any arbitrary lattice polygon (one drawn on a grid with vertically and horizontally adjacent lattice points at equal distances, and with vertices on lattice points). The theorem states: $T = I + \tfrac{1}{2}B - 1$, where $I$ is the number of internal lattice points and B is the number of lattice points lying on the border of the polygon. Other area formulas Numerous other area formulas exist, such as $T = rs$, where r is the inradius, and s is the semiperimeter (in fact, this formula holds for all tangential polygons), and $T = \sqrt{r\, r_a r_b r_c}$, where $r_a$, $r_b$, $r_c$ are the radii of the excircles tangent to sides a, b, c respectively. We also have $T = \frac{abc}{2D}$ and $T = \tfrac{1}{2} D^{2} \sin\alpha \sin\beta \sin\gamma$ for circumdiameter D; and $T = \tfrac{1}{4}(b^{2} + c^{2} - a^{2}) \tan\alpha$ for angle α ≠ 90°. The area can also be expressed as $T = s(s-a)\tan\tfrac{\alpha}{2}$. In 1885, Baker gave a collection of over a hundred distinct area formulas for the triangle, including formulas in terms of the circumradius (radius of the circumcircle) R. Upper bound on the area The area T of any triangle with perimeter p satisfies $T \le \tfrac{p^{2}}{12\sqrt{3}}$, with equality holding if and only if the triangle is equilateral. Another upper bound on the area T is given by Weitzenböck's inequality, $4\sqrt{3}\,T \le a^{2} + b^{2} + c^{2}$, again holding if and only if the triangle is equilateral. Bisecting the area There are infinitely many lines that bisect the area of a triangle. Three of them are the medians, which are the only area bisectors that go through the centroid. Three other area bisectors are parallel to the triangle's sides. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter. There can be one, two, or three of these for any given triangle. See also Area of a circle Congruence of triangles References Area Triangles
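Two of the formulas above are easy to cross-check numerically: the shoelace formula (from coordinates) and Heron's formula (from side lengths). A minimal sketch on the 3-4-5 right triangle:

```python
# Cross-check the shoelace formula and Heron's formula on one triangle.
import math

def area_shoelace(A, B, C):
    (xa, ya), (xb, yb), (xc, yc) = A, B, C
    return abs(xa * (yb - yc) + xb * (yc - ya) + xc * (ya - yb)) / 2

def area_heron(a, b, c):
    s = (a + b + c) / 2  # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

A, B, C = (0, 0), (4, 0), (0, 3)   # legs 3 and 4, hypotenuse 5
print(area_shoelace(A, B, C))       # 6.0
print(area_heron(3, 4, 5))          # 6.0 (same triangle, from side lengths)
```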
Area of a triangle
[ "Physics", "Mathematics" ]
2,173
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities", "Area" ]
9,195,888
https://en.wikipedia.org/wiki/Marimastat
Marimastat was a proposed antineoplastic drug developed by British Biotech. It acted as a broad-spectrum matrix metalloproteinase inhibitor. Marimastat performed poorly in clinical trials, and development was terminated. This may, however, be a result of targeting cancer at too late a stage, since MMP inhibitors have more recently been shown in animal models to be more effective in earlier stages of cancer (Bergers, G., Javaherian, K., Lo, K.-M., Folkman, J., and Hanahan, D. (1999). Effects of angiogenesis inhibitors on multistage carcinogenesis in mice. Science 284, 808–812). See also Batimastat References Experimental cancer drugs Hydroxamic acids Matrix metalloproteinase inhibitors Isobutyl compounds Tert-butyl compounds
Marimastat
[ "Chemistry" ]
190
[ "Organic compounds", "Functional groups", "Hydroxamic acids" ]
9,196,154
https://en.wikipedia.org/wiki/CryptoGraf
CryptoGraf is a secure messaging application for smartphones running Symbian OS and Windows Mobile. It allows the user to compose and send SMS and MMS messages that are encrypted and digitally signed using methods based on the S/MIME standard. Secure e-mail messaging is not supported. The cryptographic algorithms supported by CryptoGraf include AES, RSA and SHA-256. RSA public keys of other users are stored in a "Crypto Contacts" list. The user sends an encrypted SMS or MMS to a recipient listed in Crypto Contacts. Keys must be exchanged before messages can be sent. The way a Crypto Contact is received determines the trust level assigned to the key: High trust for Crypto Contacts received by Bluetooth. Medium trust for Crypto Contacts received via high-trust contacts. Low trust for Crypto Contacts received by SMS or MMS. The Crypto Contacts list is based on a trust model similar to the Web of trust known from PGP. Crypto Contacts are compatible with X.509 digital certificates and contain RSA (1024/2048-bit) public keys. Messages are encrypted using 256-bit AES and digitally signed using RSA (1024/2048-bit) with SHA-256. CryptoGraf is integrated with the standard messaging application in both Symbian and Windows Mobile and stores messages in the default Inbox, Sent and other folders. CryptoGraf in the press CryptoGraf received attention from a local newspaper after its first release. The Nation, 23 January 2007. Data-encryption firm upbeat See also S/MIME PGP Public key infrastructure Random number generator X.509 Web of trust External links CryptoGraf website CryptoGraf documentation CryptoGraf FAQ Cryptographic software Pocket PC software Symbian instant messaging clients
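The sign-and-encrypt scheme described above (256-bit AES for the message, RSA for key wrapping and signatures, SHA-256 digests) can be sketched with standard primitives. This is a generic illustration using the Python `cryptography` package, not CryptoGraf's actual code; the choice of AES-GCM mode and the padding schemes are assumptions for the sketch, and S/MIME packaging and certificate handling are omitted entirely:

```python
# Generic sign-then-encrypt sketch with the primitives named above.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"meet at 18:00"

# Sign with the sender's RSA private key over a SHA-256 digest.
signature = sender.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Encrypt message + signature with a fresh 256-bit AES key...
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, message + signature, None)

# ...and wrap that AES key with the recipient's RSA public key.
wrapped_key = recipient.public_key().encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# The recipient unwraps the AES key, decrypts, and verifies the signature
# against the sender's public key from the Crypto Contacts list.
print(len(ciphertext), len(wrapped_key))
```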
CryptoGraf
[ "Mathematics" ]
379
[ "Cryptographic software", "Mathematical software" ]
9,196,294
https://en.wikipedia.org/wiki/Tend%20and%20befriend
Tend-and-befriend is a purported behavior exhibited by some animals, including humans, in response to threat. It refers to protection of offspring (tending) and seeking out their social group for mutual defense (befriending). In evolutionary psychology, tend-and-befriend is theorized as having evolved as the typical female response to stress. The tend-and-befriend theoretical model was originally developed by Shelley E. Taylor and her research team at the University of California, Los Angeles and first described in a Psychological Review article published in the year 2000. Biological bases According to the Polyvagal theory developed by Dr. Stephen Porges, the "Social Nervous System" is an affiliative neurocircuitry that prompts affiliation, particularly in response to stress. This system is described as regulating social approach behavior. A biological basis for this regulation appears to be oxytocin. Oxytocin has been tied to a broad array of social relationships and activities, including peer bonding, sexual activity, and affiliative preferences. Oxytocin is released in humans in response to a broad array of stressors, especially those that may trigger affiliative needs. Oxytocin promotes affiliative behavior, including maternal tending and social contact with peers. Thus, affiliation under stress serves tending needs, including protective responses towards offspring. Affiliation may also take the form of befriending, namely seeking social contact for one's own protection, the protection of offspring, and the protection of the social group. These social responses to threat reduce biological stress responses, including lowering heart rate, blood pressure, and hypothalamic pituitary adrenal axis (HPA) stress activity, such as cortisol responses. According to some research, women are more likely to respond to stress through tending and befriending than men. Paralleling this behavioral sex difference, estrogen enhances the effects of oxytocin, whereas androgens inhibit oxytocin release. Tending under stress Female stress responses that increased offspring survival would have led to higher fitness and thus were more likely to be passed on through natural selection. In the presence of threats, protecting and calming offspring while blending into the environment may have increased chances of survival for mother and child. When faced with stress, females often respond by tending to offspring, which in turn reduces stress levels. Studies conducted by Repetti (1989) show that mothers respond to highly stressful workdays by providing more nurturing behaviors towards their children. In contrast, fathers who experienced stressful workdays were more likely to withdraw from their families or were more interpersonally conflictual that evening at home. Furthermore, physical contact between mothers and their offspring following a threatening event decreased HPA activity and sympathetic nervous system arousal. Oxytocin, released in response to stressors, may be the mechanism underlying the female caregiving response. Studies of ewes show that administration of oxytocin promoted maternal behavior. Breastfeeding in humans, which is associated with maternal oxytocin release, is physiologically calming to both mothers and infants. Cooperative breeding Tend-and-befriend is a critical, adaptive strategy that is hypothesized to have enhanced reproductive success among female cooperative breeders. 
Cooperative breeders are group-living animals where infant and juvenile care from non-mother helpers is essential to offspring survival. Cooperative breeders include wolves, elephants, many nonhuman primates, and humans. Among all primates and most mammals, endocrinological and neural processes lead females to nurture infants, including unrelated infants, after being exposed long enough to infant signals. Non-mother female wolves and wild dogs sometimes begin lactating to nurse the alpha female's pups. Humans are born helpless and altricial, mature slowly, and depend on parental investment well into their young adult lives, and often even later. Humans have spent most of human evolution as hunter-gatherer foragers. Among foraging societies without modern birth control methods, women tend to give birth about every four years during their reproductive lifespan. When mothers give birth, they often have multiple dependent children in their care, who rely on adults for food and shelter for years. Such a reproductive strategy would not have been able to evolve if women did not have help from others. Allomothers (helpers who are not a child's mother) often protect, provision, carry, and care for children. Allomothers are usually a child's aunts, uncles, fathers, grandmothers, siblings, and other persons in the community. Even in modern Western societies, parents often rely on family members, friends, and babysitters to help care for children. Burkart, Hrdy, and Van Schaik (2009) argue that cooperative breeding in humans may have led to the evolution of psychological adaptations for greater prosociality, enhanced social cognition, and cognitive abilities for cooperative purposes, including willingness to share mental states and shared intentionality. These cognitive, prosocial processes brought on by cooperative breeding may have led to the emergence of culture and language. Befriending under stress Group living provides numerous benefits, including protection from predators and cooperation to achieve shared goals and access to resources. In modernized societies at least, it is found that women create, maintain, and use social networks, especially friendships with other women, to manage stressful conditions. During threatening situations, group members can be a source of support and protection for women and their children. Research shows that women operating in a modern and westernized paradigm are more likely to seek the company of others in times of stress, compared to men. In some cultures, women and adolescent girls report more sources of social support and are more likely to turn to same-sex peers for support than men or boys are. One study of six cultures (five of which were non-western) found that women and girls tend to provide more frequent and effective support than men do, and they are more likely to seek help and support from other female friends and family members, although there was a degree of cultural variation based on the metric used. Women tend to affiliate with other women in stressful situations. However, when women were given a choice to either wait alone or to affiliate with an unfamiliar man before a stressful laboratory challenge, they chose to wait alone. Female-female social networks can provide assistance for childcare, exchange of resources, and protection from predators, other threats, and other group members. Smuts (1992) and Taylor et al. (2000) argue that female social groups also provide protection from male aggression.
In spite of the large cultural diversity within this six-culture sample, none of the societies included demonstrated matrilineal tendencies, which have been found to negate and cancel out many supposedly "universal" sex differences (see "criticism" section below). Additionally, the metrics used by the Whitings for evaluating sex differences in social support are somewhat questionable in their ability to predict friendship and relational quality and solidarity. Many other surveys and tests, for instance, find that males actually demonstrate a greater degree of social support than women do in many non-western cultures, particularly from same-gender friendship networks. Neuroendocrine underpinnings Human and animal studies (reviewed in Taylor et al., 2000) suggest that oxytocin is the neuroendocrine mechanism underlying the female "befriend" stress response. Oxytocin administration to rats and prairie voles increased social contact and social grooming behaviors, reduced stress, and lowered aggression. In humans, oxytocin promotes mother-infant attachments, romantic pair bonds, and friendships. Social contact or support during stressful times leads to lowered sympathetic and neuroendocrine stress responses. Although social support downregulates these physiological stress responses in both men and women, women are more likely to seek some forms of social contact during stress. Furthermore, support from another female provides enhanced stress-reducing benefits to women. However, a review of female aggression noted that "The fact that OT [oxytocin] enhances, rather than diminishes, attention to potential threat in the environment casts doubt on the popular ‘tend-and-befriend’ hypothesis which is based on the presumed anxiolytic effect of OT". Benefits of affiliation under stress According to Taylor (2000), affiliative behaviors and tending activities reduce biological stress responses in both parents and offspring, thereby reducing stress-related health threats. "Befriending" may lead to substantial mental and physical health benefits in times of stress. Social isolation is associated with significantly enhanced risk of mortality, whereas social support is tied to positive health outcomes, including reduced risk of illness and death. Women have higher life expectancies from birth in modernized countries where there is equal access to medical care. In the United States, for example, this difference is almost 6 years. One hypothesis is that men's responses to stress (which include aggression, social withdrawal, and substance abuse) place them at risk for adverse health-related consequences. In contrast, women's responses to stress, which include turning to social sources for support, may be protective to health. There are a number of problems and controversies inherent in this reading, however. One major issue is that the female advantage in life expectancy is relatively recent and seems to be related to major societal changes accompanying industrialization, only some of which relate to modern medical advancements. Prior to the Industrial Revolution, men outlived women in many of the societies for which we have demographic data, and in many non-western societies the gap only begun to close and then reverse in the mid-to-late 20th century. The supposed "universality" of women's more adaptive coping mechanisms in response to stress is further challenged by pre-modern data indicating that female rates of suicide were much higher than male rates in many traditional societies. 
Competition for resources Group living and affiliation with multiple unrelated others of the same sex (who do not share genetic interests) also presents the problem of competing for access to limited resources, such as social status, food, and mates. Interpersonal stress is the most common and distressing type of stress for women. Although the befriending stress response may be especially activated for women under conditions of resource scarcity, resource scarcity also entails more intense competition for these resources. In environments with a female-biased sex ratio, where males are a more limited resource, female-to-female competition for mates is intensified, sometimes even resorting to violence. Although male crime rates far exceed those of females, arrests for assault among females follow a similar age distribution as in males, peaking for females in the late teens to mid-twenties. Those are ages in which females are at peak reproductive potential and experience the most mating competition. However, the benefits of affiliation would have outweighed the costs in order for tend-and-befriend to have evolved. Competition and aggression Rates of aggression between human males and females may not differ, but the patterns of aggression between the sexes do differ in many societies and by many different metrics. Although females in general are less physically aggressive, they tend to engage in as much or even more indirect aggression (e.g. social exclusion, gossip, rumors, denigration). When experimentally primed with a mating motive or status competition motive, men were more willing to become directly aggressive towards another man, whereas women were more likely to indirectly aggress against another woman in an aggression-provoking situation. However, experimentally priming people with a resource competition motive increased direct aggression in both men and women. Consistent with this result, rates of violence and crime are higher among males and females under conditions of resource scarcity. In contrast, resource competition did not increase direct aggression in either men or women when they were asked to imagine themselves married and with a young child. The costs of physical injury to a parent would also entail costs to his or her family. Lower variance in reproductive success and higher costs of physical aggression may explain the lower rates of physical aggression among human females compared to males. Females are in general more likely to produce offspring in their lifetimes than males, although this difference lessens or disappears in societies where monogamy or polyandry have become standardized. Therefore, they typically have less to gain from fighting and the risk of injury or death would produce greater fitness cost for females. The survival of young children might depend more on maternal than paternal care (although a number of studies of traditional societies have found that parental care in general is less essential than sometimes believed, and can be compensated for via alloparenting by both sexes), which underscores the importance of maternal safety, survival, and risk aversion. In this hypothetical model, infants' primary attachment is to their mother; notably, one study found that maternal death increased the chances of childhood mortality in foraging societies by fivefold, compared to threefold in the cases of paternal death. 
Therefore, women are believed by certain researchers to respond to threats by tending and befriending, and female aggression is often indirect and covert in nature to avoid retaliation and physical injury. Informational warfare Women befriend others not only for protection, but also to form alliances to compete with outgroup members for resources, such as food, mates, and social and cultural resources (e.g. status, social positions, rights and responsibilities). Informational warfare is the strategic, competitive tactics taking the form of indirect, verbal aggression directed towards rivals. Gossip is one such tactic, functioning to spread information that would damage the reputation of a competitor. There are several theories regarding gossip, including social bonding and group cohesion. However, consistent with informational warfare theory, the content of gossip is relevant to the context in which competition is occurring. For example, when competing for a work promotion, people were more likely to spread negative work-related information about a competitor to coworkers. Negative gossip also increases with resource scarcity and higher resource value. In addition, people are more likely to spread negative information about potential rivals but more likely to pass on positive information about family members and friends. As mentioned above, befriending can serve to protect women from threats, including harm from other people. Such threats are not limited to physical harm but also include reputational damage. Women form friendships and alliances in part to compete for limited resources, and also in part to protect themselves from relational and reputational harm. The presence of friends and allies can help deter malicious gossip, due to an alliance's greater ability to retaliate, compared to a single individual's ability. Studies by Hess and Hagen (2009) show that the presence of a competitor's friend reduced people's tendencies to gossip about the competitor. This effect was stronger when the friend was from the same competitive social environment (e.g. same workplace) than when the friend was from a nonrelevant social environment. Friends increase women's perceived capabilities for inflicting reputational harm on a rival as well as perceptions of defensive capabilities against indirect aggression. Criticism and controversy Like most evolutionary psychological theories related to sex differences in behavior, the "tend and befriend" model relies on a great deal of speculation, projection of present-day data into the distant past, untestable and unfalsifiable hypotheses, and reliance on a model of gender essentialism which has come under increasing critique from various social scientists in recent years. One major issue from an anthropological standpoint is the considerable diversity of gendered norms and behaviors in traditional societies, and the difficulty for western researchers to interpret these adequately using quantitative and etic means. Social and behavioral scientists often struggle to keep their personal biases and paradigms from affecting their interpretation of the data, with mixed results. For instance, anthropologists working within a psychoanalytic framework often set out on their project expecting to find cross-cultural confirmation of western gendered ideas such as castration anxiety or the Oedipus complex, only to run into considerable difficulty when non-western societies frequently deviate from these perceived "universal" norms. 
Sociobiologists and evolutionary psychologists in general have come under fire for cherry-picking and misinterpreting cross-cultural data in order to align with preconceptions about the universality of "human nature", and then accusing cultural anthropologists of various cognitive biases and over-reliance on the alleged "standard social science model". The perceived cross-cultural validation of gender norms such as higher female nurturance or male aggression and assertiveness would therefore have to be evaluated, as much as possible, using emic or culturally-specific means, or through researchers trained in culturally sensitive methodologies (such as Franz Boas' cultural relativism) in the hope of minimizing western cultural biases. In spite of the perceived universal and biological basis for the tend and befriend response in human women, there is actually a great deal of controversy as to how consistently replicable western gender norms are across the broad range of human societies. Some researchers have found apparently consistent differences across countries favoring women's greater sociability and agreeableness (the dimensions most likely to map onto the tend and befriend theory). However, there are considerable variations between countries, particularly on extraversion, which would seem to frustrate any attempt to find universal, consistent patterns favoring women's greater tendencies towards cooperative or gregarious behaviors. Many cross-cultural quantitative samples utilized by evolutionary psychologists are also plagued by a patrilineal or patriarchal bias. There is a rich body of data illustrating greater tendencies among women in various cultures toward cooperation, less overt competitiveness, more pro-social and nurturant responses, and preference for indirect and non-confrontational speech styles. For instance, Whiting and Whiting's influential "six culture study" found apparently consistent confirmation of western-stereotyped gender behaviors in six different communities spread across the world: New Englanders in the United States, Mixtec in Mexico, Ilocanos in the Philippines, Rajputs in India, Okinawans in Japan, and Gusii in Kenya. All of these communities are traditionally patriarchal, and four of the six are also patrilineal. Even in the two non-patrilineal societies (New Englanders and Ilocanos), there was considerable inculcation towards conformity with patriarchal gender norms, from the capitalistic wage economy in New England and the influence of Spanish Roman Catholicism in the Philippines. This is important since matrilineal and bilateral descent are consistently associated with the elimination or even reversal of purported gender differences in competitiveness versus co-operation. Folklore provides another piece of evidence for the diversity of gendered behavioral norms; while the familiar construction of dominant and assertive males versus submissive and nurturing females is replicated frequently in cross-cultural folklore motifs, there are notable exceptions and instances of reversed motifs (dominant and assertive females, submissive and nurturing males) in monogamous or matrilineal cultures like the Kadiweu and the Palikur. Heide Göttner-Abendroth's analysis of matriarchal societies (which she defines as all non-patriarchal societies) further challenges the notion that men are inherently less nurturant and therefore less prone to tending and befriending. 
In non-patriarchal societies, men are often expected to internalize virtues that western society codes as stereotypically "feminine", and the culturally constructed machismo which prevents men in many parts of the world from participating in child care or nurturing warm and pro-social coalitional relationships does not seem to exist. The tend and befriend model also assumes a lower emotional and psychological quality to male same-sex friendships as compared to those between women, interpreting the former as largely "instrumental" and focused on giving and returning favors, building coalitions or acquiring resources, while the latter function as superior means of social support. This claim runs squarely counter to data finding that male friendships are at least as valuable to men's psychological well-being and societal adjustment as women's friendships are to women. This tendency to read men's homosocial relationships as somehow inherently "defective" in terms of psychoemotional support compared to women's does not fit with historical or cross-cultural accounts of deep romantic friendships between males and the considerable emotional intimacy that male friends exchange in a number of non-western societies. Even in modern times, some quantitative research suggests that in societies which are not affected by Northern European male anxieties about homosocial intimacy (such as Turkey or Portugal), men are equally or even more likely than women to share emotional hardships with same-sex friends and to offer and receive emotional support from them. In the past, before globalization and industrialization standardized the modern cultural traits of males disproportionately "projecting inward" by killing themselves or using maladaptive coping mechanisms (such as substance abuse), such homosocial intimacy may have been higher across much of the world. It is also worth noting that in Eastern societies where heterosexual cross-sex contact is often limited, such as India and Jordan, men display as much intimacy in their same-sex friendships, and self-disclose to their same-sex friends as much as, if not slightly more than, women. See also Coping (psychology) Need for affiliation Peer support Positive psychology References Further reading Aronson, E., Wilson, T.D., & Akert, R.M. (2005). Social Psychology. (5th ed.) Upper Saddle River, NJ: Pearson Education, Inc. Friedman, H.S., & Silver, R.C. (Eds.) (2007). Foundations of Health Psychology. New York: Oxford University Press. Gurung, R.A.R. (2006). Health Psychology: A Cultural Approach. Belmont, CA: Thomson Wadsworth. External links "Tend and Befriend", Nancy K. Dess, Psychology Today Human behavior Psychological stress
Tend and befriend
[ "Biology" ]
4,536
[ "Behavior", "Human behavior" ]
9,196,302
https://en.wikipedia.org/wiki/Sazonov%27s%20theorem
In mathematics, Sazonov's theorem, named after Vyacheslav Vasilievich Sazonov, is a theorem in functional analysis. It states that a bounded linear operator between two Hilbert spaces is γ-radonifying if it is a Hilbert–Schmidt operator. The result is also important in the study of stochastic processes and the Malliavin calculus, since results concerning probability measures on infinite-dimensional spaces are of central importance in these fields. Sazonov's theorem also has a converse: if the map is not Hilbert–Schmidt, then it is not γ-radonifying. Statement of the theorem Let G and H be two Hilbert spaces and let T : G → H be a bounded operator from G to H. Recall that T is said to be γ-radonifying if the push forward of the canonical Gaussian cylinder set measure on G is a bona fide measure on H. Recall also that T is said to be a Hilbert–Schmidt operator if there is an orthonormal basis $(e_i)_{i \in I}$ of G such that $\sum_{i \in I} \| T e_i \|_H^2 < \infty$. Then Sazonov's theorem is that T is γ-radonifying if it is a Hilbert–Schmidt operator. The proof uses Prokhorov's theorem. Remarks The canonical Gaussian cylinder set measure on an infinite-dimensional Hilbert space can never be a bona fide measure; equivalently, the identity function on such a space cannot be γ-radonifying. See also References Stochastic processes Theorems in functional analysis Theorems in measure theory
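A brief sketch makes the remark concrete (using standard notation; this calculation is illustrative and not drawn from the article itself). On an infinite-dimensional Hilbert space, the identity operator is never Hilbert–Schmidt, so by the converse direction it is never γ-radonifying: for any orthonormal basis $(e_i)_{i \in I}$ with $I$ infinite,

$$
\sum_{i \in I} \| \mathrm{id}\, e_i \|^2 \;=\; \sum_{i \in I} \| e_i \|^2 \;=\; \sum_{i \in I} 1 \;=\; \infty ,
$$

so the Hilbert–Schmidt condition above fails, recovering the statement that the canonical Gaussian cylinder set measure is not a bona fide measure on the space itself.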
Sazonov's theorem
[ "Mathematics" ]
317
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Theorems in functional analysis" ]
9,196,474
https://en.wikipedia.org/wiki/Open-pan%20salt%20making
Open-pan salt making is a method of salt production wherein salt is extracted from brine using open pans. Virtually all European domestic salt is obtained by solution-mining of underground salt formations, although some is still obtained by the solar evaporation of seawater. Types of open-pan salt production Traditionally, salt is made in two ways. Rock salt is mined from the ground. The other type, known as white salt, is made by the evaporation of brine. Brine is obtained in several ways. Wild brine streams, occurring from the natural solution of rock salt by groundwater, can come to the surface as natural brine springs or can be pumped up to the surface at wells, shafts or boreholes. Artificial brine is obtained through solution mining of rock salt with freshwater and is known as 'controlled brine pumping'. A bastard brine used to be made by allowing freshwater to run through abandoned rock salt mines. A salt-on-salt process strengthens brine by dissolving rock salt and/or crystal salt in weak brine or seawater before evaporation. Solar evaporation uses the sun to strengthen and evaporate seawater trapped on the sea-shore to make sea salt crystals, or to strengthen and evaporate brine sourced from natural springs where it is made into white salt crystals. This led to three types of salt production, all of which used a variation of the open-pan salt method: Coastal salt production, involving solar evaporation of seawater, followed by artificial evaporation of salt using the open-pan technique in structures known as 'salterns'. Inland salt production, using brine from natural brine streams flowing over buried salt deposits that were pumped up from the ground and evaporated using the open-pan technique. Salt refining, a large-scale salt industry developed in coastal locations and based on a combination of inland salt mining and coastal salt production. Referred to as salt refining or salt-on-salt, the process combined weak brine from seawater with mined rock salt, and evaporated the brine into a white salt. Inland open-pan salt production Open-pan salt production was confined to a few locations where geological conditions preserved layers of salt beneath the ground. Only five complexes of inland open-pan salt works now survive in the world: Lion Salt Works, Cheshire, United Kingdom; Royal Saltworks at Arc-et-Senans, Salins-les-Bains, France; Saline Luisenhall, Göttingen, Germany; the Salinas da Fonte da Bica, Rio Maior, Portugal; and the Colorado Salt Works, USA. The two French saltworks at Salins-les-Bains and Arc-et-Senans became a UNESCO World Heritage Site in 1982. The earliest examples of pans used in the solution mining of salt date back to ancient times when the pans were made of ceramics known as briquetage and Cheshire VCP (Very Coarse Pottery), a coarse low-fired pottery. In Britain, these materials began to be identified from the early 1980s in the Marches (Herefordshire, Worcestershire, Shropshire and Wales) and later in Northern England. The Romans introduced small (3 ft square) pans made from lead, using wood as a fuel. In Britain they established towns for salt production at Droitwich in Worcestershire, and Nantwich, Middlewich and Northwich in Cheshire. In the early Middle Ages these developed into the 'wich' towns of Cheshire. Small 'wich' houses, containing several lead pans to evaporate the brine into salt, clustered around brine springs within each of the towns. The open-pan process continued largely unchanged throughout the medieval period. 
A 16th-century German wood-cut by Georgius Agricola shows the process in detail. Excavated evidence has uncovered wooden rakes used to draw salt crystals to the side of the pan, and conical wicker baskets (barrows) in which the wet salt was drained and dried. By the 17th century, the pans had started to be made from iron. William Brownrigg, writing in 1748 in his Book of Common Salt, shows a wood-cut of one of these salt-making pans. The change to iron (from lead) coincided with a change from wood to coal for the purpose of heating the brine. Gradually, the pans increased in size. For example, Christoph Chrysel, writing in 1773 in his Remarkable and very useful Information about the present Salt Works and Salt pans in England, recorded the considerable width, length and depth of contemporary salt pans. Brine would be pumped into the pans and concentrated by the heat of the fire burning underneath. As crystals of salt formed, these would be raked out and more brine added. By the 19th century, the open-pan salt process had reached its zenith in Britain. Two principal regions of production existed, Worcestershire and Cheshire. Brine shafts were sunk to the level of the brine stream that flowed over the natural rock salt or halite. Brine was pumped from the ground using wind and, later, steam-driven beam engines, and redistributed to large iron pans. These fell into two categories: The smaller fine pans were housed in pan houses and had associated stove houses. The salt was evaporated in the pan at a high temperature. This produced higher quality grades of salt including 'Butter Salt', 'Dairy Salt', 'Calcutta Salt' and 'Lagos Salt'. After about six hours the salt would crystallise out of the brine solution and fall to the base of the pan. It was then the job of the lumpman to rake up the salt and skim it into wooden tubs to create lumps, hence the name. The lumps would then be sent to the stove house or 'hothouse' to dry. Here the lumps would be piled up and the recycled heat from the fires beneath the pans used to heat the room before exhausting through a chimney. The salt lumps would be 'lofted', or passed up to a warehouse above, by a man called the lofter. The lumps would be sold as 'hand-it' lumps or processed in a crushing mill and then bagged. The second category, the larger common or fishery pans, were built outside. The pans were usually heated by coal and were controlled by a fireman. The larger pans would be heated at a much lower temperature for several days or even weeks. This produced a much denser crystal in a variety of sizes, known as common or fishery salt. Common salt was used for a variety of purposes, including in the chemical industry. Fishery salt was used in the packing and processing of fish. This salt would not be made into lumps but instead was skimmed and turned out onto the wooden platforms around the pans. It was then barrowed into large wooden store houses. Occupations in an open-pan salt works The following are historical names given to occupations in open pan salt works, primarily in Cheshire, England. Lumpman: A lumpman would work on pans that made fine salt crystals, which were known as 'fine pans' or 'lump pans'. The quality of the salt generally depended on the state of the fires which crystallized the salt by forcing off the water. Therefore, each pan had its own individual furnace and chimney, which the lumpman was responsible for controlling. 
Wooden moulds were filled with salt crystals from the pans to produce a hard block (lump) of fine salt. Lumpmen were paid piecework, would start at 3 or 4 in the morning, and could expect to work 12- to 16-hour days. Waller: A waller would be under the charge of the lumpman, and was responsible for the initial draining of the salt. Salt was drained by being raked to the side of the pans, and then transferred using skimmers onto the hurdle boards (walkways) around the pans. 'Waller' is an ancient name for a saltmaker. He would have been hired on a daily basis. Fireman: In addition to the fine pans there were other 'common pans', used to make coarser salt. Because the production of common salt required slower-burning fires, it was possible for a single fireman to have charge of several common pans. Pan-smith: This was originally the name given to the man who made the salt-making pans. See also Alberger process Grainer process History of salt in Middlewich Lion Salt Works Saline Luisenhall Salt in Cheshire Salt Museum, Northwich Seawater greenhouse References Industrial processes Salt production
Open-pan salt making
[ "Chemistry" ]
1,794
[ "Salt production", "Salts" ]
9,197,779
https://en.wikipedia.org/wiki/Dambo
A dambo is a class of complex shallow wetlands in central, southern and eastern Africa, particularly in Zambia, Malawi and Zimbabwe. They are generally found in higher rainfall flat plateau areas and have river-like branching forms which in themselves are not very large but combined add up to a large area. Dambos have been estimated to comprise 12.5% of the area of Zambia. Similar African words include mbuga (commonly used in East Africa), matoro (Mashonaland), vlei (South Africa), fadama (Nigeria), and bolis (Sierra Leone); the French bas-fond and German Spültal have also been suggested as referring to similar grassy wetlands. Characteristics Dambos are characterised by grasses, rushes and sedges, contrasting with surrounding woodland such as miombo woodland. They may be substantially dry at the end of the dry season, revealing grey soils or black clays, but unlike a flooded grassland, they retain wet lines of drainage through the dry season. They are inundated (waterlogged) in the wet season but not generally above the height of the vegetation, and any open water surface is usually confined to streams and small ponds or lagoons (small swamps) at the lowest point, generally near the centre. The name dambo is most frequently used for wetlands on flat plateaus which form the headwaters of streams. The definition for scientific purposes has been proposed as “seasonally waterlogged, predominantly grass covered, depressions bordering headwater drainage lines”. Types The problem with the preceding definition is that the word may also be used for wetlands bordering rivers far from the headwaters, for example the dambo of the Mbereshi River where it enters the swamps of the Luapula River in Zambia. A 1998 report of the Food and Agriculture Organization distinguishes between ‘hydromorphic/phreatic’ dambos (associated with headwaters) and ‘fluvial’ dambos (associated with rivers), and also referred to five geomorphological types in Zambia’s Luapula Province: upland, valley, hanging, sand dune and pan dambos. Hydrology Dambos are fed by rainfall which drains out slowly to feed streams and are therefore a vital part of the water cycle. As well as being complex ecosystems, they also play a role in the biodiversity of the region. There is a popular idea that dambos act like sponges, soaking up the wet season rain and releasing it slowly into rivers during the dry season, thus ensuring a year-round flow, but this is opposed by some research which suggests that in the middle to late dry season the water is actually released from aquifers. Springs are seen in some dambos. Thus it may take a long time, perhaps several years, for water from a heavy rainy season to percolate through hills and emerge in a dambo, creating lagoons there or a flow in downstream rivers which cannot be explained by the previous year's rainfall. Dambos may be involved, for instance, in explaining puzzling variations in water level or flow in Lake Mweru Wantipa and Lake Chila in Mbala. 
Use Traditionally, dambos have been exploited: as a dry-season water source for rushes used as thatching and fencing material for clay used for building, brick-making and earthenware for hunting (especially birds and small antelope) for growing vegetables and other food crops, which can be vital in drought years since dambo soils usually retain enough moisture to produce a harvest when the rains fail for soaking bitter cassava in dug ponds for fishing (generally using fish traps) in those dambos with streams More recently, they have been used for fish ponds and growing upland rice. Efforts to develop dambos agriculturally have been hampered by a lack of research on the hydrology and soils of dambos, which have proved to be variable and complex. Example A dambo can be seen about 30 km south of Mansa, Zambia, in a forest reserve. Unlike in the neighbouring areas, which have been cleared for farming and charcoal-burning, the dambo here contrasts well with the undisturbed miombo woodland canopy. Headwater dambos have a branching structure like rivers. Most of the dambos have roughly the same width and form the same sort of pattern. An example of a pan dambo can be seen about 102 km north-west of Mulobezi, Zambia. The water in the pan has dried out, and the grass has been burnt off, giving the dark appearance at the centre of the dambo. To the east and west of the pan dambo a series of dambos can be seen along two river courses. References Wetlands Landforms Wetlands of Zambia Wetlands of Zimbabwe
Dambo
[ "Environmental_science" ]
970
[ "Hydrology", "Wetlands" ]
9,198,066
https://en.wikipedia.org/wiki/Role%20conflict
Role conflict occurs when there are incompatible demands placed upon a person relating to their job or position. People experience role conflict when they find themselves pulled in various directions as they try to respond to the many statuses they hold. Role conflict can last for either a short or a long period of time, and it can also be connected to situational experiences. Intra-role conflict occurs when the demands are within a single domain of life, such as on the job. An example would be when two superiors ask an employee to do a task, and both cannot be accomplished at the same time. Inter-role conflict occurs across domains of life. An example of inter-role conflict would be a husband and father who is also Chief of Police. If a tornado strikes the small town he is living in, the man has to decide if he should go home and be with his family and fulfill the role of being a good husband and father, or remain and fulfill the duties of a "good" Chief of Police because the whole town needs his expertise. Conflict among roles begins with the human desire to reach success, and with the pressure put on an individual by two imposing and incompatible demands competing against each other. The effects of role conflict, as found through case studies and nationwide surveys, are related to individual personality characteristics and interpersonal relations. Individual personality characteristic conflicts can arise within personality role conflict, where "aspects of an individual's personality are in conflict with other aspects of that same individual's personality". Interpersonal relations can cause conflict because they are by definition "an association between two or more people that may range from fleeting to enduring", and such associations can give rise to conflict. Example: "People in modern, high-income countries juggle many responsibilities demanded by their various statuses and roles. As most mothers can testify, both parenting and working outside the home are physically and emotionally draining. Sociologists thus recognize role conflict as conflict among the roles corresponding to two or more statuses". The discipline of group dynamics in psychology recognizes role conflict within a group setting. Members of a group may feel that they are responsible for more than one role within this setting and that these roles may become disagreeable with each other. When the expectations of two or more roles are incompatible, role conflict exists. For example, a supervisor at a factory may feel strain due to his or her role as friend and mentor to the subordinate employees, while having to exhibit a stern and professional watchful eye over the employees. Work–family conflict A commonly noted role conflict is that between work and family. Researchers have noticed a declining fertility rate in developed countries. Some studies suggest that this drop may be because more women are pursuing careers and obtaining higher education. This research suggests that women who have more trouble balancing work and family duties go on to have fewer additional children. While some people believe that work–family role conflict only occurs for women, a 2008 study by the Families and Work Institute showed that 49% of employed males with families experienced work–family conflict. The study also showed that work flexibility is the number one concern for employed females with families and the number two or three issue for employed men with families. 
Flexibility in the workplace can be a huge relief to a person struggling to balance their career and home life. Having that control could improve the relationship between work and family life and make role conflict easier to manage, and if more businesses offered such flexibility there could be a better outcome for all. Another study, done in France, examined the same common conflict between work and family roles. This study found that working from home was not the solution; being able to come in late or leave early on a flexible schedule was what worked best for handling the role conflict. Such flexible scheduling enables people to work around their role conflicts and to better manage and cope with them. Again, this study supports the idea that flexible scheduling created by businesses is a possible solution. Role conflict can also arise when the requirements of different roles compete for a person's limited time, or due to various strains associated with multiple roles. Some people can play one role and play it well, while others can play multiple roles and also play them well. For example, the dominant social perspective is to see a father as the provider and the protector and the mother as the housewife, cooking and cleaning. If men who accept this view enter a kitchen and proceed to cook, they might feel inappropriate for that role, and the same might apply to women who enter a garage and proceed to fix the car. This attitude is a root cause of the conflict many women feel when they become full-time workers and mothers. Women's rights have evolved greatly in the past forty years and women share most of the same rights as men. While women have stepped up to fill different roles, some feel that men have not stepped in to help balance out the workload. Mothers and fathers in the 2020s are expected by employers to have the career capacity of their non-parent counterparts. In addition to this, social pressure exists for modern-day mothers to fulfil the ideal of the mother/wife of the 1950s. Realistically, women have a hard time balancing the two. Many women feel that they are forced to choose between career and family, and are then made to feel guilty about their choice by society. Social factors among low-income adults Multiple role responsibilities, duties or demands from education, job or family relationships can be hard to manage. Sometimes the responsibilities are manageable, while at other times they are difficult to manage, especially for those living in a low-income household. Homeless men and gender Homelessness is a situation that takes a heavy toll on anyone, especially men with children or dependents. Traditional gender roles describe men as being the providers. Homeless men are often unemployed and thus lack the means to provide the resources that their family needs. This can cause high levels of distress in men. Homeless men may also become the sole caregiver of their children during homelessness. This can lead to high stress levels in men because they are expected to take on the role of both provider and nurturer. The transition can be very overwhelming. In prisons Role conflict is seen not only in the inmates of a prison, but also in the prison personnel. There are two types of prisons: custody prisons and treatment prisons. The main goal of a custody prison is to protect the community by maintaining control over the inmates. 
The correctional officers are expected to maintain order, enforce rules, and keep custody. A key rule of their job is that interaction between inmates and officers is to remain distant. The main goal of a treatment prison is to protect the community by rehabilitating the inmate. The officers are expected to respond to inmates in a therapeutic manner and develop ties with the inmates. Currently, prisons are combining the two types of custody, and the staff are experiencing role conflict. Officers are being asked to do conflicting jobs, such as remaining socially distant while also building close, supportive relationships with inmates. This emphasis on the combination of custody and treatment often results in two distinct, mutually antagonistic groups of staff. Prisons are filled predominantly with male inmates. This may be due to higher levels of testosterone. Domination is symbolized by control, independence, heterosexuality, aggressiveness, authority, and a capacity for violence in American culture. When a male finds himself lacking in one of these areas, he may be driven to make up for it in another area, such as when a poor, jobless young man tries to show masculinity by carrying a gun or wearing gang-related clothing. When one is in prison, many of the resources used to assert masculinity are not readily available, thus men seek other ways to proclaim their masculinity. Many inmates find it imperative to put on a mask of hyper-masculinity, which may conflict with their normal personality, in order to maintain their status within the prison. This expectation to maintain a certain idea of masculinity "contradict[s] basic human needs and desires for intimacy and emotional expression, creating stress and conflict between men's core selves and social expectations." Role clarity and role ambiguity One of the main causes of role conflict is role ambiguity, which is the lack of certainty about what a certain role in an organization requires. This can be the result of poor communication of job duties or unclear instructions from a supervisor. This can lead to role conflict when there are contradicting ideas as to what tasks are supposed to be accomplished. Team members can then be uncertain of their own role and their teammates' roles, and team objectives begin to conflict with one another. Within families, an example of role ambiguity is whether a stepchild has the same social or moral obligations to care for a stepparent as a biological child would. The solution to this problem, and to role conflict as a whole, can come from role clarity. As its name suggests, role clarity means clearly defining roles and objectives so as to reduce role conflict and role ambiguity. To do this, employers need to clearly communicate the goals of a project to the employees. Also, employees should be fully aware of their role in the group and their responsibilities. It is helpful to develop and maintain a working environment in which workers communicate openly and, if needed, feedback can be provided. Encouragement is another form of clarity: workers who have a question or are unclear about a specific role that has been given to them can then raise it. Ensuring that each person understands their roles and duties helps avoid mistakes, and keeping roles up to date lets workers manage them accordingly. 
Within a workplace Working with groups – especially in a work or committee setting – can sometimes result in role conflict if an individual feels that his or her roles are in opposition. These roles may be in conflict for many reasons. For example, the role taker may misunderstand the role sender's prescribed tasks, or the miscommunication can occur the other way as well. If a role taker is seemingly enthusiastic about taking on many tasks within various roles, this may be communicated to the role sender and he or she may be given conflicting role requirements. Role conflict can pair with role ambiguity – a situation in which the expectations of a role are ill-defined – to create role stress, which is detrimental to workplace performance. Role stress has also been linked to decreased job satisfaction and employee turnover. To avoid role conflict within a workplace, managers should outline specifically the duties required of an employee to avoid any miscommunication or confusion. Feedback should also be provided to employees, as this explicitly illustrates whether the role taker is properly performing the role requirements and can assist the role taker if there are any concerns. Steps should be taken to avoid the crossover of potentially conflicting roles, and if two or more roles are required of an employee, these roles should be separated by time and place if possible. Work performance Role conflict can have many different effects on the work life of an individual as well as their family life. In a study in Taiwan, it was found that those suffering from role conflict also suffered greatly in their work performance, mainly in the form of lack of motivation. Those with role conflict did not do more than the bare minimum requirements at work. There was also a decline in the ability to assign tasks. Having multiple roles will often lead to job dissatisfaction. Experiencing role conflict within the workplace may also lead to workplace bullying. When companies undergo organizational change, workers often experience either a loss or a gain in areas of their job, thus changing the expectations placed upon them. Change is often very stressful for workers. Workers who might have lost a degree of power may feel like they have lost their authority and begin to lash out at other employees by being verbally abusive, purposefully withholding work-related items, or sometimes even becoming physically aggressive, in order to uphold their status. Inter-role Interpersonal role conflict occurs when the source of the dilemma stems from occupancy of more than one focal position. For example, as a husband and a father in a social system, a superintendent may think his wife and children expect him to spend most of his evenings with them. However, his school board and P.T.A. groups, he may feel, expect him (as their school superintendent) to spend most of his after-office hours on educational and civic activities. The superintendent usually cannot satisfy both of these incompatible expectations. Intra-role Intra-personal role conflict occurs when an individual in one role believes that others have many different expectations for him/her in regards to that role. "The school superintendent, for example, may feel that the teachers expect him to be their spokesperson and leader, to take their side on such matters as salary increases and institutional policy. 
However, the superintendent may feel that the school board members expect him to represent them, to "sell" their views to the staff, because he is the executive officer and the administrator of school board policies". A mother can face a similar intra-role conflict when different family members hold differing expectations of her. Coping "Inter-role conflict results from competing sets of expectations that are aroused by organizational, interpersonal, and personal conflicts." The following strategies assist in modifying and managing these areas. One response to role conflict is deciding that something has to go. More than one politician, for example, has decided not to run for office because of the conflicting demands of a hectic campaign schedule and family life. In other cases, people put off having children in order to stay on the fast track for career success. Even the roles linked to a single status can make competing demands on us. A plant supervisor may enjoy being friendly with workers. At the same time, distance is necessary to evaluate his staff. An individual can alter external, structurally imposed expectations held by others regarding the appropriate behavior of a person in his or her position. The most effective alteration is change in the workplace. If the job offers a "family-friendly" environment, the needs of a parent may be met more easily. One of the biggest stress-relievers for working parents is paid time off, including family sick days. Parents may feel trapped if they need to stay home with their child but know that missing a day of work will, in return, dock them a day of pay. If they have a few days of paid leave, they will be able to take care of their child and not have to worry about losing money for doing so. Another workplace support against work–family conflict is child care. Some jobs have a daycare facility on site or nearby, reassuring parents that their children are well taken care of while they are working. The latter example distributes role expectations to others in order to alleviate role conflict. "Another approach involves changing one's attitude toward and perceptions of one's role expectations, as opposed to changing the expectations themselves. An example is setting priorities among and within roles, being sure that certain demands are always met (for example, the needs of sick children), while others have lower priority (such as dusting furniture)." See also Organizational conflict Organizational expedience Role engulfment Role set Role strain Workplace conflict References Further reading Workplace Role theory Conflict (process)
Role conflict
[ "Biology" ]
3,118
[ "Behavior", "Aggression", "Human behavior", "Conflict (process)" ]
9,198,927
https://en.wikipedia.org/wiki/Richard%20P.A.C.%20Newman
Richard P.A.C. Newman (1955–2000) was a physicist notable for his work in the area of cosmology and general relativity. He completed his PhD in 1979 at the University of Kent at Canterbury under G.C. McVittie with a thesis entitled Singular Perturbations of the Empty Robertson-Walker Cosmologies. He was a research fellow at the University of York from 1984 to 1986. He died in 2000. Selected publications Newman, R. P. A. C., & McVittie, G. C., A point particle model universe, in Gen. Rel. Grav. 14, 591 (1982) Newman, R. P. A. C., Cosmic censorship and curvature growth, in Gen. Rel. Grav. 15, 641 (1983) Newman, R. P. A. C., A theorem of cosmic censorship: a necessary and sufficient condition for future asymptotic predictability, in Gen. Rel. Grav. 16, 175 (1984) Newman, R. P. A. C., Cosmic censorship, persistent curvature and asymptotic causal pathology, in Classical General Relativity, eds. Bonnor, W. B., Islam, J. N., & MacCallum, M. A. H. (Cambridge University Press, 1984) Newman, R. P. A. C., Compact space-times and the no-return theorem, in Gen. Rel. Grav. 18, 1181–6 (1986) Newman, R. P. A. C., Black holes without singularities, in Gen. Rel. Grav. 21, 981–95 (1989) Joshi, P. S., & Newman, R. P. A. C., General constraints on the structure of naked singularities in classical general relativity, research report, Mathematical Sciences Research Centre, The Australian National University, Canberra (1987) Kriele, M., & Newman, R. P. A. C., Differentiability considerations at the onset of causality violation, in Classical and Quantum Gravity, vol. 9, no. 5 (1992), pp. 1329–1334 Newman, R. P. A. C., Conformal singularities and the Weyl curvature hypothesis, in Rend. Sem. Mat. Univ. Pol. Tor. 50, 61–67 (1992) Newman, R. P. A. C., On the Structure of Conformal Singularities in Classical General Relativity, in Proc. R. Soc. Lond. A 443 (1993), pp. 473–492 Newman, R. P. A. C., On the Structure of Conformal Singularities in Classical General Relativity: II Evolution Equations and a Conjecture of K P Tod, in Proceedings of the Royal Society of London: Mathematical and Physical Sciences, vol. 443, no. 1919 (Dec. 8, 1993), pp. 493–515 Footnotes External links Former staff of University of York 1955 births 2000 deaths Alumni of the University of Kent
Richard P.A.C. Newman
[ "Astronomy" ]
640
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
10,686,369
https://en.wikipedia.org/wiki/Species%20at%20Risk%20Act
The Species at Risk Act (SARA) is a piece of Canadian federal legislation which became law in Canada on December 12, 2002. It is designed to meet one of Canada's key commitments under the International Convention on Biological Diversity. The goal of the Act is to prevent wildlife species in Canada from disappearing by protecting endangered or threatened organisms and their habitats. It also manages species which are not yet threatened, but whose existence or habitat is in jeopardy. SARA defines a method to determine the steps that need to be taken in order to help protect existing relatively healthy environments, as well as recover threatened habitats, although the timing and implementation of recovery plans have limitations. It identifies ways in which governments, organizations, and individuals can work together to preserve species at risk and establishes penalties for failure to obey the law. The Act designates COSEWIC, an independent committee of wildlife experts and scientists, to identify threatened species and assess their conservation status. COSEWIC then issues a report to the government, and the Minister of the Environment evaluates the committee's recommendations when considering whether to add a species to Schedule 1, the official List of Wildlife Species at Risk, or change its status. The Minister submits the list of wildlife species at risk to the Governor in Council and takes advice from the Cabinet, which is responsible for taking the list of species into account. If a species is listed as extirpated, endangered, or threatened, SARA requires that a Recovery Strategy be prepared by the federal government, in consultation with the relevant provinces and territories, wildlife management boards, and Indigenous organizations. The Recovery Strategy describes the major threats to the species and its habitat, identifies population objectives, and states in broad terms what will need to be done to stop or reverse the species' decline. Proposed Recovery Strategies are posted on the Species at Risk Public Registry, after which public comments are accepted, generally for 60 days. Thirty days after the end of the public comment period, the recovery strategy must be finalized. Recent controversies In July 2016, the Government of Canada issued an emergency order to stop the development of a 2 km2 area on the South Shore (Montreal), Quebec, to protect the Western Chorus Frog, which by 2009 had seen a 90% decrease in its historical range. This action was opposed by the Government of Quebec, which perceived it as an overstepping of provincial jurisdiction. The emergency order stopped the development of 171 new residences that had been approved by the local municipalities and by the Ministry of Sustainable Development, Environment and Parks (Quebec). 1,000 residences are still permitted to be constructed. The original approved plan included 35.5 hectares to be retained for Western Chorus Frog habitat and breeding ponds and for a conservation area; 87 hectares will now be set aside. On March 31, 2022, the Government of Canada decided to revamp the Species at Risk Act by removing outdated provisions. Bill S-6 would modernize 29 statutes, including the Species at Risk Act. 
See also List of Wildlife Species at Risk (Canada) References Further reading Link To Copy of The Act SARA Homepage Committee on the Status of Endangered Wildlife in Canada website Species at Risk website Nature conservation in Canada Wildlife conservation in Canada Environmental law in Canada 2002 in Canadian law 2002 in the environment Convention on Biological Diversity Canadian federal legislation
Species at Risk Act
[ "Biology" ]
669
[ "Convention on Biological Diversity", "Biodiversity" ]
10,686,498
https://en.wikipedia.org/wiki/Impact%20parameter
In physics, the impact parameter $b$ is defined as the perpendicular distance between the path of a projectile and the center of a potential field $U(r)$ created by an object that the projectile is approaching. It is often referred to in nuclear physics (see Rutherford scattering) and in classical mechanics. The impact parameter is related to the scattering angle $\theta$ by $$\theta = \pi - 2b \int_{r_\mathrm{min}}^{\infty} \frac{dr}{r^{2} \sqrt{1 - (b/r)^{2} - 2U/(m v_{\infty}^{2})}},$$ where $v_{\infty}$ is the velocity of the projectile when it is far from the center, and $r_\mathrm{min}$ is its closest distance from the center. Scattering from a hard sphere The simplest example illustrating the use of the impact parameter is in the case of scattering from a sphere. Here, the object that the projectile is approaching is a hard sphere with radius $R$. In the case of a hard sphere, $U(r) = 0$ when $r > R$, and $U(r) = \infty$ for $r \leq R$. When $b > R$, the projectile misses the hard sphere, and we immediately see that $\theta = 0$. When $b \leq R$, we find that $b = R \cos(\theta/2)$. Collision centrality In high-energy nuclear physics — specifically, in colliding-beam experiments — collisions may be classified according to their impact parameter. Central collisions have $b \approx 0$, peripheral collisions have $0 < b < 2R$, and ultraperipheral collisions (UPCs) have $b > 2R$, where the colliding nuclei are viewed as hard spheres with radius $R$. Because the color force has an extremely short range, it cannot couple quarks that are separated by much more than one nucleon's radius; hence, strong interactions are suppressed in peripheral and ultraperipheral collisions. This means that final-state particle multiplicity (the total number of particles resulting from the collision) is typically greatest in the most central collisions, due to the partons involved having the greatest probability of interacting in some way. This has led to charged-particle multiplicity being used as a common measure of collision centrality, as charged particles are much easier to detect than uncharged particles. Because strong interactions are effectively impossible in ultraperipheral collisions, they may be used to study electromagnetic interactions — i.e. photon–photon, photon–nucleon, or photon–nucleus interactions — with low background contamination. Because UPCs typically produce only two to four final-state particles, they are also relatively "clean" when compared to central collisions, which may produce hundreds of particles per event. See also Distance of closest approach Tests of general relativity References http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/rutsca2.html Classical mechanics
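The hard-sphere relation lends itself to a quick numerical check. The following minimal Python sketch (an illustration added here, not part of the article or of any standard library) inverts $b = R \cos(\theta/2)$ to obtain the scattering angle from a given impact parameter:

```python
import math

def hard_sphere_scattering_angle(b: float, R: float) -> float:
    """Scattering angle (radians) for a projectile with impact parameter b
    incident on a hard sphere of radius R.

    For b > R the projectile misses the sphere and is undeflected;
    for b <= R, inverting b = R*cos(theta/2) gives theta = 2*arccos(b/R).
    """
    if b > R:
        return 0.0
    return 2.0 * math.acos(b / R)

# A grazing trajectory (b = R) is undeflected, while a head-on
# collision (b = 0) scatters straight back through theta = pi.
print(hard_sphere_scattering_angle(1.0, 1.0))  # 0.0
print(hard_sphere_scattering_angle(0.0, 1.0))  # 3.141592653589793
```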
Impact parameter
[ "Physics" ]
483
[ "Classical mechanics stubs", "Mechanics", "Classical mechanics" ]
10,686,602
https://en.wikipedia.org/wiki/Temafloxacin
Temafloxacin (marketed by Abbott Laboratories as Omniflox) is a fluoroquinolone antibiotic drug which was withdrawn from sale in the United States shortly after its approval in 1992 because of serious adverse effects resulting in three deaths. It is not marketed in Europe. History Omniflox was approved by the Food and Drug Administration in January 1992 to treat lower respiratory tract infections, genital and urinary infections such as prostatitis, and skin infections in the United States. Severe adverse reactions, including allergic reactions and hemolytic anemia, developed in over 100 patients during the first four months of its use, leading to three patient deaths. Abbott withdrew the drug from sale in June 1992. Pharmacokinetics Following oral administration, the compound is well absorbed from the gastrointestinal tract. The oral bioavailability is greater than 90%. Temafloxacin has good penetration into various biological fluids and tissues, particularly the respiratory tissues, nasal secretions, tonsils, prostate and bone. At these sites the concentrations achieved are equal to or higher than those in serum. The fluoroquinolone has a 7–8 hour half-life. Penetration into the central nervous system (CNS) is less pronounced. Excretion from the body is primarily by glomerular filtration in the kidneys. Clinical uses The compound was indicated for treating lower respiratory tract infections (community-acquired pneumonia, exacerbations of chronic bronchitis), genital and urinary tract infections (prostatitis, gonococcal and non-gonococcal urethritis, cervicitis), and skin and soft tissue infections. See also Quinolones References External links FDA press release June 5, 1992. Fluoroquinolone antibiotics Hepatotoxins Withdrawn drugs Drugs developed by AbbVie 1,4-di-hydro-7-(1-piperazinyl)-4-oxo-3-quinolinecarboxylic acids
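As a rough back-of-the-envelope illustration of what a 7–8 hour half-life implies, assuming simple first-order (exponential) elimination (an assumption made here for illustration, not stated in the article) and taking $t_{1/2} \approx 7.5$ h, the fraction of drug remaining after time $t$ is:

$$
C(t) = C_0 \, 2^{-t/t_{1/2}}, \qquad \frac{C(24\ \mathrm{h})}{C_0} = 2^{-24/7.5} \approx 0.11 ,
$$

so, under this assumption, only about a tenth of the peak concentration would remain 24 hours after a dose.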
Temafloxacin
[ "Chemistry" ]
432
[ "Drug safety", "Withdrawn drugs" ]
10,686,608
https://en.wikipedia.org/wiki/Franek
Franek is the oldest oil shaft in the world, located in the village of Bóbrka, Poland. It was dug by hand in 1854 by Ignacy Łukasiewicz. References Oil wells
Franek
[ "Chemistry" ]
41
[ "Petroleum technology", "Oil wells" ]
10,687,965
https://en.wikipedia.org/wiki/Peoria%20Waterworks
Peoria Waterworks is a building complex built in 1890 for the Peoria, Illinois water system. Architecture The three-building site was constructed in 1890 after the publicly owned Peoria Water Company was sold to John T. Moffat and Henry C. Hodgskins. The complex was designed in the Romanesque Revival style and first supplied water to the city of Peoria on December 1, 1890. The three structures, Pumping Station #1, Pumping Station #2 and the Main Well House, were included in the property's listing on the U.S. National Register of Historic Places on March 18, 1980. The original property was located at NE Adams and Lorentz St.; the address is now 100 Lorentz Ave. It is near the foot of Grandview Drive. The red sandstone buildings feature carvings, stained glass windows, copper flashing, hardwood trim, and turrets. Four gargoyles adorn the corners of the zinc roof of the Main Pumping Station. Notes Buildings and structures in Peoria, Illinois National Register of Historic Places in Peoria County, Illinois Water supply infrastructure on the National Register of Historic Places Industrial buildings and structures on the National Register of Historic Places in Illinois Industrial buildings and structures in Illinois Water treatment facilities Former pumping stations
Peoria Waterworks
[ "Chemistry" ]
249
[ "Water treatment", "Water treatment facilities" ]
10,688,606
https://en.wikipedia.org/wiki/Motorola%20C139
Motorola C139 is a cellular phone designed and manufactured for Motorola by an original design manufacturer. It is aimed at users with basic needs and has a limited feature set. The phone has been offered on AT&T's GoPhone service, TracFone, Cellular One, and Net10. It is primarily targeted at prepaid plans, and was claimed by a PC Magazine review to be the cheapest unlocked GSM handset. The Motorola C139 is supported by OsmocomBB. Design flaws The phone is designed such that the LCD screen is only readable with the backlight on. References External links Motorola's C139 Support Page Motorola C139 Manual C139 Mobile phones introduced in 2005
Motorola C139
[ "Technology" ]
141
[ "Mobile technology stubs", "Mobile phone stubs" ]
10,690,566
https://en.wikipedia.org/wiki/Cohesion%20%28geology%29
Cohesion is the component of shear strength of a rock or soil that is independent of interparticle friction. In soils, true cohesion is caused by the following: Electrostatic forces in stiff overconsolidated clays (which may be lost through weathering) Cementing by Fe2O3, CaCO3, NaCl, etc. There can also be apparent cohesion. This is caused by: Negative capillary pressure (which is lost upon wetting) Pore pressure response during undrained loading (which is lost through time) Root cohesion (which may be lost through logging or fire of the contributing plants, or through solution) Typical values of cohesion Cohesion (alternatively called the cohesive strength) is typically measured on the basis of Mohr–Coulomb theory. Typical values for rocks and for some common soils have been tabulated in the literature. Apparent cohesion of soil During critical state flow of soil, the undrained cohesion results from effective stress and critical state friction, not from chemical bonds between soil particles. All that small clay mineral particles and chemicals do during steady plastic deformation of soft soil is to cause a pore water suction, which can be measured. When soft soil is remoulded in a classification test, its strength is [(suction) × (friction)]; it remains a ductile plastic material with constant "apparent cohesion" while it flows at constant volume, because it is at a constant effective stress, and critical state friction is constant. Critical state soil mechanics analyses the bearing capacity of soft clay on the wet side of critical state in terms of a perfectly plastic material with rapid undrained "apparent" cohesion. References Collins internet-linked dictionary of Geology See also Mohr–Coulomb failure criterion Shear strength Soil mechanics
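For reference, the Mohr–Coulomb relationship mentioned above is commonly written in its effective-stress form (a standard expression; the symbols follow common soil-mechanics convention rather than anything specific to this article):

$$
\tau_f = c' + \sigma_n' \tan\varphi' ,
$$

where $\tau_f$ is the shear strength on the failure plane, $c'$ is the effective cohesion, $\sigma_n'$ is the effective normal stress on that plane, and $\varphi'$ is the effective angle of internal friction. Cohesion is thus read off as the intercept of the failure envelope at zero normal stress.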
Cohesion (geology)
[ "Physics", "Engineering" ]
364
[ "Structural engineering", "Applied and interdisciplinary physics", "Shear strength", "Soil mechanics", "Mechanical engineering" ]
10,691,989
https://en.wikipedia.org/wiki/Shoulder%20pad%20%28fashion%29
Shoulder pads are a type of fabric-covered padding used in men's and women's clothing to give the wearer the illusion of having broader and less sloping shoulders. In the beginning, shoulder pads were shaped as a semicircle or small triangle and were stuffed with wool, cotton, or sawdust. They were positioned at the top of the sleeve to extend the shoulder line. A good example of this is their use in "leg o' mutton" sleeves or the smaller puffed sleeves which are based on styles from the 1890s. In men's styles, shoulder pads are often used in suits, jackets, and overcoats, usually sewn at the top of the shoulder and fastened between the lining and the outer fabric layer. In women's clothing, their inclusion depends on the fashion taste of the day. Although from a non-fashion point of view they are generally for people with narrow or sloping shoulders, there are also quite a few cases in which shoulder pads are necessary in a suit or blazer to compensate for certain fabrics' natural properties, most notably in suede blazers, due to the weight of the material. There are also periods when pads intended to exaggerate the width of the shoulders are favored. As such, they were popular additions to clothing (particularly business clothing) during the 1930s and 1940s; the 1980s (encompassing a period from the late 1970s to the early 1990s); and the late 2000s to early 2010s. 1930 to 1945 In sports, the shoulder pad was invented in 1877 by a Princeton football player and was used in American football. In women's fashion, shoulder pads originally became popular in the 1930s when fashion designers Elsa Schiaparelli and Marcel Rochas included them in their designs of 1931. Though Rochas may have been the first to present them, Schiaparelli was the most consistent in promoting them during the 1930s and '40s and it is her name that came to be most associated with them. Both designers had been influenced by the extravagant shoulder flanges and small waists of traditional Southeast Asian ceremonial dress. Costume designer Travis Banton's broad-shouldered designs for Marlene Dietrich also influenced public tastes. Soon, broad, padded shoulders dominated fashion, seen even in eveningwear and perhaps reaching a peak of variety in 1935-36, when even Vionnet showed them; Rochas presented high, pinched-up shoulders; and Piguet outdid even Rochas by extending his widened shoulders vertically like oars or paddles. Amid all this competing extravagance, the widest shoulders were still said to come from Schiaparelli, who hadn't given them up even when they briefly dropped out of favor with designers in 1933. War was in the air during this entire period, and fashion reflected it in epaulettes and other martial details, but after World War II began in 1939, women's fashions became even more militarised. Jackets, coats, and even dresses were influenced by masculine styles, and shoulder pads became bulkier and were positioned at the top of the shoulder to create a solid look that sloped slightly toward the neck. The shoulder-padded style had now become universal, found in all garments except lingerie, so standard that when US designer Claire McCardell wanted to remove them from her garments in 1940, her financiers feared their sales would suffer and insisted that pads be retained. McCardell's innovative response was to put them in with very simple stitching so that they could be easily removed by the wearer, prefiguring the flexibility of the velcro-fastened shoulder pads of the 1980s. 
The following year, British designer Molyneux also eliminated shoulder pads, part of a prophetic trend in high fashion that would be carried further by Balenciaga in 1945 and culminate in Dior's slope-shouldered 1947 Corolle collection. Big shoulders were still popular in 1945, when Joan Crawford wore a fur coat with wide, exaggerated shoulders, designed by Adrian, in the film Mildred Pierce. In men's fashion, zoot suits had their own share of popularity. Basically, a zoot suit is based on a "regular" 2-piece suit, yet one or two sizes larger, so it was heavily padded. During this period, stiff, felt-covered cotton batting was the material used for most shoulder pads, a combination that allowed for easy adjustment but didn't hold its shape very well when washed. 1945 to 1970 Balenciaga's 1945 endorsement of sloped shoulders signaled the direction that fashion was heading, and this was confirmed with Christian Dior's transformative 1947 Corolle collection, characterized by a striking natural shoulder line. The popularity of shoulder pads with the public, too, ultimately tapered off later in the decade, after the war was over and women yearned for a softer, more feminine look. During the late 1940s to about 1951, some dresses featured a soft, smaller shoulder pad with so little padding as to be barely noticeable. Its function seems to have been to slightly shape the shoulder line. By the 1950s, shoulder pads appeared only in jackets and coats—not in dresses, knitwear or blouses as they had previously during the heyday of the early 1940s. Some of the rounded-shoulder, barrel-shaped coats of the late 1950s, particularly those of Balenciaga and Givenchy, contained shoulder pads to widen the rounded line. By the early 1960s, coat and jacket shoulder pads slowly became less noticeable (with Marc Bohan's fall 1963 collection for Dior a notable exception) and midway through the decade, shoulder pads had disappeared. 1970s Shoulder pads made their next appearance in women's clothing in the early 1970s, through the influence of British fashion designer Barbara Hulanicki and her label Biba. Biba produced designs influenced by the styles of the 1930s and 1940s, and so a soft version of the shoulder pad was revived. Ossie Clark was another London designer using shoulder pads at the time, showing forties-revival suits as early as 1968. During the first five years of the 1970s, a number of designers in other fashion capitals also presented padded shoulders with an explicit 1940s inspiration, constituting a minor trend that peaked in 1971. In 1970, Yves Saint Laurent showed forties-themed padded shoulders; in 1971, Angelo Tarlazzi, Yves Saint Laurent, Karl Lagerfeld for Chloé, Marc Bohan for Dior, Valentino, Jean-Louis Scherrer, Guy Laroche, Michel Goma for Patou, Michele Aujard, Thierry Mugler, and many New York designers; in 1972, Jean-Louis Scherrer and Scott Barrie; in 1973, Valentino, Jean-Louis Scherrer, and Daniel Hechter; and in 1974, Jean-Louis Scherrer and Nino Cerruti. These padded shoulders never reached mainstream acceptance, though; Saint Laurent's forties-revival attempts in particular were widely criticized, and so the look was relatively limited in reach, with designers showing, and the public preferring, the relaxed, natural, often jeans-based clothing styles typical of the times. 
During the mid-1970s, Saint Laurent and a few others did show the occasional padded-shoulder jacket scattered among the popular ethnic and peasant looks, but these were sensibly proportioned, easy, and contemporary in appearance rather than part of a forties look. They suited the standard officewear women were adopting as they entered the workforce in greater numbers during the decade, a look codified by the 1977 publications of John T. Molloy's The Woman's Dress for Success Book and Michael Korda's Success!. The shoulder padding occasionally seen in these business blazers was unobtrusive, no more pronounced than in a standard men's suit jacket, and the most high-fashion versions carried no pads at all, in line with the unconstructed Big Look that dominated the fashion world at the time.

Fall 1978

For fall 1978, designers in all fashion capitals suddenly endorsed wide, padded shoulders across the board, introducing the broad-shouldered styles that would characterize the 1980s. There had been some signs of a move toward broader shoulders the previous year, but a January 1978 collection from Yves Saint Laurent, in which he showed a handful of jackets with exaggerated shoulder padding over slim trousers, would be cited as the first clear expression of the trend. Jean-Louis Scherrer showed somewhat similar square-shouldered designs two days before Saint Laurent, but it was Saint Laurent's shoulders that made an impression on the press. In later years there would be various claims about who began the eighties big-shoulders trend, with Norma Kamali, Giorgio Armani, and several others each cited as the sole originator, but Saint Laurent was the designer credited by sources at the trend's 1978 inception with launching it. When most of the rest of the fashion world showed broad-shouldered looks a couple of months later, two distinct versions emerged. The first, favored by Paris designers such as Saint Laurent, Karl Lagerfeld for Chloé, Thierry Mugler, Claude Montana, Pierre Cardin, Jean-Claude de Luca, Anne Marie Beretta, France Andrevie, and a number of others, was an explicit but exaggerated 1940s-revival silhouette based largely on tailored suits and dresses, though it was more a slim-skirted haute couture forties look than the flared-skirt, World War II Utility Suit-inspired shapes Saint Laurent had flirted with in the early seventies; there were no platform shoes or snoods this time. This first version was referred to as retro and included 1940s accessories, some mid-20th-century sci-fi looks, and military influences. The second was a more contemporary sportswear look in which shoulder pads were added to easy but slimmed-down casualwear, favored largely by US and Italian designers such as Perry Ellis, Norma Kamali, Calvin Klein, and Giorgio Armani. This time the shoulder line was usually continuous from outer edge to neck, without the dip toward the center seen in the 1940s, and the pads used, even when enormous, were much lighter and held their shape better than those of the 1940s, being most often made of foam and other lightweight, moldable materials. Because shoulder pads had not been this common in womenswear in decades, some in the fashion industry worried that the tailoring skills necessary for them had been lost, and measures were taken to train workers in their proper placement.
Initially, this big change from the natural shoulder of the sixties and seventies could be extreme, with some designers showing shoulders three feet wide and others presenting pagoda shoulders, and the buying public was strongly resistant. Undeterred, designers continued to present the look, slowly acclimating the public to it until it became one of the most characteristic and popular fashion trends of the 1980s. Most designers adopted the new trend of padded shoulders, but a few prominent ones, Kenzo, Ralph Lauren, and Emanuel Ungaro among them, refrained, at least at first. Kenzo mostly adhered to his popular, easy, comfortable clothes even during the shoulder-padded eighties. Ralph Lauren continued with his familiar English country classics and devoted his fall 1978 collection to a cowboy theme, his shoulders the same size they had been in previous seasons; he would not adopt the new big-shouldered silhouette until the following year, and it would remain only a minor part of his offerings into the eighties. Ungaro resisted the new broad-shoulder trend for only a season or two, during which he continued to show the easy, seventies Soft Look/Big Look, before enthusiastically adopting big-shoulder styles in 1979 and making the look his signature the following decade.

Shoulder pads in 1970s menswear

Standard, mass-market menswear during the 1970s continued to feature unobtrusive shoulder pads shaping suits and sport jackets, but high-fashion menswear broadly followed the same trajectory as high-fashion womenswear, with a delay of a season or two. Thus there was a removal of shoulder pads and other internal structuring during the easy, oversized, unconstructed Big Look or Soft Look era of the mid-seventies, spearheaded in womenswear by Kenzo Takada in 1973-74 and in menswear by Giorgio Armani a couple of years later. When high-fashion womenswear reverted to highly structured garments with big shoulder pads for fall 1978, high-fashion menswear followed suit the following year, with Cardin replicating his women's pagoda shoulders in his men's suits and even Armani adding unusually pronounced shoulder pads to his men's jackets, a trend that would continue through the following decade.

1980s

The early 1980s continued a trend begun in the late 1970s toward a resurgence of interest in the ladies' eveningwear styles of the early 1940s, with peplums, batwing sleeves, and other design elements of the period reinterpreted for a new market. The shoulder pad helped define the silhouette and continued to be made in the cut-foam versions introduced in the fall 1978 collections, especially in well-cut suits reminiscent of the World War II era. These styles had been resisted by the public at their 1978 introduction, but designers continued to present exaggerated shoulder pads into the eighties until they saturated the market, and women did come to adopt them, with everyone from television celebrities to politicians wearing them. British Prime Minister Margaret Thatcher, for example, was internationally noted for her adoption of these fashions as they increasingly became the norm.
Before too long, these masculinized shapes were adopted by women seeking success in the corporate world, women who in the mid-seventies had worn sensibly proportioned blazers for the same purpose. Exaggerated shoulder pads later came to be seen as an icon of women's attempts to smash the glass ceiling, a mission aided by their notable appearance in the US TV series Dynasty, whose stars' broad-shouldered, Valentino-inspired outfits were designed by Nolan Miller. As the decade wore on, exaggerated shoulder pads became the defining fashion statement of the era, known as power dressing (a term that had previously been applied to the more sensibly proportioned business blazers of the mid-seventies), bestowing the perception of status and position on those who wore them. Some of the exaggerated shoulder pad sizes from the trend's fall 1978 introduction became accepted and even common among the public by the mid-eighties. Every garment from the brassiere upwards would come with its own set of shoulder pads, and women frequently layered one shoulder-padded garment atop another, a trend launched by designers in 1978. To prevent excessive shoulder padding, velcro was sewn onto the pads so that the wearer could choose how many sets to wear. The ability to remove shoulder pads also helped prevent deforming the pads in the wash, though discomfort could result if a pad was not attached securely to the velcro strip and the rough side scratched the skin. Other problems experienced by women as shoulder pads became widespread included slipping and displacement of the pads in oversized garments and interference with purse straps. Prominent designers of big shoulders who had name recognition with the public during this period included Norma Kamali, Emanuel Ungaro, and Donna Karan. Kamali was one of a number of designers who, instead of just reviving highly tailored 1940s-style suits, added large shoulder pads to more contemporary sportswear styles; she achieved great fame and influence in 1980 by showing sweatshirt-fabric versions of the flounced, hip-yoked, mini-length skirts she had introduced in 1979 (called rah-rah skirts in the UK) and presenting them with hugely shoulder-padded tops in the same material. Some made the plausible claim that the worldwide success of this collection was what finally made shoulder pads acceptable to the public after two or three years of designers promoting them. Ungaro became perhaps the most commercially successful of the Paris designers of the period by maximizing the use of seductive-looking shirring, ruching, and draping in large-shouldered dresses and suits, reintroducing a Schiaparelli-era trend of Edwardian revival. Donna Karan, who had achieved fame in the 1970s as one of the designers behind the Anne Klein label, opened her own house in the mid-eighties, specializing in versatile separates for working women as she had in the seventies, but with eighties-style big shoulder pads and more formal glamor added to suit the times. Though distracting to the eye today, exaggerated shoulder pads were so normal during the eighties that the huge shoulders of Karan, Ungaro, and others often went unremarked by fashion writers. Throughout the big-shoulder period from fall 1978 through the 1980s, designers and fashion writers often claimed that the current year's shoulders were not as big as the previous year's.
Often, means besides or in addition to shoulder pads were used to enlarge the shoulder, including puff-top sleeves, tucks and pleats, shoulder flanges, and stiffened ruffles. Yet pronounced shoulder padding continued in high fashion through the mid-eighties. The most consistent in showing particularly huge pads was probably Claude Montana, who declared in 1985, "Shoulders forever!" Nicknamed the "King of the Shoulder Pad," Montana was credited with silhouettes that defined the 1980s power-dressing era. Some designers never really took pads up, particularly Japanese designers such as Kenzo and Issey Miyake, but by and large most put them in everything, with almost all creating their own versions of the heavily structured, prominently shoulder-padded eighties suit jacket, even normally independent designers like Mary McFadden, Jean Muir, André Courrèges, and Giorgio di Sant'Angelo. Eighties designers even incorporated big shoulder pads when reviving styles from earlier, non-shoulder-padded eras like the 1950s and 1960s. For instance, a version of the 1950s chemise dress was widely shown by designers from the 1978 inception of the big-shoulder era into the eighties, but with shoulder pads instead of authentic 1950s sloped shoulders. Similarly, when Thierry Mugler did sixties-revival styles in 1985, they included his characteristic enormous shoulder pads. Even sixties-revivalist Stephen Sprouse showed his period-perfect shift and trapeze minidresses in the eighties with broad-shouldered jackets and topcoats. Designers producing more eighties-looking minidresses added shoulder pads because they felt that prominent shoulders helped balance the increased expanse of leg. During a brief general designer return to a sort of mid-seventies style of long dirndl skirts and shawls for fall 1981, most shoulders remained broad and padded, very unlike the seventies. All of this had an effect on the public, so that by the end of the era some mass-market shoulder pads were the size of dinner plates, and people were no longer shocked by them as they had been at their 1978 introduction. During the mid-eighties, though, there were clear signs of a move away from big shoulder pads among several prominent designers, with Vivienne Westwood introducing her famous 1985-86 mini-crini specifically to, as she put it, "kill this big shoulder." Christian Lacroix's celebrated mini-pouf skirt collections of 1986-87 were dominated by sloping, fichu shoulders, and even Karl Lagerfeld, who had been an early leader in the 1978 move to huge shoulders, in 1986 took the pads off the shoulders and placed them visibly on the outside of the hips. Two years later, he would proclaim that shoulders would now be "tiny." Yves Saint Laurent had initiated the eighties big-shoulder trend in January 1978 and had been a shoulder-pad stalwart throughout the intervening years, but in 1988 even his shoulders, while still padded, had noticeably narrowed. The two designers most noted for showing huge shoulders at the start of the era, Thierry Mugler and Claude Montana, brought their shoulders down in size somewhat mid-decade, with Montana giving up big shoulders entirely by 1988, when he began showing collections with completely natural shoulders. Avant-garde designers like Adeline André and Marc Audibet had long shown sloped shoulders with no pads, as had Romeo Gigli, who was hailed as the most prophetic designer of the end of the eighties.
He showed almost exclusively natural, sloping shoulders, even on tailored jackets. This direction among designers was clear enough that in The Washington Post's New Year in/out list for 1989, "shoulder pads" were listed as out and "shoulders" were listed as in. The public and retailers, though, had embraced shoulder pads wholeheartedly by the end of the decade, feeling that they filled out the wearer's form and gave clothes a more saleable "hanger appeal." Shoulder pad manufacturers were flourishing, with millions of pads produced every week. Many women seemed reluctant to give up big shoulder pads as designers began sending new signals in the late eighties, and prominent shoulder pads would not completely disappear until well into the nineties.

Shoulder pads in 1980s menswear

In menswear, the exaggerated shoulder pads introduced into high-fashion clothing in 1979 continued to various degrees throughout the eighties, even becoming mainstream, with many everyday business suits having more pronounced shoulders than had usually been worn in the seventies. High-fashion shoulder pad shapes varied with the whims of designers, a sharp-edged pad preferred one season, a more rounded pad another. Part of what drove these styles was the increased proliferation of serious working out in the eighties, after widespread fitness and health pursuits had emerged in the seventies. Near-bodybuilder physiques became normal sights from the eighties on, both on the streets and in advertising, and jacket shapes seemed to echo this, sometimes by padding the shoulders and cutting the jacket to an even more pronounced V-shape, other times by reducing or omitting the pads to let the newly built-up wearer's own body give the jacket shape. By the end of the eighties there was a fad for often brightly colored sport jackets with big shoulders worn over deep-cut, also often brightly colored muscle tank tops or string tank shirts, or even no shirt at all, letting a well-worked-out torso show and sometimes allowing the shoulder-padded jacket to slide off the wearer's own chiseled shoulder, a style that would continue into the early nineties.

1990s

The shoulder pad fashion carried over from the late 1980s and remained popular in the early 1990s, but wearers' tastes were changing amid a backlash against 1980s culture. Some designers continued to produce ranges featuring shoulder pads into the mid-1990s, and shoulder pads remained prominent in women's formal suits and matching top-and-bottom ensembles, as exemplified in early episodes of The Nanny from 1993 and 1994, in which costume designer Brenda Cooper outfitted star Fran Drescher in pieces such as late-eighties-style square-shouldered jackets by Moschino and Patrick Kelly. The velcro-fastened shoulder pads of the eighties were still familiar items in the early nineties; in 1993, a US patent was even registered for a removable shoulder pad containing a hidden pocket for valuables. But as the decade wore on, shoulder-padded styles became outdated and were shunned by young and fashion-conscious wearers, reduced to smaller, subtler versions augmenting the shoulder lines of jackets and coats.

2000s and 2010s

The late 2000s and early 2010s saw a resurgence of shoulder pads. Many young women imitated pop artists, most notably Lady Gaga and Rihanna, who were known for their use of shoulder pads in their distinctive outfits.
Shoulder pads appeared widely on runways and in fashion designer collections, and the accompanying revival of 1980s trends became mainstream. By the 2009-2010 seasons, shoulder pads had made their way back into the mainstream market; by 2010, retailers such as Wal-Mart featured shoulder pads in at least half of all women's tops and blouses. The late 2010s saw another resurgence of shoulder pads. With the rise of the Me Too movement and other female empowerment movements, the increasing number of women elected to political office, and a continuing revival of 1980s trends, many women again opted to wear clothes with shoulder pads.

See also
1930–1945 in fashion
1980s fashion
Epaulette

References

External links
Shoulder pad (fashion)
[ "Technology" ]
5,008
[ "Components", "Parts of clothing" ]
10,692,807
https://en.wikipedia.org/wiki/Circumscribed%20halo
A circumscribed halo is a type of halo, an optical phenomenon typically taking the form of a more or less oval ring that circumscribes the circular 22° halo centered on the Sun or Moon. Like many other halos, it is slightly reddish on the inner edge, facing the Sun or Moon, and bluish on the outer edge. The shape of the circumscribed halo depends strongly on the elevation of the Sun or Moon above the horizon. Its top and bottom (i.e., the points directly above and below the Sun or Moon) always lie tangent to the 22° halo, but its left and right sides take on different shapes depending on solar (or lunar) elevation. At an elevation between about 35° and 50°, the sides form two distinct, downward-drooping "lobes" outside the 22° halo. As the Sun or Moon rises higher (between about 50° and 70°), the drooping diminishes toward a more regular oval shape. At an elevation of about 70° or more, the circumscribed halo approaches a circle and essentially covers the 22° halo, becoming nearly indistinguishable from it and identifiable only by its tendency to show more saturated colors. When the Sun or Moon is at an elevation below about 35°, the circumscribed halo breaks up into the upper and lower tangent arcs.

See also
Circumzenithal arc
Circumhorizontal arc

References

External links
www.paraselene.de – HaloSim computer simulations of a circumscribed halo.
Atmospheric Optics - Circumscribed Halo - solar altitude – an animation showing how the shape of the phenomenon changes as the Sun or Moon rises.
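The elevation-dependent behavior described above lends itself to a simple classification. The following Python sketch is illustrative only: the function name is hypothetical, the threshold values are those quoted in the description, and in reality the transitions between shapes are gradual rather than sharp.

import math  # not strictly needed; kept for extensions such as angular computations

def circumscribed_halo_appearance(elevation_deg: float) -> str:
    """Approximate appearance of the circumscribed halo for a given
    solar (or lunar) elevation, using the thresholds quoted above."""
    if elevation_deg < 0:
        return "luminary below the horizon: no circumscribed halo"
    if elevation_deg < 35:
        return "breaks up into separate upper and lower tangent arcs"
    if elevation_deg < 50:
        return "oval with two downward-drooping lobes outside the 22 degree halo"
    if elevation_deg < 70:
        return "increasingly regular oval shape"
    return "nearly circular, almost indistinguishable from the 22 degree halo"

# Example: mid-morning Sun at 42 degrees elevation
print(circumscribed_halo_appearance(42.0))
# prints: oval with two downward-drooping lobes outside the 22 degree halo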
Circumscribed halo
[ "Physics" ]
404
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,694,002
https://en.wikipedia.org/wiki/Industrial%20oven
Industrial ovens are heated chambers used for a variety of industrial applications, including drying, curing, or baking components, parts, or final products. Industrial ovens can be used for large- or small-volume applications, in batches or continuously with a conveyor line, and in a variety of temperature ranges, sizes, and configurations. Such ovens are used in many different applications, including chemical processing, food production, and even the electronics industry, where circuit boards are run through a conveyor oven to attach surface-mount components.

Some common types of industrial ovens include:

Curing ovens – Designed to cause a chemical reaction in a substance once a specific temperature is reached. Powder coating is one common curing oven use.
Drying ovens – Designed to remove moisture. Typical applications are pre-treating and painting. Such ovens are also sometimes known as kilns, though they do not reach the high temperatures used in ceramic kilns.
Baking ovens – Combine the functions of curing and drying ovens.
Reflow ovens – Machines used primarily for reflow soldering of surface-mount electronic components to printed circuit boards (PCBs). The oven contains multiple zones, which can be individually controlled for temperature; generally there are several heating zones followed by one or more cooling zones. The PCB moves through the oven on a conveyor belt and is therefore subjected to a controlled time-temperature profile.
Batch ovens – Also called cabinet or walk-in/truck-in ovens, batch ovens allow for curing, drying, or baking in small batches using wheeled racks, carts, or trucks. Ovens such as these are often found in large-volume bakeries in places such as supermarkets.
Conveyor or continuous ovens – Typically part of an automated conveyor processing line, conveyor ovens allow for higher-volume processing. Heat tunnels are an example.
Clean room ovens – Designed for applications requiring a cleanroom, such as semiconductor manufacturing or biotechnology processes.

See also
List of ovens
Gradient oven tester
Chandley Ovens

References
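Because a reflow board's thermal exposure is determined entirely by the zone setpoints, zone lengths, and belt speed, a rough time-temperature profile can be estimated before a run. The following Python sketch is a minimal illustration under stated assumptions: the zone setpoints, lengths, belt speed, and thermal time constant are all hypothetical example values, and the board is modeled with a simple first-order (exponential lag) response; real ovens are profiled with thermocouples attached to an actual board.

import math

# Hypothetical oven: (zone setpoint in degrees C, zone length in meters).
ZONES = [(150, 0.5), (180, 0.5), (200, 0.5), (245, 0.5), (60, 1.0)]  # last zone cools
BELT_SPEED = 0.01  # meters per second (assumed)
TAU = 25.0         # assumed board thermal time constant, seconds

def reflow_profile(zones, belt_speed, tau, t_start=25.0):
    """Estimate board temperature at each zone exit, assuming the board
    relaxes exponentially toward each zone's setpoint while inside it."""
    temps = []
    t_board = t_start
    for setpoint, length in zones:
        dwell = length / belt_speed  # seconds the board spends in this zone
        t_board = setpoint + (t_board - setpoint) * math.exp(-dwell / tau)
        temps.append(round(t_board, 1))
    return temps

# Prints the estimated board temperature (degrees C) at each zone exit;
# with these example numbers the peak occurs near the fourth (reflow) zone,
# followed by a sharp drop in the cooling zone.
print(reflow_profile(ZONES, BELT_SPEED, TAU))

Slowing the belt lengthens the dwell time in every zone, pushing each exit temperature closer to that zone's setpoint, which is why belt speed is the primary knob for tuning a profile once setpoints are fixed.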
Industrial oven
[ "Engineering" ]
430
[ "Industrial ovens", "Industrial machinery" ]