Dataset schema (reconstructed from the dataset viewer header):
id: int64, ranging from 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, ranging from 3 to 71.8k
subcategories: list, lengths 0 to 30
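Records with this schema can be inspected programmatically. The following is a minimal Python sketch assuming the dump is hosted as a Hugging Face dataset; the path "user/wikipedia-stem-articles" is a hypothetical placeholder, since the preview does not name the dataset:

    # Minimal sketch: loading a dataset with the schema shown above.
    # NOTE: "user/wikipedia-stem-articles" is a hypothetical placeholder,
    # not the actual name of this dataset.
    from datasets import load_dataset

    ds = load_dataset("user/wikipedia-stem-articles", split="train")

    # Each record carries the seven columns listed above.
    row = ds[0]
    print(row["id"], row["url"])
    print(row["categories"], row["subcategories"], row["token_count"])
    print(row["text"][:200])  # first 200 characters of the article text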
77,094,598
https://en.wikipedia.org/wiki/KL-50
KL-50 (also known as RR-004 by its developer Pharminox Limited) is a drug candidate designed to treat glioblastoma. It functions by alkylating DNA to yield O6-(2-fluoroethyl)guanine, which causes DNA interstrand crosslinks. Cancers lacking the DNA-repair protein MGMT, which include roughly half of glioblastomas, are unable to repair this damage. The KL-50 molecule was patented by Pharminox Limited in 2013, with the patent detailing its potential use in treating MMR-/MGMT- gliomas. References Imidazotetrazines Amides Fluorine compounds Experimental cancer drugs Ureas
KL-50
[ "Chemistry" ]
133
[ "Organic compounds", "Amides", "Functional groups", "Ureas" ]
77,095,011
https://en.wikipedia.org/wiki/Dxcover
Dxcover Limited is a Scottish company founded on 16 May 2016 and based in Glasgow, UK. It combines novel hardware with artificial intelligence algorithms to analyse patients' blood samples for the presence of disease. It is a clinical-stage liquid biopsy company that uses artificial intelligence algorithms for the early detection of cancers and other diseases in order to improve survival rates. Its markets are located in the US, UK and EU. The company aims to detect early-stage (Stage I and Stage II) cancers. History The company was named ClinSpec Diagnostics Limited between 16 May 2016 and 26 April 2021, when it was rebranded as Dxcover Limited. It was spun out from the University of Strathclyde, Glasgow. Technology Dxcover has developed its 'Drop, Dry, Detect' technology, which detects signs of cancer in minutes using a trained artificial intelligence model. Its Panoramic platform is a multi-omic spectral analysis (MOSA) technology covering several cancers: brain, colorectal, and lung cancer, as well as advanced adenoma. The technology analyses signals that are missed by traditional cancer detection methods. For brain tumors, no other biomarker test is available. The test algorithm makes use of the complete biological profile of the patient's serum sample rather than concentrating on a single biomarker for illness. Results are accessible in minutes, and no specialized sample preparation is needed. Funding Based on research conducted at the University of Strathclyde, Dxcover Limited has raised over £10 million for its technology enabling the early diagnosis of numerous malignancies. Existing investors led by Eos Advisory LLP contributed £7.5 million in Series A funding, with further contributions from the University of Strathclyde, Mercia Fund Management, Scottish Enterprise, Social Investment Scotland Ventures, and Norcliffe Capital. As Dxcover continues to expand its U.S. network, Boston-based life science investor Mark Bamforth of Thairm Bio also joined the round. The European Innovation Council also awarded the firm a £2.2 million grant. References Cancer screening Artificial intelligence companies Companies based in Glasgow Medical technology companies Companies established in 2016 Biotechnology companies of the United Kingdom
Dxcover
[ "Biology" ]
528
[ "Medical technology companies", "Life sciences industry" ]
77,095,044
https://en.wikipedia.org/wiki/Enzymatic%20polymerization
Enzymatic polymerization is a promising area of polymer research, providing a sustainable and adaptable alternative to conventional polymerization processes. Its capacity to manufacture polymers with precise structures under mild conditions opens up new possibilities for material design and application, helping to advance both research and industry. It is a novel and sustainable method of synthesizing polymers that utilizes the catalytic properties of enzymes to both initiate and regulate the polymerization process. It works under mild conditions, usually at room temperature and pressure and in aqueous environments, in contrast to conventional chemical polymerization techniques, which frequently require harsh conditions and harmful reagents. This approach allows fine control over the structure and functionality of polymers while consuming less energy and having a smaller environmental impact. This polymerization technique has the considerable advantage of being compatible with renewable resources. Many of the monomers utilized in these procedures come from natural sources, which aligns with the ideas of green chemistry and sustainability. This alignment is especially important given growing environmental concerns and the quest for more sustainable industrial operations. The potential applications of polymers produced via enzymatic polymerization are vast, spanning the fields of biomedicine, materials science, and environmental engineering. For example, biodegradable polymers produced using this method are very useful for medical applications such as drug delivery systems, biosensors and tissue engineering scaffolds. Furthermore, enzymatic polymerization opens up fascinating possibilities for the production of innovative biomaterials with tailored characteristics for specific industrial applications. Mechanism of enzymatic polymerization Enzymatic polymerization can proceed in a variety of ways, including: Condensation Polymerization: Enzymes such as lipases and proteases catalyze the step-growth polymerization of monomers by establishing ester, amide, or peptide bonds, releasing small molecules such as water or alcohol as by-products. Addition Polymerization: This route includes radical-mediated processes, in which enzymes such as peroxidases initiate polymerization by producing radical species that propagate the polymer chain. Ring-Opening Polymerization: Enzymes help to open cyclic monomers to produce linear polymers, a typical route for synthesizing polyesters and polyamides. Types of enzymes used in polymerization Several classes of enzymes can catalyze the synthesis of different kinds of polymers. Key enzymes include: Lipases, used in the synthesis of polyesters and polyamides, accelerate the esterification and transesterification reactions required for polymer chain formation. In oxidative polymerization, peroxidases aid the polymerization of phenolic and aniline derivatives, resulting in the production of conductive polymers. Glycosyltransferases are necessary for polysaccharide formation because they catalyze the transfer of sugar moieties to create glycosidic linkages. Proteases are enzymes that help create peptide bonds, allowing amino acid monomers to be polymerized into polyamides or proteins. References Polymerization reactions
Enzymatic polymerization
[ "Chemistry", "Materials_science" ]
625
[ "Polymerization reactions", "Polymer chemistry" ]
77,095,053
https://en.wikipedia.org/wiki/Metallopeptide
Metallopeptides (also called metal-peptides or metal peptide complexes) are peptides that contain one or more metal ions in their structure. Like metalloproteins, they are metallofoldamers, and as with metalloproteins, a metallopeptide's functionality is conferred by the bound metal-ion cofactor. These short structured peptides are often employed to develop mimics of metalloproteins and systems similar to artificial metalloenzymes. A multitude of naturally occurring peptides display biological and chemical activities when bound to various metal ions, where different metal-ion cofactors can lead to different reactivity and even different folding and physical characteristics (e.g. solubility or stability) of the structure. Synthetic equivalents of such peptides are engineered to bind metal ions and display a variety of physical, chemical, and biological reactivities and characteristics. Examples Over the last 40 years, there has been a significant amount of research on metal-binding peptides and their characteristics, structures, and chemical reactivities. Vincent L. Pecoraro and his group investigate the interaction of peptides with heavy metals in the body; Katherine Franz leads a group studying Cu-binding peptides; Angela Lombardi and her unit focus on the development of artificial metalloenzymes and similar peptide systems; and the group of Peter Faller focuses on the redox reactivity of Cu-peptides. Natural Natural metallopeptides with antibiotic, antimicrobial and anticancer properties have been of particular interest to the scientific community (e.g. the divalent bacitracin, histatin and Fe/Cu-bleomycin). At the same time there is increasing attention to the role of metallopeptides in disease development. For example, metallochemical interactions in brain tissue can contribute to neurodegenerative conditions because of the naturally high concentration of metal ions in the brain. Metallochemical reactions occurring outside physiologically healthy concentrations can thus contribute to the development of diseases such as Alzheimer's disease, a condition related to β-amyloid metallopeptides. Another example is infectious prion polypeptides and specific isoforms of the prion protein, which contribute to disease transmission and development. Artificial De novo designed peptides which self-assemble in the presence of copper (Cu), forming supramolecular assemblies, were presented by Korendovych et al. Additionally, there are examples of metallopeptides that are, at least partially, composed of non-natural amino acids, with possible applications in drug discovery and biomaterials. Metal coordination Being molecules that are often only activated for biological and chemical function following metal binding, metallopeptides are subject to certain restrictions and requirements imposed by the specific coordination of metal ions. Usually metal cofactors are coordinated by nitrogen, oxygen or sulfur centers belonging to amino acid residues of the peptide. These donor groups can be introduced by histidine (via its imidazole ring), cysteine (via its thiolate group), and carboxylate-bearing residues (e.g. aspartate), but are not limited to these. Other amino acid residues, including non-natural amino acids, as well as the peptide backbone, have been shown to bind metal centers and provide donor groups.
The research on metal binding by peptides ranges from the coordination of biometals (such as calcium, magnesium, manganese, zinc, sodium, potassium, and iron) to heavy metals (such as arsenic, mercury, and cadmium). Synthesis and analysis Biosynthesis Peptides are synthesized in living organisms inside the cell, analogously to proteins. Chemical synthesis Solid-phase peptide synthesis (SPPS) is a well-established method for producing synthetic peptides. SPPS builds a peptide chain through sequential coupling of amino acid derivatives. Analysis The interactions between metal ions and peptides are typically studied in solution using spectroscopic or electrochemical methods. These include circular dichroism (CD), nuclear magnetic resonance (NMR) spectroscopy, cyclic voltammetry, and mass spectrometry (MS). See also Bioinorganic chemistry Evolution of metal ions in biological systems Biometal (biology) Coenzyme Metalloproteins References Peptides Biochemistry Bioinorganic chemistry Metalloproteins Synthetic biology
Metallopeptide
[ "Chemistry", "Engineering", "Biology" ]
918
[ "Synthetic biology", "Biomolecules by chemical classification", "Biological engineering", "Peptides", "Bioinformatics", "Molecular genetics", "nan", "Molecular biology", "Biochemistry", "Metalloproteins", "Bioinorganic chemistry" ]
77,095,702
https://en.wikipedia.org/wiki/Joshua%20Macabuag
Joshua Macabuag is a civil engineer who specialises in disaster response. Early life & education Macabuag studied civil engineering at the University of Oxford. Following his graduation, he worked on a project in South Africa with the charity Engineers Without Borders. Career In 2009, Macabuag won first prize at the ICE's Graduate and Student Papers competition for his paper investigating the use of polypropylene straps to earthquake-proof buildings in Nepal. His expertise in disaster response has led him to assist during a number of major global disasters, including the April 2015 Nepal earthquake and Hurricane Irma. In 2018, Macabuag published a study in Geosciences on the effect of debris-induced damage on buildings. He has also published other studies on modelling vulnerabilities during natural disasters. In 2021, Macabuag partnered with the World Bank as a disaster risk engineering consultant. He was a keynote speaker at the 14th Brunel International Lecture series, where he discussed disaster response. In 2023, he became a Royal Academy of Engineering Fellow. References Civil engineers Alumni of the University of Oxford Living people Year of birth missing (living people) Fellows of the Royal Academy of Engineering
Joshua Macabuag
[ "Engineering" ]
262
[ "Civil engineering", "Civil engineers" ]
77,095,997
https://en.wikipedia.org/wiki/Scale-down%20bioreactor
A scale-down bioreactor is a miniature model designed to mimic or reproduce large-scale bioprocesses, or specific process steps, on a smaller scale. These models play an important role during the process development stage by allowing fine-tuning of individual parameters and steps without the need for substantial investment in materials and consumables. Vessel geometry, including aspect ratios, impeller designs, and sparger placements, should be nearly identical between the small and large scales. For this purpose, computational fluid dynamics (CFD) simulations are used, since they can investigate the scalability of mixing processes from small-scale models to larger production scales. Scientists use the outcomes of studies on scale-down systems to facilitate the transition from laboratory-scale studies to industrial large-scale conditions. Types of scale-down bioreactors Stirred-tank bioreactors are commonly extended into two-compartment systems, which provide the fundamental structure for scale-down bioreactors. In the two most common configurations, the culture is circulated either between two stirred-tank reactors (STR-STR) or from an STR through a plug flow reactor (STR-PFR). STR-STR The application of coupled stirred-tank reactors in scale-down models is a powerful technique for simulating and studying the complex conditions of large-scale industrial bioreactors. By providing a controlled environment in which to replicate non-homogeneous conditions, these models offer valuable insights into optimizing bioprocesses, ensuring consistent product quality, and reducing costs and time in biotechnological production. Co-cultures, in which two or more complementary microbes are cultivated together, can also be conducted in such systems. One recent study using a two-compartment bioreactor examined the production of violacein. STR-PFR Scale-down reactors can also combine compartment types. In a two-compartment STR-PFR setup, the first compartment is operated as an STR for initial growth and biomass buildup, while the second compartment functions as a PFR with a defined residence time. Fusing a stirred-tank reactor (STR) with a plug flow reactor (PFR) in a two-compartment system offers significant flexibility in flow characteristics to meet specific process requirements. This configuration allows precise control over various factors and can improve bioprocess outcomes by shaping residence time distributions and substrate gradients. In such a system, a portion of the culture is exposed to varying environmental cues, such as altered mixing times, nutrient deprivation, aeration, pH, or temperature, before being recirculated into the main STR. The resulting perturbations simulate the transient stresses encountered in large-scale industrial reactors. The residence time in the PFR zone is calibrated to match the typical timescales experienced in large-scale industrial bioprocesses. Systems further optimized to explore shorter timescales are termed dynamic microfluidic systems. Computational fluid dynamics (CFD) simulations can predict and model the flow patterns in complex STR-PFR systems. Advantages of scale-down bioreactors Efficient exploration of operating conditions During process development, a wide range of operating conditions should be explored in order to identify the optimal parameter ranges, which is crucial for achieving successful large-scale bioprocesses.
However, running the required number of experiments in large-scale fermenters would be time-consuming, resource-intensive, and costly. Hence, smaller scale-down systems are used, in the form of miniaturized bioreactors ranging from microliters to milliliters in scale. Miniaturized bioreactors enable researchers to conduct numerous experiments simultaneously, exploring various combinations of process parameters such as temperature, pH, agitation rates, and nutrient concentrations. These models facilitate efficient process optimization at a small scale, and the insights gained from these experiments can be transferred to larger-scale systems. The scalability of the process parameters and operating conditions identified through scale-down models ensures a smooth transition to pilot- and commercial-scale production. This high-throughput approach allows rapid screening and identification of optimal operating conditions, which would be impractical and costly with larger-scale systems. By working at a smaller scale, these miniaturized bioreactors significantly reduce the consumption of raw materials, media components, and other consumables needed for fermentation runs. This resource-efficient approach not only minimizes costs but also aligns with sustainable practices, reducing waste and environmental impact. Bioprocess engineering strategies are applied to enhance the overall productivity of cultivation experiments. Important parameters such as the oxygen transfer rate (OTR), dissolved oxygen concentration, superficial gas velocity, volume-specific power input P/V, and mixing time can be modified and optimized to obtain high titre formation according to the desired requirements. These titre values can be comparable to the values obtained in large-scale industrial bioprocesses. Efficient microbial strain testing and characterization Microbial strain engineering and cell factory engineering is a developing area of interest and is important in determining the outcome of large-scale fermentation. With developments in metabolic engineering and synthetic biology, new strains are constructed which need to be tested under large-scale-like conditions. This is an instance where scale-down bioreactors can be coupled with microbial strain engineering to broaden the scope of research and bridge the gap between the two interdisciplinary fields. Application of computational fluid dynamics By developing and applying computational fluid dynamics simulations, process scientists and engineers can gain valuable insights into the fluid flow patterns and mixing dynamics within various geometries. The ability to run multiple experiments in parallel, combined with the reduced resource requirements, translates into accelerated process development timelines. Researchers can quickly iterate through various conditions, analyze results, and make informed decisions, ultimately shortening the overall development cycle. Two parameters of particular interest are the Reynolds number and the power number, non-dimensional quantities used in both scale-up and scale-down of mixing processes. By understanding the relationship between the power number and the Reynolds number, it becomes possible to predict the power requirements for a given flow regime and impeller configuration. This knowledge is crucial for designing and operating agitated systems at different scales while maintaining consistent mixing performance. References Bioreactors Biotechnology Biochemical engineering Biological engineering
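To make the scaling quantities in the entry above concrete, here is a minimal Python sketch of the impeller Reynolds number, Re = rho*N*D^2/mu, and the power draw, P = Np*rho*N^3*D^5, derived from the power number. All numerical values (broth properties, vessel sizes, speeds, and a power number of 5.0 typical of a turbulent Rushton-type impeller) are illustrative assumptions, not data from the text:

    # Illustrative scale comparison for a stirred-tank bioreactor.
    # All numbers below are assumed example values, not measured data.
    RHO = 1000.0  # broth density, kg/m^3 (water-like, assumed)
    MU = 0.001    # broth viscosity, Pa*s (water-like, assumed)
    NP = 5.0      # power number, dimensionless (typical turbulent value, assumed)

    def impeller_reynolds(n_rps: float, d_m: float) -> float:
        """Impeller Reynolds number Re = rho * N * D^2 / mu."""
        return RHO * n_rps * d_m**2 / MU

    def power_draw(n_rps: float, d_m: float) -> float:
        """Power draw P = Np * rho * N^3 * D^5 (valid in the turbulent regime)."""
        return NP * RHO * n_rps**3 * d_m**5

    for label, n, d in [("lab scale (2 L)", 10.0, 0.05),
                        ("production scale (20 m^3)", 2.0, 1.0)]:
        print(f"{label}: Re = {impeller_reynolds(n, d):.2e}, "
              f"P = {power_draw(n, d):.1f} W")

Both assumed cases land in the turbulent regime (Re > 10^4), where the power number is approximately constant, which is why P scales with N^3 D^5.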
Scale-down bioreactor
[ "Chemistry", "Engineering", "Biology" ]
1,284
[ "Bioreactors", "Biological engineering", "Chemical reactors", "Chemical engineering", "Biochemical engineering", "Microbiology equipment", "Biotechnology", "nan", "Biochemistry" ]
77,096,028
https://en.wikipedia.org/wiki/NGC%205885
NGC 5885 is an intermediate barred spiral galaxy located in the constellation Libra. Its velocity relative to the cosmic microwave background is 2,185 ± 13 km/s, which corresponds to a Hubble distance of 32.3 ± 2.3 Mpc (~105 million ly). NGC 5885 was discovered by the German-British astronomer William Herschel in 1784. The luminosity class of NGC 5885 is III and it has a broad HI line. It also contains regions of ionized hydrogen. With a surface brightness of 14.39 mag/arcmin², NGC 5885 qualifies as a low surface brightness galaxy (LSB). LSB galaxies are diffuse galaxies whose surface brightness is at least one magnitude fainter than that of the ambient night sky. To date, 11 non-redshift measurements yield a distance of 22.055 ± 5.687 Mpc (~71.9 million ly), which is outside the range of Hubble distance values. Note that the NASA/IPAC database calculates the diameter of a galaxy from the average of independent distance measurements, when they exist; consequently, the diameter of NGC 5885 would be approximately 37.5 kpc (~122,000 ly) if the Hubble distance were used to calculate it instead. See also List of NGC objects (5001–6000) List of spiral galaxies New General Catalogue References External links NGC 5885 at NASA/IPAC NGC 5885 at SIMBAD NGC 5885 at LEDA Spiral galaxies Barred spiral galaxies Libra (constellation) 5885 Discoveries by William Herschel Astronomical objects discovered in 1784 Low surface brightness galaxies
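The Hubble distance quoted above follows directly from Hubble's law, d = v/H0. A minimal Python sketch, assuming H0 ≈ 67.8 km/s/Mpc (a value chosen to reproduce the article's figure; the entry itself does not state which H0 was used):

    # Hubble-law distance estimate d = v / H0.
    # H0 = 67.8 km/s/Mpc is an assumed value chosen to match the quoted distance.
    H0 = 67.8           # km/s/Mpc
    MPC_TO_MLY = 3.262  # million light-years per megaparsec

    v = 2185.0          # km/s, velocity of NGC 5885 relative to the CMB (from the text)
    d_mpc = v / H0
    print(f"{d_mpc:.1f} Mpc = {d_mpc * MPC_TO_MLY:.0f} million light-years")
    # -> roughly 32.2 Mpc, i.e. about 105 million light-years, matching the entry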
NGC 5885
[ "Astronomy" ]
338
[ "Libra (constellation)", "Constellations" ]
77,096,209
https://en.wikipedia.org/wiki/NGC%201024
NGC 1024 is a large spiral galaxy of type Sab located in the constellation Aries. Its velocity relative to the cosmic microwave background is 3,306 ± 16 km/s, which corresponds to a Hubble distance of 48.8 ± 3.4 Mpc (~159 million light-years). NGC 1024 was discovered by the German-British astronomer William Herschel in 1786. NGC 1024 was used in the Atlas of Peculiar Galaxies as an example of a "motley" galaxy. The luminosity class of NGC 1024 is I-II and it has a broad HI line. With a surface brightness of 14.02 mag/arcmin², NGC 1024 qualifies as a low surface brightness (LSB) galaxy. LSB galaxies are diffuse galaxies whose surface brightness is at least one magnitude fainter than that of the ambient night sky. To date, five non-redshift measurements yield a distance of 46.260 ± 3.155 Mpc (~151 million ly), which is within the Hubble distance range. NGC 1024 Group NGC 1024 is the largest and brightest of a small group of three galaxies named after it. The other two galaxies in the NGC 1024 group are NGC 990 and NGC 1029. NGC 1024 and NGC 1029 also form a pair of galaxies. See also List of NGC objects (1001–2000) New General Catalogue External links NGC 1024 at NASA/IPAC NGC 1024 at SIMBAD NGC 1024 at LEDA References 1024 Discoveries by William Herschel Aries (constellation) Spiral galaxies
NGC 1024
[ "Astronomy" ]
332
[ "Aries (constellation)", "Constellations" ]
77,096,431
https://en.wikipedia.org/wiki/Escherichia%20coli%20BL21%28DE3%29
Escherichia coli BL21(DE3) is a commonly used protein production strain of the E. coli bacterium. This strain combines several features that allow for high-level overexpression of heterologous proteins. It is derived from the B lineage of E. coli. Naming The genotype of this strain is designated E. coli B F– ompT gal dcm lon hsdSB(rB–mB–) λ(DE3 [lacI lacUV5-T7p07 ind1 sam7 nin5]) [malB+]K-12(λS). Characteristics Decreased proteolysis The proteolysis of heterologously expressed proteins is reduced due to the functional deficiency of two major proteases, Lon and OmpT. Lon is usually present in the cytoplasm of the cell, but in all B strains its production is prevented by an insertion within the promoter sequence. OmpT is located in the outer membrane but is absent in B strains due to a deletion. Expression induction While E. coli BL21(DE3) supports the expression of genes under the control of constitutive promoters, it is specifically engineered for IPTG induction of recombinant genes under the control of a T7 promoter. The realized induction strength depends on several factors, including the IPTG concentration and the timing of its supplementation. This function is enabled by the presence of a recombinant λ-prophage (DE3). DE3 carries a T7 RNA polymerase (RNAP) gene under the control of a lacUV5 promoter (lacUV5-T7 gene 1). T7 RNAP is highly specific to the T7 promoter and orthogonal to native E. coli promoters. Therefore, T7 RNAP transcribes only (exogenously introduced) genes that are regulated by a T7 promoter. The lacUV5 promoter is derived from the E. coli wild-type lac promoter but exhibits increased transcription strength due to two mutations that facilitate its interaction with a native E. coli RNAP σ-factor. In E. coli BL21(DE3) the expression of T7 RNAP is suppressed by the constitutively expressed LacI repressor. LacI binds the lac operator, which is located downstream of the lacUV5 promoter, preventing the production of T7 RNAP. Upon supplementation of IPTG, however, the LacI repressor dissociates from the lac operator, allowing for the expression of T7 RNAP. Subsequently, T7 RNAP can initiate the transcription of a recombinant gene under T7 promoter control. Other DE3 modifications ensure stable integration of the prophage in the genome and prevent the prophage from entering the lytic cycle (ind1, sam7, and nin5). Facilitated cloning E. coli BL21(DE3) lacks a functional type I restriction-modification system, indicated by hsdS(rB− mB−). Specifically, both the restriction (hsdR) and modification (hsdM) functions are inactive. This enhances transformation efficiency, since exogenously introduced unmethylated DNA is not targeted by the restriction-modification system. The dcm gene is also inactive, preventing the methylation of a cytosine on both strands within the recognition sequence 5'-CC(A/T)GG-3'. This facilitates further processing of purified DNA, as Dcm methylation prevents cleavage by certain restriction enzymes. References Escherichia coli
Escherichia coli BL21(DE3)
[ "Biology" ]
756
[ "Model organisms", "Escherichia coli" ]
77,097,363
https://en.wikipedia.org/wiki/Gary%20Patti
Gary J. Patti is an American biochemist known for his research in metabolism and for using mass spectrometry to characterize biological processes. He is the Michael and Tana Powell Professor at Washington University in St. Louis. He is co-founder and Chief Scientific Officer of Panome Bio and an Associate Editor for Clinical & Translational Metabolism. Awards Biemann Medal, 2024 ACS Midwest Award, 2023 Academy of Science Innovation Award, 2016 Edward Mallinckrodt Jr. Scholar Award, 2016 Pew Biomedical Scholars Award, 2015 Alfred P. Sloan Award, 2014 Camille Dreyfus Teacher-Scholar Award, 2014 References External links Year of birth missing (living people) Living people Washington University in St. Louis faculty 21st-century American chemists Mass spectrometrists Biochemistry Metabolism Cancer
Gary Patti
[ "Physics", "Chemistry", "Biology" ]
163
[ "Spectrum (physical sciences)", "Mass spectrometrists", "Mass spectrometry", "Cellular processes", "nan", "Biochemistry", "Biochemists", "Metabolism" ]
77,097,972
https://en.wikipedia.org/wiki/Stanley%20J.%20Cristol
Stanley Jerome Cristol (June 14, 1916 – January 23, 2008) was an American organic chemist. A chemistry professor and long-time faculty member of the University of Colorado Boulder Department of Chemistry and Biochemistry, he was named Chair of the American Chemical Society Colorado Section in 1952, was a two-time Guggenheim Fellowship awardee in 1955 and 1980, and was elected to the National Academy of Sciences in 1972. Born in Chicago, Illinois, Cristol began working on isolating insecticide compounds for the United States Department of Agriculture shortly after graduating from the University of California, Los Angeles. He taught chemistry in various roles at the University of Colorado Boulder from 1946 until his retirement in 1986, researching elimination and addition reactions and polycyclic compounds. Education He received a Bachelor of Science with highest distinction from Northwestern University in 1937 and received his Ph.D. in organic chemistry with honors from the University of California, Los Angeles in 1943. At the time of his graduation, he was a member of Phi Beta Kappa, having been elected during his education at Northwestern University. He spent the next year at the University of Illinois Urbana-Champaign as a postdoctoral researcher with Roger Adams, after which he began the first major work of his post-graduation career at the United States Department of Agriculture (USDA) lab in Beltsville, Maryland. Career Cristol started working with insecticides for the USDA at Beltsville in 1944. DDT had been a known chemical for decades at that point, though its insecticidal properties were only discovered in 1939 by Paul Hermann Müller. Cristol was tasked with isolating impurities in insecticide samples, identifying the most potent compound as the p,p′-DDT isomer. Once his isolation work was completed, Cristol began investigating elimination reactions with DDT and similar analogues in an effort to clarify the mechanism of the insecticides. He did not discover any correlation between rate of reaction and insecticidal activity, but his work was later praised for its "clarity and rigor". In 1946, he left Beltsville and was appointed assistant professor in the University of Colorado Department of Chemistry. In 1950, he was involved in an incident in which the flow of a fume hood was accidentally reversed during an experiment, blowing phosgene gas into the room and hospitalizing four students. He was not promoted to full professor until 1955. In that year, he became a Guggenheim fellow at the California Institute of Technology, ETH Zurich, and University College London. His research ranged from elimination reactions involving chloride ions to acid-catalyzed addition reactions and syntheses of polycyclic molecules. While on the board of editors of the Journal of Organic Chemistry, he was a recipient of the first Colorado Section Award in Chemistry, established in 1966 and awarded in 1967. Later, he received the James Flack Norris Award for his research on the chemistry of small ring compounds in 1972, the same year he was elected to the National Academy of Sciences. Personal life Stanley Cristol married Barbara Wright Swingle in 1957. She had three children prior to their marriage and five in total. The family was known to be interested in skiing. Cristol retired in 1986 and lived in Durango, Colorado until his death in 2008.
Selected publications References 1916 births 2008 deaths 20th-century American chemists University of Colorado Boulder faculty American organic chemists Members of the United States National Academy of Sciences Scientists from Chicago
Stanley J. Cristol
[ "Chemistry" ]
708
[ "Organic chemists", "American organic chemists" ]
77,099,254
https://en.wikipedia.org/wiki/Chile%20Architecture%20Biennial
The Chilean Architecture and Urbanism Biennial is a significant event that has been organized by the Chilean Association of Architects since 1977. It aims to create a space for meeting, reflection, and the exchange of ideas about architectural work, serving as a showcase for the best architectural and urban projects of the preceding two years. Exhibitions Since its inaugural edition, the biennial has been held in various cultural spaces in Santiago. In 2015, the event took place in Valparaíso, making it the first edition held outside the Chilean capital. Since 2015, the curators for the event have been selected through an open call launched by the Chilean Association of Architects. Seven years later, the Ministry of Cultures, Arts, and Heritage joined the selection process for pavilion proposals. The 2022 edition, entitled Vulnerable Habitats (Hábitats vulnerables), was postponed until January 2023 and featured several installations around La Moneda Palace in Santiago. These installations included designs by Smiljan Radić and Nicolás Schmidt, the reconstruction of a pavilion originally designed by Montserrat Palmer in 1972, and temporary structures designed by Jean Araya and Miguel Casassus, as well as by Low Estudio. Editions See also Architecture of Chile Chicago Architecture Biennial Architecture Biennial References Architecture festivals Festivals in Chile Festivals established in 1977 Architecture in Chile
Chile Architecture Biennial
[ "Engineering" ]
257
[ "Architecture festivals", "Architecture" ]
77,099,904
https://en.wikipedia.org/wiki/Brahmagupta%20triangle
A Brahmagupta triangle is a triangle whose side lengths are consecutive positive integers and whose area is a positive integer. The triangle with side lengths 3, 4, 5 is a Brahmagupta triangle, and so is the triangle with side lengths 13, 14, 15. The Brahmagupta triangle is a special case of the Heronian triangle, a triangle whose side lengths and area are all positive integers but whose side lengths need not be consecutive integers. The Brahmagupta triangle is named in honor of the Indian astronomer and mathematician Brahmagupta (c. 598 – c. 668 CE), who gave a list of the first eight such triangles without explaining the method by which he computed the list. A Brahmagupta triangle is also called a Fleenor-Heronian triangle in honor of Charles R. Fleenor, who discussed the concept in a paper published in 1996. Other names by which Brahmagupta triangles are known are super-Heronian triangle and almost-equilateral Heronian triangle. The problem of finding all Brahmagupta triangles is an old one; a closed-form solution was found by Reinhold Hoppe in 1880. Generating Brahmagupta triangles Let the side lengths of a Brahmagupta triangle be $t-1$, $t$ and $t+1$, where $t$ is an integer greater than 1. Using Heron's formula, the area $A$ of the triangle can be shown to be
$A = \frac{t}{4}\sqrt{3(t^2-4)}$.
Since $A$ has to be an integer, $t$ must be even and so it can be taken as $t = 2x$ where $x$ is an integer. Thus,
$A = x\sqrt{3(x^2-1)}$.
Since $A$ has to be an integer, one must have $x^2 - 1 = 3y^2$ for some integer $y$. Hence $x$ must satisfy the following Diophantine equation:
$x^2 - 3y^2 = 1$.
This is an example of the so-called Pell's equation $x^2 - Ny^2 = 1$ with $N = 3$. The methods for solving Pell's equation can be applied to find values of the integers $x$ and $y$. Obviously $x = 2$, $y = 1$ is a solution of the equation $x^2 - 3y^2 = 1$. Taking this as an initial solution, the set of all solutions of the equation can be generated using the recurrence relations
$x_{n+1} = 2x_n + 3y_n, \quad y_{n+1} = x_n + 2y_n$,
or by the relations
$x_{n+1} = 4x_n - x_{n-1}, \quad y_{n+1} = 4y_n - y_{n-1}$.
They can also be generated using the following property:
$x_n + y_n\sqrt{3} = (2 + \sqrt{3})^n$.
The following are the first eight values of $x_n$ and $y_n$ and the corresponding Brahmagupta triangles:
{| class="wikitable"
|-
! $n$ !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8
|-
| $x_n$ || 2 || 7 || 26 || 97 || 362 || 1351 || 5042 || 18817
|-
| $y_n$ || 1 || 4 || 15 || 56 || 209 || 780 || 2911 || 10864
|-
| Brahmagupta triangle || 3, 4, 5 || 13, 14, 15 || 51, 52, 53 || 193, 194, 195 || 723, 724, 725 || 2701, 2702, 2703 || 10083, 10084, 10085 || 37633, 37634, 37635
|}
The sequences $\{x_n\}$ and $\{y_n\}$ are both catalogued in the On-Line Encyclopedia of Integer Sequences (OEIS). Generalized Brahmagupta triangles In a Brahmagupta triangle the side lengths form an integer arithmetic progression with a common difference of 1. A generalized Brahmagupta triangle is a Heronian triangle in which the side lengths form an arithmetic progression of positive integers. Generalized Brahmagupta triangles can be easily constructed from Brahmagupta triangles: if $a$, $b$, $c$ are the side lengths of a Brahmagupta triangle then, for any positive integer $k$, the integers $ka$, $kb$, $kc$ are the side lengths of a generalized Brahmagupta triangle which form an arithmetic progression with common difference $k$. There are generalized Brahmagupta triangles which are not generated this way. A primitive generalized Brahmagupta triangle is a generalized Brahmagupta triangle in which the side lengths have no common factor other than 1. To find the side lengths of such triangles, let the side lengths be $b-d$, $b$, $b+d$, where $b$ and $d$ are integers satisfying $b > 2d > 0$. Using Heron's formula, the area $A$ of the triangle can be shown to be
$A = \frac{b}{4}\sqrt{3(b^2 - 4d^2)}$.
For $A$ to be an integer, $b$ must be even and one may take $b = 2x$ for some integer $x$. This makes
$A = x\sqrt{3(x^2 - d^2)}$.
Since, again, $A$ has to be an integer, $x^2 - d^2$ has to be of the form $3y^2$ for some integer $y$. Thus, to find the side lengths of generalized Brahmagupta triangles, one has to find solutions of the following homogeneous quadratic Diophantine equation:
$x^2 - 3y^2 = d^2$.
Primitive solutions of this equation can be obtained from standard parametrizations of such conics in terms of two relatively prime integers. For example, the solution $(x, y, d) = (2, 1, 1)$ gives the Brahmagupta triangle $(3, 4, 5)$; the solution $(x, y, d) = (7, 4, 1)$ gives the Brahmagupta triangle $(13, 14, 15)$; but the solution $(x, y, d) = (13, 4, 11)$ gives the generalized Brahmagupta triangle $(15, 26, 37)$, which cannot be reduced to a Brahmagupta triangle. See also Brahmagupta polynomials Brahmagupta quadrilateral References Arithmetic problems of plane geometry Types of triangles Eponymous geometric shapes Elementary mathematics Elementary number theory Brahmagupta
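As a quick check of the recurrences above, the following Python sketch generates the first eight Pell solutions and the corresponding Brahmagupta triangles, using the fact that the area simplifies to $3xy$ (since $A = x\sqrt{3(x^2-1)}$ and $x^2 - 1 = 3y^2$):

    # Generate Brahmagupta triangles from solutions of the Pell equation
    # x^2 - 3*y^2 = 1, using the recurrence x' = 2x + 3y, y' = x + 2y.
    x, y = 2, 1  # fundamental solution
    for n in range(1, 9):
        t = 2 * x
        area = 3 * x * y  # A = x*sqrt(3(x^2 - 1)) = 3xy since x^2 - 1 = 3y^2
        print(f"n={n}: sides ({t - 1}, {t}, {t + 1}), area {area}")
        x, y = 2 * x + 3 * y, x + 2 * y

Running this reproduces the table above, e.g. n=2 gives sides (13, 14, 15) with area 84.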
Brahmagupta triangle
[ "Mathematics" ]
1,026
[ "Elementary number theory", "Arithmetic problems of plane geometry", "Euclidean plane geometry", "Elementary mathematics", "Planes (geometry)", "Number theory" ]
77,100,550
https://en.wikipedia.org/wiki/John%20Read%20Cronin
John Read Cronin (August 3, 1936 – June 30, 2010) was an American biochemist and organic geochemist renowned for his pioneering research in the field of meteoritic organic chemistry. His work significantly advanced the understanding of the role of extraterrestrial organic molecules in the origin of life. Early life John Read Cronin was born on August 3, 1936, in Marietta, Ohio. He grew up in New Philadelphia, Ohio, where he developed an early interest in science and nature. Cronin's fascination with chemistry and the natural world led him to pursue a career in biochemistry. Education Cronin attended The College of Wooster, where he obtained his undergraduate degree in chemistry. He then went on to earn a Ph.D. in biochemistry from the University of Colorado School of Medicine in Denver. His doctoral research laid the foundation for his later work in organic chemistry and prebiotic chemistry. Career In 1966, Cronin joined the faculty at Arizona State University (ASU) as a professor of biochemistry. At ASU, he became involved in the emerging field of exobiology, focusing on the study of organic materials in extraterrestrial environments. His work at the ASU Center for Meteorite Studies, particularly with carbonaceous chondrite meteorites, positioned him as a leading figure in the field. Cronin's research explored the organic chemistry of meteorites, with a specific focus on carbonaceous chondrites like the Murchison meteorite. His work provided valuable insights into the diversity and complexity of extraterrestrial organic compounds and their potential role in the origin of life on Earth. Cronin worked closely with Sandra Pizzarello, with whom he made a number of important discoveries. The ASU Center for Meteorite Studies has highlighted the significance of Cronin's findings and contributions. Because the scientific consensus at the time was skeptical about the presence of amino acids in meteorites, Cronin and his colleagues conducted independent tests using different analytical techniques to detect amino acids in various meteorites, including Murchison, Murray, and Allende. Their findings showed that Murchison and Murray contained amino acids while Allende did not, indicating that the detections were not the result of contamination. This led Cronin and his team to further study the organics present in meteorites. They identified various compounds, including carboxylic acids, complex amino acids, and aliphatic hydrocarbons, in part using nuclear magnetic resonance. The team also collaborated with Samuel Epstein of Caltech to examine the isotopic signatures of organic molecules in meteorites, which further supported their extraterrestrial origin. Cronin and Pizzarello also discovered that some meteoritic organic molecules carried a chiral asymmetry acquired before they fell to Earth, which might have originated in the interstellar medium. This research is significant because the exclusively left-handed nature of life's molecules is essential for the structures and functions of terrestrial biopolymers and is assumed to be crucial for the emergence of life. Research contributions Organic compounds in meteorites Cronin's extensive analysis of carbonaceous chondrite meteorites revealed a rich diversity of organic molecules, including amino acids, hydrocarbons, and nucleobases. His research demonstrated that these meteorites contain complex organic compounds that could have been significant in prebiotic chemistry.
Key publication: Chirality and enantiomeric excess Cronin's research on the chirality of meteoritic amino acids provided evidence of non-racemic mixtures, suggesting a potential extraterrestrial source of chiral asymmetry. This finding has implications for the development of homochirality in biological molecules on Earth. Key publication: Isotopic composition of organic molecules Cronin conducted isotopic analyses of meteoritic organic compounds, revealing distinct isotopic compositions that supported their non-terrestrial origin. This work provided crucial insights into the extraterrestrial sources of prebiotic molecules, including significant published work on the Murchison meteorite. Key publication: Impact chemistry and prebiotic synthesis Cronin's research explored how meteorite impacts could synthesize organic compounds from simpler precursors, highlighting the potential role of impact-generated environments in prebiotic chemistry. Key publication: Additional selected publications References External links Arizona State University (https://search.asu.edu/profile/47413) Obituary (https://www.legacy.com/us/obituaries/azcentral/name/john-cronin-obituary?id=22397144) Biochemists Arizona State University faculty Meteorites Prebiotic chemistry 1936 births 2010 deaths
John Read Cronin
[ "Chemistry", "Biology" ]
946
[ "Origin of life", "Biochemistry", "Prebiotic chemistry", "Biological hypotheses", "Biochemists" ]
71,141,502
https://en.wikipedia.org/wiki/AP5Z1
AP-5 complex subunit zeta (AP5Z1) is a protein that in humans is encoded by the AP5Z1 gene. Function The protein encoded by this gene is one of two large subunits of the AP5 adaptor complex. Damaging variants in this gene are associated with SPG48, a type of hereditary spastic paraplegia. References
AP5Z1
[ "Chemistry" ]
75
[ "Biochemistry stubs", "Protein stubs" ]
71,143,046
https://en.wikipedia.org/wiki/Stephen%20Childress
William Stephen Childress is an American applied mathematician, author and professor emeritus at the Courant Institute of Mathematical Sciences. He works on classical fluid mechanics, asymptotic methods and singular perturbations, geophysical fluid dynamics, magnetohydrodynamics and dynamo theory, mathematical models in biology, and locomotion in fluids. He is also a co-founder of the Courant Institute of Mathematical Sciences's Applied Mathematics Lab. Published books 1977: Mechanics of Swimming and Flying. 1978: Mathematical models in developmental biology, with Jerome K. Percus. 1987: Topics in Geophysical Fluid Dynamics: Atmospheric Dynamics, Dynamo Theory, and Climate Dynamics, with M. Ghil. 1995: Stretch, Twist, Fold: The Fast Dynamo, with Andrew D. Gilbert. 2009: An Introduction to Theoretical Fluid Mechanics. 2012: Natural Locomotion in Fluids and on Surfaces: Swimming, Flying, and Sliding, edited with Anette Hosoi, William W. Schultz, and Jane Wang. 2018: Construction of Steady-state Hydrodynamic Dynamos. I. Spatially Periodic Fields. Recognition 1976 Guggenheim Fellowship for Natural Sciences, US & Canada 2008 Fellow of American Physical Society References External links William Stephen Childress's home page American mathematicians Fellows of the American Physical Society Year of birth missing (living people) Living people
Stephen Childress
[ "Mathematics" ]
287
[ "Applied mathematics", "Applied mathematics stubs" ]
71,145,893
https://en.wikipedia.org/wiki/Pavel%20Konzbul
Pavel Konzbul (born 17 October 1965 in Brno-Juliánov) is the current Bishop of the Roman Catholic Diocese of Brno in the Czech Republic. After his high school studies, Konzbul graduated from the Faculty of Electrical Engineering and Communication of the Brno University of Technology and worked as a researcher in the field of electrical engineering. In 1995 he began his studies at the Cyril and Methodius Theological Faculty of Palacký University in Olomouc, which he completed in 2000. In 2003 he received priestly ordination. He then worked as a parish vicar in Boskovice and Svitávka, in Hustopeče near Brno, and in Starovičky, and as an excurrendo administrator in Starovice (all in Moravia). In 2013 he was appointed parish priest of the parish at the Cathedral of Saints Peter and Paul in Brno. On 21 May 2016, he was appointed Titular Bishop of Litomyšl and auxiliary Bishop of Brno by Pope Francis. On 29 June 2016 he received episcopal ordination in the Cathedral of Saints Peter and Paul in Brno from his diocesan bishop, Vojtěch Cikrle. On 26 May 2022 Pope Francis appointed him the 14th diocesan Bishop of Brno. In the Czech Bishops' Conference, Bishop Konzbul is a member of the Commission for the Priesthood and the Commission for Catholic Education. He is the author of religious publications intended primarily for young people. References 1965 births Living people 21st-century Roman Catholic bishops in the Czech Republic Clergy from Brno Czech Roman Catholic writers Electrical engineers Brno University of Technology alumni Palacký University Olomouc alumni
Pavel Konzbul
[ "Engineering" ]
337
[ "Electrical engineering", "Electrical engineers" ]
71,147,390
https://en.wikipedia.org/wiki/Wesseler%20W%2017
The Wesseler W 17 is an agricultural tractor made by H. Wesseler OHG. It is the firm's smallest two-cylinder model and was made from 1954 until 1956. In the Netherlands, the tractor was sold under the Vewema brand. Technical description The W 17 is based on a frameless design with the rear axle and front axle assembly directly flange-mounted to the engine. The rear axle is a ZF A-5/5 (or ZF A-5/6) transmission unit consisting of a portal axle and a constant-mesh gearbox with either five or six forward gears, and a regular reverse gear. The six-speed versions of the gearbox have an additional 1.5 km/h crawler gear below first gear, but are otherwise identical to the five-speed units, i.e. they have the same gearing and thus offer the exact same speeds. The transmission unit is designed for a set maximum continuous input torque and was available with a differential lock and a PTO. Since there is no drive shaft to the front axle, all W 17 tractors are rear-wheel drive only. The front axle is an unsprung stub axle with a central longitudinal pivot point. In front, the tractor has 4.5–16 inch tyres; in back, it has 7–24 inch tyres. The rear wheels are fitted with mechanically operated duplex drum brakes. With factory-new 7–24 inch rear tyres, the tractor reaches its top speed at the rated engine speed of 2000/min. Wesseler installed an MWM KD 211 Z industrial diesel engine in the W 17. The KD 211 Z is a two-cylinder, naturally aspirated four-stroke diesel engine. It has a crossflow cylinder head with two overhead valves, and swirl-chamber injection, allowing a low injection pressure. The water pump is flange-mounted to the head and driven by a belt from the engine's crankshaft. In its default configuration, the engine has a rated fuel consumption of 200 g/(PS·h) (ca. 272 g/(kW·h); 0.447 lb/(hp·h)). The rated engine power (DIN 70020) is reached at 2200/min, and the rated continuous power output (DIN 6270 B) at 2000/min. The engine can be hand-cranked. References Tractors
Wesseler W 17
[ "Engineering" ]
519
[ "Engineering vehicles", "Tractors" ]
71,147,645
https://en.wikipedia.org/wiki/Harold%20E.%20Cox
Harold Eugene Cox (1931 – 2021) was Professor of History Emeritus and University Archivist at Wilkes University, Pennsylvania, where he served for over 52 years, including as chair of the university's Department of History. In 2015, the university renamed one of its buildings Dr. Harold Cox Hall. Cox specialized in the history of 19th and 20th century urban transportation and historical transportation maps. In 1976, he was an editor for the Pennsylvania Historical Association's journal, Pennsylvania History. In 1996, Cox inquired about 19th century election statistics for Pennsylvania, only to find that the data would cost $1,000 to produce. He then organized and created the Wilkes University Election Statistics Project as a free online resource documenting Pennsylvania political election results dating back to 1796. The project has been cataloged by the Pennsylvania State University Libraries and the Van Pelt Library at the University of Pennsylvania. It has been cited as a source in academic books about the Supreme Court of the United States, Communist politicians in Pennsylvania, and a survey of state-level political parties. Cox served in the United States Army from 1954 to 1984, retiring as a command sergeant major. Cox died in 2021 and was eulogized as a "true renaissance man" with passions and interests ranging from politics to creative writing, trolleys, and LGBTQ rights. He was interred at Spring Hill Cemetery, Lynchburg, Va. Bibliography and collections Journals Cox, Harold E. "Jim Crow in the City of Brotherly Love: The Segregation of Philadelphia Horse Cars." Negro History Bulletin 26.3 (1962): 119–123. Cox, Harold E. ""Daily Except Sunday": Blue Laws and the Operation of Philadelphia Horsecars." The Business History Review 39.2 (1965): 228–242. Print. Cox, Harold E., and John F. Meyers. "The Philadelphia Traction Monopoly and the Pennsylvania Constitution of 1874: The Prostitution of an Ideal." Pennsylvania History: A Journal of Mid-Atlantic Studies 35.4 (1968): 406–423. Cox, Harold E. "The Wilkes-Barre Street Railway Strike of 1915." The Pennsylvania Magazine of History and Biography 94.1 (1970): 75–94. Print. Books and published materials Cox, Harold E. PCC Cars of North America. Philadelphia: J.W. Boorse Jr., 1963. Print. Cox, Harold E. The Tram Subways of Philadelphia: A History and a Forward Look. Light Railway Transport League, London: W.J. Fowler & Sons, Ltd., 1964. Print. Cox, Harold E. Surface Cars of Philadelphia, 1911–1965. Forty Fort, Pa., 1965. Internet resource. Cox, Harold E. The Birney Car. Forty Fort, Pa., 1966. Internet resource. Charlton, Elbridge H., and Harold E. Cox. Electric Railway Car Trucks. Forty Fort, Pa.: H.E. Cox, 1967. Print. Cox, Harold E., and Jack May. The Road from Upper Darby: The Story of the Market Street Subway-Elevated. Forty Fort, PA (80 Virginia Terr., Forty Fort 18704): H.E. Cox, 1967. Foesig, Harry, and Harold E. Cox. Trolleys of Montgomery County, Pennsylvania. Forty Fort, Pa.: Harold E. Cox, 1968. Print. Cox, Harold E. Early Electric Cars of Philadelphia, 1885–1911. Forty Fort, Pa., 1969. Print. Cox, Harold E. The Fairmount Park Trolley: A Unique Philadelphia Experiment. Forty Fort, Pa., 1970. Cox, Harold E. Utility Cars of Philadelphia, 1892–1971. Forty Fort, Pa., 1972. Print. Bowman, Stanley F., and Harold E. Cox. Trolleys of Chester County, Pennsylvania. 1975. Print. Hudson, Alvin W., and Harold E. Cox. Street Railways of Birmingham. Birmingham, Ala.: Printed and sold by Alvin W. Hudson, 1976. Print. Schieck, Paul J., and Harold E. Cox.
West Branch Trolleys: Street Railways of Lycoming & Clinton Counties. Forty Fort, Pa.: H.E. Cox, 1978. Print. Cox, Harold E. Early Electric Cars of Baltimore. Forty Fort, Pa.: The Author, 1979. Cox, Harold E. Philadelphia Car Routes: Horse, Cable, Electric. Forty Fort, Pa.: H.E. Cox, 1982. Print. Sachs, Bernard J., George F. Nixon, and Harold E. Cox. Baltimore Streetcars, 1905–1963: The Semi-Convertible Era. Baltimore: Baltimore Streetcar Museum, 1984. Print. Foesig, Harry, Barker Gummere, and Harold E. Cox. Trolleys of Bucks County, Pennsylvania. Forty Fort, PA: Harold E. Cox, 1985. Print. Cox, Harold E. Wyoming Valley Trolleys: Street Railways of Wilkes-Barre, Nanticoke and Pittston, Pennsylvania. Forty Fort, PA: H.E. Cox, 1988. Print. Cox, Harold E. Diamond State Trolleys: Electric Railways of Delaware. Forty Fort, PA: H.E. Cox, 1991. Print. Cox, Harold E., and Jack May. The Road from Upper Darby: The Story of the Market Street Subway-Elevated. Forty Fort, PA (80 Virginia Terr., Forty Fort 18704): H.E. Cox, 2000. Bailey, David C., Joseph M. Canfield, and Harold E. Cox. Trolleys in the Land of the Sky: Street Railways of Asheville, N.C. and Vicinity. Forty Fort, Pa.: Harold E. Cox, 2000. Print. Cox, Harold E. The Barber Car: Electric Traction's Ugly Duckling. Forty Fort, PA (80 Virginia Terrace, Forty Fort 18704): Printed and sold by H.E. Cox, 2002. Print. Cox, Harold E., Mary M. Adams, and Nancy Marion. Hill City Trolleys: Street Railways of Lynchburg, Va. Lynchburg, Va.: Blackwell Press, 2018. Print. Reports Cox, Harold E. Pennsylvania Election Statistics, 1789–2000. Wilkes-Barre, Pa.: Wilkes University, 1996. Collections Harold E. Cox transportation collection (Collection 3158), The Historical Society of Pennsylvania. Notes and references External links Hagley Museum and Library: John F. Tucker Transit Collection, Accession 2046. 1931 births 2021 deaths 20th-century American historians Rail transport American male non-fiction writers Railway historians 20th-century American male writers People from Lynchburg, Virginia
Harold E. Cox
[ "Technology" ]
1,334
[ "Science and technology studies", "History of science and technology", "History of technology" ]
71,148,012
https://en.wikipedia.org/wiki/Land%20bridges%20of%20Japan
Due to changes in sea level, Japan has at various times been connected to the continent by land bridges: with continental Russia to the north via the Sōya Strait, Sakhalin, and the Mamiya Strait, and with the Korean Peninsula to the southwest via the Tsushima Strait and Korea Strait. Land bridges also connected the Japanese Islands with each other. These land bridges enabled the migration of terrestrial fauna from the continent and their dispersal within Japan. Geological background Around 25 million years ago, the Sea of Japan began to open, separating Japan from the continent and giving rise to the Japanese island arc system of today. The Sea of Japan, as a back-arc basin, was open both to the northeast and to the southwest by 14 Ma, while marine transgression further contributed to the isolation and insulation of Japan. Due to the level of tectonic activity in the area and the significant subsidence of the Japanese Islands since the Miocene, exact quantification of historic sea level changes is problematic. Northern land bridge Based on current depths, a reduction in sea level would be sufficient to connect Hokkaidō with the mainland. The Sōya and Mamiya land bridges (sometimes referred to jointly as the Sakhalin land bridge) are thus thought to have been in place during most glacial periods. Western land bridge Given the minimum depth of the straits, and based in part on the appearance of Proboscidea in Japan, the Tsushima and Korea land bridges (sometimes referred to jointly as the Korean land bridge) are understood to have been in place at 1.2 Ma, 0.63 Ma, and 0.43 Ma. Kuril land bridge Sea-level reductions during the Quaternary have been insufficient to connect Hokkaidō with Kamchatka. The southern Kuril land bridge that connected Kunashiri and the Lesser Kurils to Hokkaidō during the Early Holocene was submerged by the rising sea level at around 6,000 BP. Seto land bridges Honshū, Shikoku, and Kyūshū are separated by shallow straits. Consequently, they were frequently connected together as a single land mass. Tsugaru land bridge The Tsugaru Strait is considerably deeper and represents a more significant faunal boundary, known as Blakiston's Line. The most recent age of the Tsugaru land bridge is uncertain. Ryūkyū land bridge The Ryūkyū Islands, separated by deeper straits still (the Tokara Gap), have been isolated from the main islands throughout the Quaternary. A land bridge temporarily connected Miyako-jima with Taiwan during the late Middle Pleistocene, allowing for the migration of the steppe mammoth (Mammuthus trogontherii). During this period, the Miyako Strait was deep enough to prevent the land bridge from reaching Okinawa Island. See also List of prehistoric mammals of Japan References Landforms of Japan Geology of Japan Historical geology Biogeography
Land bridges of Japan
[ "Biology" ]
570
[ "Biogeography" ]
71,148,331
https://en.wikipedia.org/wiki/Little%20Live%20%22Gotta%20Go%22%20Pets%20%28brand%29
Little Live "Gotta Go" Pets is a sub-brand manufactured by the Australian toy brand Moose Toys under its "Little Live" smart toy branding. The sub-brand is known for its highly interactive, technologically developed toys of plush animals, some of which use the toilet or engage in toilet humor, reportedly to get child buyers interested in potty training. The "Gotta Go" line of toys, consisting of a flamingo and a turtle in psychedelic colours, were immensely popular in 2021, becoming a sought-after Christmas gift by western shoppers. The toys were also controversial and met with criticism, including by Doug Walker of Nostalgia Critic, who called the toys "disgusting" after viewing a television commercial for the "Gotta Go Turdle" variant. "Gotta Go" toy line The Little Live Pets "Gotta Go" toy line began with a large plush "Little Live Gotta Go" flamingo printed in a psychedelic pink fabric with large anime-style eyes. The toy, named Sherbet (after the ice cream), had multiple functions so that its buyer could interact with it. It would "chat" (using smart cloud technology to record, store and play back its owner's voice), sing, dance, and make movements. The toy became more widely known, however, for its function whereby it would pass gas and make alarming noises; when placed on a plastic "potty" device, the toy would emit colourful fake feces into the potty that children could watch through the translucent "potty". These feces would be created from coloured sand that is fed down the toy's throat from a large scoop so that the process can be repeated. The toy was followed up with Shelbert the "Little Live Gotta Go Turdle" (a portmanteau of "turd" and "turtle") that was printed in purple psychedelic fabric and had similar functions to Sherbet the flamingo. Reception The "Gotta Go" toy line was praised by magazines, parenting groups and websites both for its crude humor and for its educational value in getting young children interested in potty training. Parenting website Romper included the toy as its primary potty training toy listed for children, saying of it, "look, no one said potty training was glamorous, and yes, you may find yourself owning a "Turdle" who poops a pink sand-like substance into its very own toilet. Honestly, whatever works." Hello! Magazine listed the toy as one of the best "poop-related toys" on the market for potty training in 2021. Parenting websites Scary Mommy and Made For Mums echoed this sentiment about the toys, with Kristen Mae noting that while she personally found the toy repulsive, it appeared to genuinely have children interested in learning how to use the bathroom. She said of her first experience viewing the flamingo variant of the toy, "It was a toilet. This bright pink flamingo was sitting on a tiny plastic toilet, getting ready to take a dump. Okay, cool, so the flamingo is going to pretend to poop. That’s cute. But WAIT. This … was something coming out of that thing’s butt?! As I watched with a mix of fascination and horror, a beige, gooey substance oozed from the flamingo’s bum and into the miniature toilet, which I only then registered was clear so as to enable the viewing of the expulsion of said gooey substance. The flamingo did it! “He [sic] poopin!!” my friend’s toddler squealed for the camera. That kid was positively fucking delighted." 
The toy line became one of the most popular 2021 Christmas holiday gifts for children according to New York Magazine, which stated, "In case you don't remember, last year kids went wild for a pooping pink flamingo that talks, and the year before that, rainbow unicorns that pooped glittery colorful slime were all the rage. This year, according to Appel, we can expect kiddos to go nuts for this turtle named Shelbert that eats, sings, dances, and talks while using the toilet." Made For Mums and T3 also listed the "Gotta Go" toy line among the best holiday toys of 2021-2022 for Christmas gifts. Criticism The "Gotta Go" toy line was also met with criticism from reviewers due largely to the crude humor aspect of the toys, as well as their bizarre physical appearance. Doug Walker of the web series Nostalgia Critic compared the toy line to other poorly received toy lines of the past that involved grotesque toilet humour, including "Magic Potty Baby" (a 1992 Tyco Toys brand doll that would leave yellow urine in a plastic pot that disappeared when "flushed"). Walker found the "Gotta Go" toy line "disgusting" and said about it, "Even the kid in the commercial is like, 'I have no idea why this is a thing!' Speaking of which, did you catch the name of this literal stinker? Gotta Go Turdle? Because of course it's called that! ... And it's bad enough that this thing drops a deuce but does it have to look like a Franken Berry tumor? It's gross, it's weird, and not in a good way. I hate this." iNews UK agreed with the criticism over the toy, but argued that such "gross-out" crude humour toys helped children to learn about difficult subjects like potty training and excretion. Totally the Bomb said of the toy, "the hottest toy for Christmas this year just may be the most disgusting toy you'll find in the store this holiday season", and argued that while the toy was gross, children did seem to enjoy it a great deal. Availability The "Gotta Go" toy line is primarily sold on Amazon, while also appearing at Wal-Mart, the Moose Toys website and Canadian retailers Indigo Books and Music and Canadian Tire. References Toilet training Flatulence humor Educational toys Electronic toys Toy franchises Toy brands Toy controversies Toy animals
Little Live "Gotta Go" Pets (brand)
[ "Biology" ]
1,248
[ "Excretion", "Toilet training" ]
71,152,318
https://en.wikipedia.org/wiki/Promethean%20gap
The Promethean gap (German: prometheisches Gefälle) is a concept concerning the relations of humans and technology and a growing "asynchronization" between them. In popular formulations, the gap refers to an inability or incapacity of human faculties to imagine the effects of the technologies that humans produce, specifically the negative effects. The concept originated with philosopher Günther Anders in the 1950s, and for him an extreme test case was the atomic bomb and its use at Hiroshima and Nagasaki in 1945, a symbol of the larger technology revolution that the 20th century was witnessing. The gap has been extended to and understood within multiple variations: a gap between production and ideology; production and imagination; production and need; production and use; technology and the body; doing and imagining; and doing and feeling. The gap can also be seen in areas such as law and in the actions of legislatures and policymakers. Various authors render Anders's German term differently in English, accordingly resulting in Promethean divide, Promethean disjunction, Promethean discrepancy, Promethean gradient, Promethean slope, Promethean decline, Promethean incline, Promethean disparity, Promethean lag, and Promethean differential. Origin Günther Anders (1902–1992), born in Germany and of Jewish descent, attempted to conceptualize the discrepancy between humans and technology based on his observations and hands-on experience as an émigré in the United States, and his general theoretical background in Marxist concepts such as substructure and superstructure. In the United States, he did various jobs. He was a tutor, a factory worker, and even a Hollywood costume designer. By the 1950s, conceptualizing this discrepancy had become an important and pervasive part of his writings, and it would remain a feature of his work until his death. In the 1980s he would go on to call his philosophy a philosophy of discrepancy. The first published usage of the phrase was in the first volume of Anders's book The Outdatedness of Human Beings (Die Antiquiertheit des Menschen), published in the German language in 1956. Anders uses exaggeration when explaining the concept of the Promethean gap and the associated concepts of Promethean shame (and pride), and states that there is a necessity and urgency for the exaggeration. Human "blindness" amidst the increasing gradient demanded it. The aim then became to expand humans' capacity and ability to imagine. The theme also runs through Burning Conscience (1961), a published exchange of letters between US airman Claude Eatherly and Anders. Anders considered the service members of the US Army Air Forces unit 509th Composite Group, which conducted the atomic bombings of Hiroshima and Nagasaki, and of which Eatherly was a part, as an example of people affected by the Promethean gap. Along with the atomic bombings, Auschwitz (representing the Holocaust) was an example from the same time period; both represented technology-enabled conditions of large-scale mechanized death, a new era which required conceptualizing as a basis of future prevention. Anders took these two examples of advances in civilization under the same umbrella of mechanization, noting that the atomic bombings and Auschwitz differed in a key point, the distance between the individuals involved, which accordingly influenced how he engaged with the atomic bombings. An increasingly networked technologization is growing in sophistication, in all its forms, faster than our human faculties can keep up: we are "unable to imagine the things we make", an inversion of the earlier relationship. 
Prometheus The word "Promethean" is taken from the Greek myth of Prometheus, to whom a number of stories, each with variations, are attached. Prometheus, a Titan and a trickster, created primitive versions of humanity. He created them in the image of the Greek gods; however, Zeus limited humanity's powers. Following this, Prometheus tricked Zeus at least twice. The first deception by Prometheus resulted in Zeus confiscating fire from humanity. Prometheus, in retaliation, stole fire from Mount Olympus and gave it back to humanity. When humanity flourished once again and Zeus saw that they had been given fire, he eternally punished Prometheus. Anders uses this story as symbolism, where the fire is modern technology and the eternal punishment given to Prometheus represents the negative consequences. The variations of the story converge on the gift of fire. Through this gift, humanity can now play its own tricks, for good or for bad. In variations of the story, Heracles unchains Prometheus, and the story of Pandora and her jar follows. References Citations and footnotes Works cited Primary sources Others External links Concepts in social philosophy Philosophy of technology Philosophical analogies Gap
Promethean gap
[ "Technology" ]
962
[ "Philosophy of technology", "Science and technology studies" ]
71,152,435
https://en.wikipedia.org/wiki/George%20Macdonald%20Medal
The George Macdonald Medal is jointly awarded by the Royal Society of Tropical Medicine and Hygiene and the London School of Hygiene & Tropical Medicine "to recognise outstanding contributions to tropical hygiene." The award was established in 1972 following the death of George Macdonald in 1967. It is awarded every three years and is given to "those in their mid-career or senior leaders in their field." Recipients Source: RSTMH 2020 Alex Ezeh and Sarah Cleaveland 2017 Ann Ashworth and Betty Kirkwood 2014 Richard Hayes and Rosanna Peeling 2011 David Mabey and Robert Snow 2008 Sandy Cairncross 2005 Allen Foster 2002 Anthony Harries 1999 Andrew M. Tomkins 1996 David J. Bradley 1996 Christopher F. Curtis 1993 Tore Godal 1990 Michael P. Alpers and C. E. Gordon Smith 1987 Kelsey A. Harrison 1984 Arnoldo Gabaldon 1984 John Waterlow 1981 Peter Jordan 1978 Leonard J. Bruce-Chwatt 1975 Donald A. Henderson 1972 George Davidson References London School of Hygiene and Tropical Medicine Royal Society of Tropical Medicine and Hygiene Medicine awards
George Macdonald Medal
[ "Technology" ]
213
[ "Science and technology awards", "Medicine awards" ]
71,153,172
https://en.wikipedia.org/wiki/Maserati%20V8%20engine
The Maserati V8 engine family is a series of 90°, four-stroke, naturally aspirated (later turbocharged) V8 engines, designed, developed, and built by the Italian manufacturer Maserati for almost 45 consecutive years. A racing variant first appeared in 1939, with the V8RI, and a road-going version was later introduced with the Maserati 5000 GT in 1959, the line ending with the Maserati 3200 GT in 2002. The engines were built in a range of displacements, and production continued until 2002. The family was later succeeded by (but is not to be confused with) the Ferrari-Maserati engine, a separate engine completely designed, developed, and produced by Ferrari but used in several Maserati models. Applications Maserati V8RI Maserati 5000 GT Maserati 450S Maserati Ghibli Maserati Bora Maserati Quattroporte I Maserati Quattroporte III Maserati Indy Maserati Mexico Maserati Kyalami Maserati Khamsin Maserati Shamal Maserati 3200 GT References Maserati Engines by model Gasoline engines by model V8 engines
Maserati V8 engine
[ "Technology" ]
244
[ "Engines", "Engines by model" ]
71,160,165
https://en.wikipedia.org/wiki/Lawrence%20Schkade
Lawrence L. Schkade (1930–2017) was an American information systems and management science researcher. Schkade was a native of Port Arthur, Texas, who earned his doctorate at Louisiana State University. He taught at the University of Texas at Austin and the University of North Texas before joining the University of Texas at Arlington. At UTA, he was Ashbel Smith Professor of Information Systems and Management Sciences, later held the Jenkins Garrett Professorship in Information Systems and Operations Management, and also served as dean of the College of Business Administration. Schkade was granted emeritus status in October 2004. He died on 25 November 2017, aged 87. References University of Texas at Arlington faculty 2017 deaths American university and college faculty deans Louisiana State University alumni Information systems researchers 1930 births Management scientists People from Port Arthur, Texas Scientists from Texas University of Texas at Austin faculty Business school deans University of North Texas faculty American business theorists
Lawrence Schkade
[ "Technology" ]
187
[ "Information systems", "Information systems researchers" ]
71,164,067
https://en.wikipedia.org/wiki/Pore%20structure
Pore structure is a common term employed to characterize the porosity, pore size, pore size distribution, and pore morphology (such as pore shape, surface roughness, and tortuosity of pore channels) of a porous medium. Pores are the openings in the surfaces of an otherwise impermeable porous matrix which gases, liquids, or even foreign microscopic particles can inhabit. The pore structure and fluid flow in porous media are intimately related. With micro- to nanoscale pore radii, complex connectivity, and significant heterogeneity, the complexity of the pore structure affects the hydraulic conductivity and retention capacity of these fluids. The intrinsic permeability is the attribute primarily influenced by the pore structure, and the fundamental physical factors governing fluid flow and distribution are the grain surface-to-volume ratio and grain shape. The idea that the pore space is made up of a network of channels through which fluid can flow is particularly helpful. Pore openings are the comparatively thin sections that divide the relatively large portions known as pore bodies. Other anatomical analogies include "belly" or "waist" for the broad region of a pore and "neck" or "throat" for the constrictive part. Pore bodies are the intergranular gaps with dimensions that are generally significantly smaller than those of the surrounding particles in a medium where textural pore space predominates, such as sand. On the other hand, a wormhole can be regarded as a single pore if its diameter is practically constant over its length. Such pores can have one of three types of boundaries: (1) constriction, which is a plane across the locally narrowest part of the pore space; (2) interface with another pore (such as a wormhole or crack); or (3) interface with solid. Porosity The proportion of empty space in a porous medium is called porosity. It is determined by dividing the volume of the pores or voids by the overall volume. It is expressed as a percentage or as a decimal fraction between 0 and 1. Porosity for the majority of rocks ranges from less than 1% to 40%. Porosity influences fluid storage in geothermal systems, oil and gas fields, and aquifers, making it evident that it plays a significant role in geology. Fluid movement and transport across geological formations, as well as the link between the bulk properties of the rock and the characteristics of particular minerals, are controlled by the size and connectivity of the porous structure. Measuring porosity The sample's total volume and pore space volume are measured in order to calculate the porosity. Measuring pore space volume A helium pycnometer is used to calculate the volume of the pores; it relies on Boyle's law (P1V1 = P2V2) and helium gas, which easily passes through tiny holes and is inert, to identify the solid fraction of a sample. The core is placed in a sample chamber of known volume. Pressure is applied to a reference chamber, also of known volume. The helium gas may then flow from the reference chamber to the sample chamber through the connection between the two chambers. The volume of the sample solid is calculated using the ratio between the starting and final pressures. The pore volume, as calculated by the helium pycnometer, is the difference between the total volume and the solid volume.
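As a worked illustration of the Boyle's-law procedure just described, the following Python sketch computes the solid volume, pore volume, and porosity from idealized pycnometer readings. It is a minimal sketch: it assumes gauge pressures and a sample cell that starts vented to the atmosphere, and all numerical inputs are hypothetical illustration values rather than data from this article.

```python
def porosity_from_pycnometer(v_ref, v_cell, p1, p2, v_bulk):
    """Estimate porosity from an idealized gas-expansion pycnometer.

    v_ref  -- reference chamber volume (cm^3)
    v_cell -- empty sample chamber volume (cm^3)
    p1     -- gauge pressure in the reference chamber before expansion (kPa)
    p2     -- gauge pressure after the valve between chambers is opened (kPa)
    v_bulk -- bulk (total) volume of the sample (cm^3)

    With the sample cell initially at zero gauge pressure, Boyle's law
    (P1*V1 = P2*V2) gives p1*v_ref = p2*(v_ref + v_cell - v_solid).
    """
    v_solid = v_cell - v_ref * (p1 - p2) / p2   # solid fraction of the sample
    v_pore = v_bulk - v_solid                   # pore volume = total - solid
    return v_pore / v_bulk                      # porosity as a fraction

# Hypothetical readings for illustration only:
phi = porosity_from_pycnometer(v_ref=50.0, v_cell=60.0,
                               p1=200.0, p2=120.0, v_bulk=40.0)
print(f"porosity = {phi:.2f}")  # 0.33 for these illustrative numbers
```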
Pore size and pore size distribution Pore size Typically, the effective radius of the pore body or neck is used to define the size of pores. The position, shape, and connection of pores in solids are only a few of their numerous attributes, and the most straightforward aspect of a pore to visualize is likely its size, or its extent in a single spatial dimension. In comparison to other factors like pore shape, it is arguable that pore size has the biggest or broadest impact on the characteristics of solids. Therefore, using pore size or pore size distribution to describe and contrast various porous substances is definitely convenient and valuable. The three main pore size ranges in the current classification recommended by the International Union of Pure and Applied Chemistry (IUPAC) are micropores, with widths below 2 nm; mesopores, with widths between 2 nm and 50 nm; and macropores, with widths above 50 nm. Pore size distribution The relative abundance of each pore size in a typical volume of soil is represented by the pore size distribution. It is represented by the function f(r), whose value is proportional to the total volume of all pores whose effective radius is within an infinitesimal range centered on r; f(r) can be thought of as having textural and structural components. Measuring pore size distribution Mercury intrusion porosimetry and gas adsorption are common techniques for determining the pore size distribution of materials and power sources. When studying the pore size distribution with the gas adsorption technique, using the nitrogen or argon adsorption isotherm at their boiling temperatures, it is possible to determine pore sizes from the molecular level to a few hundred nm. The precision limits of the pressure sensor and the temperature stability of the coolant restrict the maximum observable pore size to a little more than 100 nm in a realistic environment. Mercury porosimetry determines the pore size distribution by applying pressure to the non-wetting mercury and quantifying the associated intrusion volume. Pore sizes ranging from a few nm to 1000 μm may be readily estimated using this method. The material must be robust enough to withstand the pressure, since mercury intrusion requires 140 MPa of pressure for pores smaller than 10 nm. Additionally, because the mercury enters through pore constrictions, for ink-bottle pores the method determines the size of the neck rather than of the pore body.
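To make the mercury-intrusion analysis above concrete, the sketch below converts applied pressures to intruded pore radii via the Washburn equation, a standard model for mercury porosimetry, and differentiates the cumulative intrusion curve to obtain a differential pore size distribution. The surface tension and contact angle are typical textbook values, and the pressure and volume data are hypothetical.

```python
import numpy as np

GAMMA = 0.485              # surface tension of mercury (N/m), typical value
THETA = np.radians(140.0)  # mercury contact angle, typical value

def washburn_radius(pressure_pa):
    """Pore (throat) radius in metres intruded at a given pressure in Pa,
    from the Washburn equation r = -2*gamma*cos(theta)/P."""
    return -2.0 * GAMMA * np.cos(THETA) / pressure_pa

# Hypothetical porosimeter record: applied pressures (Pa) and cumulative
# intruded mercury volume (mL per gram of sample).
pressures = np.array([1e5, 1e6, 1e7, 1e8, 1.4e8])
cum_volume = np.array([0.00, 0.05, 0.20, 0.32, 0.35])

radii = washburn_radius(pressures)   # higher pressure -> smaller pores
log_r = np.log10(radii)
# Differential distribution dV/dlog10(r); the minus sign accounts for the
# radius decreasing as the pressure (and intruded volume) increases.
dv_dlogr = -np.gradient(cum_volume, log_r)

for r, d in zip(radii, dv_dlogr):
    print(f"r = {r * 1e9:10.1f} nm   dV/dlog10(r) = {d:.3f} mL/g")
```

Consistent with the text, the 140 MPa point corresponds to a throat radius of roughly 5 nm, i.e., probing pores smaller than 10 nm requires pressures of this order.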
The relation of pore size to pore size distribution The relationship between pore size and pore size distribution in a randomly constructed porous system is expected to be monotone: bigger pores are connected to larger particles. The relationship between pore size and particle size is complicated by the nonrandom nature of most soils. Big pores may be found among both large and tiny particles, including clays, which promote aggregation and therefore the development of large interaggregate pores. Subdivisions of a pore size distribution in randomly structured media can express more specific characteristics of soils with more complex conceptualizations, such as the hysteresis of soil water retention. Pore morphology The pore morphology is the shape, surface roughness, and tortuosity of pore channels representing the liquid and gaseous phases. Tortuosity of pore channels Tortuosity of pore channels is a distinctive geometric quantity that is utilized not only to measure the transport characteristics of a porous system, but also to express the sinuosity and complexity of internal percolation routes. Tortuosity is intimately connected to the transport behavior of electrical conductivity, fluid permeation, molecular diffusion, and heat transfer in geoscience, impacting petrophysical parameters such as permeability, effective diffusivity, thermal conductivity, and the formation resistivity factor. Surface roughness The standard definition of surface roughness for a porous medium is based on the average measured vertical coordinate value in comparison to a relative surface height, such as root-mean-square roughness or arithmetic roughness. However, the lack of consideration of fractal topology led to the relative-surface-height definition being deemed inadequate in practice. The ratio of "real surface area" to "geometric smooth-surface area" is used as the second definition of surface roughness. This definition has been applied in several studies to modify flow equations or to measure the fluid-fluid interfacial area. The third definition of surface roughness comes from the fundamental ideas of fractal geometry: fractal-dimension adjustments modify either the pore surfaces (two-dimensional) or the whole porous medium (three-dimensional), resulting in larger surface dimensions or reduced media dimensions. The Hurst roughness exponent, a similar definition, is occasionally used. This quantity, which spans from 0 to 1, is connected to the fractal dimension. See also Bulk density Filtration Permeability Poromechanics Porous medium Reactive transport modeling in porous media References Further reading Bear, J. (1972). Dynamics of Fluids in Porous Media. New York: Elsevier. Hillel, D. (2004). Introduction to Environmental Soil Physics. Amsterdam: Elsevier/Academic Press. Leeper, G.W. (1993). Soil Science: An Introduction. Carlton, Victoria: Melbourne University Press. External links Geology Buzz: Porosity Defining Permeability Tailoring porous media to control permeability Permeability of Porous Media Graphical depiction of different flow rates through materials of differing permeability Fundamentals of Fluid Flow in Porous Media Hydrology Soil mechanics Soil science Porous media In situ geotechnical investigations
Pore structure
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
1,831
[ "Hydrology", "Applied and interdisciplinary physics", "Porous media", "Soil mechanics", "Materials science", "Environmental engineering" ]
71,165,099
https://en.wikipedia.org/wiki/Secure%20element
A secure element (SE) is a secure operating system (OS) in a tamper-resistant processor chip or secure component. It can protect assets (root of trust, sensitive data, keys, certificates, applications) against high-level software and hardware attacks. Applications that process this sensitive data on an SE are isolated and so operate within a controlled environment not affected by software (including possible malware) found elsewhere on the OS. The hardware and embedded software meet the requirements of the Security IC Platform Protection Profile [PP 0084] including resistance to physical tampering scenarios described within it. More than 96 billion secure elements were produced and shipped between 2010 and 2021. SEs exist in various form factors, as devices such as smart cards, UICCs, or smart microSD cards, or embedded, or integrated, as parts of larger devices. SEs are an evolution of the chips in earlier smart cards, which have been adapted to suit the needs of numerous use cases, such as smartphones, tablets, set-top boxes, wearables, connected cars, and other internet of things (IoT) devices. The technology is widely used by technology firms such as Oracle, Apple and Samsung. SEs provide secure isolation, storage and processing for applications (called applets) they host while being isolated from the external world (e.g. rich OS and application processor when embedded in a smartphone) and from other applications running on the SE. Java Card and MULTOS are the most deployed standardized multi-application operating systems currently used to develop applications running on SEs. Since 1999, GlobalPlatform has been the body responsible for standardizing secure element technologies to support a dynamic model of application management in a multi-actor model. GlobalPlatform also runs Functional and Security Certification programmes for secure elements, and hosts a list of Functional Certified and Security Certified products. GlobalPlatform technology is also embedded in other standards such as ETSI SCP (now SET) since release 7. A Common Criteria Secure Element Protection Profile has been released targeting EAL4+ level with ALC_DVS.2 and AVA_VAN.5 extension to standardize the security features of a secure element across markets. References Computer security Computer hardware
Secure element
[ "Technology", "Engineering" ]
456
[ "Computer engineering", "Computer hardware", "Computer systems", "Computer science", "Computers" ]
71,165,100
https://en.wikipedia.org/wiki/Jens%20Lindhard
Jens Lindhard (26 February 1922 – 15 October 1997) was a Danish physicist and professor at Aarhus University working on condensed matter physics, statistical physics, and special relativity. He was the president of the Royal Danish Academy of Sciences and Letters between 1981 and 1988. He is known for the development of the Lindhard theory, named in his honour, which describes the behaviour of metals under the influence of electromagnetic fields. He is also known for the development of channelling theory, which describes the path of a charged particle in a crystalline solid. Early life Jens Lindhard was born in Tystofte, Denmark, in 1922, the son of Erik and Agnes Lindhard. He was the youngest of six children, four girls and two boys. Jens' father was a professor at the University of Copenhagen Faculty of Life Sciences but died young in 1928. Jens' older brother served on bombers in England and died during the Invasion of Normandy. Jens went to school at the Metropolitanskolen. Later, during his university studies, Jens joined the Danish Brigade in Sweden and returned with it for the defence of the Danish-German border. He started his studies in physics at the Niels Bohr Institute, and in 1945 he received a Master of Science degree in physics from the University of Copenhagen. Research During his university years, he worked under the supervision of Oskar Klein in Sweden on superconductivity, publishing his first major work on the subject in 1944. Later he moved to work with Rudolf Peierls at the University of Birmingham. There, in 1954, he published the first description of the dielectric function of metals in the linear response regime, today known as Lindhard theory. In 1950, he worked in close collaboration with Niels Bohr at Blegdamsvej on the penetration of particles in matter. There Lindhard, Morten Scharff and H. E. Schiøt developed what is now known as the LSS theory (carrying their initials), which describes the penetration of low-energy ions. He also worked on fundamental problems related to statistical physics and relativity. As a teaching assistant, he helped to verify the formulas and problems in Christian Møller's The Theory of Relativity. Later Lindhard would provide a solution to the controversy related to the transformation of temperature in special relativity. Lindhard moved to Aarhus University in 1957, where, in collaboration with experimentalist Karl Ove Nielsen, he created and led a research group to study the penetration of charged particles in crystal lattices. During his time in Aarhus, Lindhard developed, in 1965, what is now known as the classical theory of channelling (sometimes also referred to as Lindhard's theory) in continuum models. Awards and membership Jens Lindhard received several awards, including the Rigmor and Carl Holst-Knudsen Award for Scientific Research in 1965, the H. C. Ørsted Medal in 1974, the Danish Physical Society Physics award in 1988, and honorary doctorate degrees from Odense University in 1996 and Fudan University, Shanghai, in 1997. Jens Lindhard was a member of the Royal Danish Academy of Sciences and Letters from 1962 and served as its president from 1981 to 1988. He was also a member of the Koninklijke Hollandsche Maatschappij der Wetenschappen from 1984. References 1922 births 1997 deaths 20th-century Danish physicists Condensed matter physicists Academic staff of Aarhus University University of Copenhagen alumni
Jens Lindhard
[ "Physics", "Materials_science" ]
702
[ "Condensed matter physicists", "Condensed matter physics" ]
71,165,287
https://en.wikipedia.org/wiki/Non-relativistic%20gravitational%20fields
Within general relativity (GR), Einstein's relativistic gravity, the gravitational field is described by the 10-component metric tensor. However, in Newtonian gravity, which is a limit of GR, the gravitational field is described by a single-component Newtonian gravitational potential. This raises the question of how to identify the Newtonian potential within the metric, and what the physical interpretation of the remaining 9 fields is. The definition of the non-relativistic gravitational fields provides the answer to this question, and thereby describes the image of the metric tensor in Newtonian physics. These fields are not strictly non-relativistic. Rather, they apply to the non-relativistic (or post-Newtonian) limit of GR. A reader who is familiar with electromagnetism (EM) will benefit from the following analogy. In EM, one is familiar with the electrostatic potential and the magnetic vector potential. Together, they combine into the 4-vector potential, which is compatible with relativity. This relation can be thought of as representing the non-relativistic decomposition of the electromagnetic 4-vector potential. Indeed, a system of point-particle charges moving slowly with respect to the speed of light may be studied in an expansion in v/c, where v is a typical velocity and c is the speed of light. This expansion is known as the post-Coulombic expansion. Within this expansion, the electrostatic potential contributes to the two-body potential already at 0th order, while the magnetic vector potential contributes only from the 1st order and onward, since it couples to electric currents and hence the associated potential is suppressed by powers of the velocity. Definition In the non-relativistic limit, of weak gravity and non-relativistic velocities, general relativity reduces to Newtonian gravity. Going beyond the strict limit, corrections can be organized into a perturbation theory known as the post-Newtonian expansion. As part of that, the metric gravitational field is redefined and decomposed into the non-relativistic gravitational (NRG) fields: the Newtonian potential, the gravito-magnetic vector potential, and finally a 3d symmetric tensor known as the spatial metric perturbation. The field redefinition expresses the metric, as a line element and component by component, in terms of these fields (a common convention is sketched below). Counting components, the metric has 10, while the Newtonian potential has 1, the vector potential has 3, and finally the symmetric tensor has 6. Hence, in terms of components, the decomposition reads 10 = 1 + 3 + 6. Motivation for definition In the post-Newtonian limit, bodies move slowly compared with the speed of light, and hence the gravitational field is also slowly changing. Approximating the fields to be time independent, the Kaluza-Klein reduction (KK) was adapted to apply to the time direction. Recall that in its original context, the KK reduction applies to fields which are independent of a compact spatial fourth direction. In short, the NRG decomposition is a Kaluza-Klein reduction over time. The definition was essentially introduced, then interpreted in the context of the post-Newtonian expansion, and finally the normalization of the vector potential was changed to improve the analogy between a spinning object and a magnetic dipole. Relation with standard approximations By definition, the post-Newtonian expansion assumes a weak field approximation. Within the first-order perturbation of the metric around the Minkowski metric, we find the standard weak field decomposition into a scalar, a vector, and a tensor, which is similar to the non-relativistic gravitational (NRG) fields.
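The defining formulas were elided above. As a hedged sketch, in units with c = 1 and in one convention used in the literature on NRG fields (signs and normalizations vary between papers, so this illustrates the structure rather than fixing the article's exact convention), the field redefinition is the Kaluza-Klein ansatz over time:

```latex
ds^2 \;=\; e^{2\phi}\bigl(dt - A_i\,dx^i\bigr)^2
       \;-\; e^{-2\phi}\,\bigl(\delta_{ij}+\sigma_{ij}\bigr)\,dx^i\,dx^j ,
```

which in components reads

```latex
g_{00} = e^{2\phi}, \qquad
g_{0i} = -\,e^{2\phi} A_i, \qquad
g_{ij} = -\,e^{-2\phi}\bigl(\delta_{ij}+\sigma_{ij}\bigr) + e^{2\phi} A_i A_j ,
```

where phi is the Newtonian potential, A_i the gravito-magnetic vector potential, and sigma_ij the spatial metric perturbation. In an arbitrary spacetime dimension d, the same ansatz is used with the spatial conformal factor e^{-2phi} replaced by e^{-2phi/(d-3)}, which reduces to the form above for d = 4; this is the generalization referred to at the end of the article.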
The importance of the NRG fields is that they provide a non-linear extension, thereby facilitating computation at higher orders in the weak field / post-Newtonian expansion. Summarizing, the NRG fields are adapted for the higher-order post-Newtonian expansion. Physical interpretation The scalar field is interpreted as the Newtonian gravitational potential. The vector field is interpreted as the gravito-magnetic vector potential. It is magnetic-like, or analogous to the magnetic vector potential in electromagnetism (EM). In particular, it is sourced by massive currents (the analogue of charge currents in EM), namely by momentum. As a result, the gravito-magnetic vector potential is responsible for current-current interaction, which appears at the 1st post-Newtonian order. In particular, it generates a repulsive contribution to the force between parallel massive currents. However, this repulsion is overturned by the standard Newtonian gravitational attraction, since in gravity a current "wire" must always be massive (charged), unlike in EM. A spinning object is the analogue of an electromagnetic current loop, which forms a magnetic dipole, and as such it creates a magnetic-like dipole field in the gravito-magnetic vector potential. The symmetric tensor is known as the spatial metric perturbation. From the 2nd post-Newtonian order and onward, it must be accounted for. If one restricts to the 1st post-Newtonian order, the spatial metric perturbation can be ignored, and relativistic gravity is described by the scalar and vector fields. Hence it becomes a strong analogue of electromagnetism, an analogy known as gravitoelectromagnetism. Applications and generalizations The two-body problem in general relativity holds both intrinsic interest and observational, astrophysical interest. In particular, it is used to describe the motion of binary compact objects, which are the sources for gravitational waves. As such, the study of this problem is essential for both the detection and the interpretation of gravitational waves. Within this two-body problem, the effects of GR are captured by the two-body effective potential, which is expanded within the post-Newtonian approximation. Non-relativistic gravitational fields were found to economize the determination of this two-body effective potential. Generalizations In higher dimensions, with an arbitrary spacetime dimension, the definition of the non-relativistic gravitational fields generalizes by giving the spatial conformal factor a dimension-dependent exponent, as sketched in the definition above. Substituting a spacetime dimension of four reproduces the standard 4d definition. See also Non-relativistic spacetime References General relativity
Non-relativistic gravitational fields
[ "Physics" ]
1,211
[ "General relativity", "Theory of relativity" ]
71,165,334
https://en.wikipedia.org/wiki/Immune%20system%20contribution%20to%20regeneration
Immune system contribution to regeneration of tissues generally involves specific cellular components, transcription of a wide variety of genes, morphogenesis, epithelial renewal, and proliferation of damaged cell types (progenitor or tissue-resident stem cells). However, current knowledge includes a growing number of studies on immune system influence that cannot be omitted. As the immune system exhibits inhibitory or inflammatory functions during regeneration, therapies focus either on stopping these processes or on steering the immune cells toward a regenerative setting, suggesting that the interplay between damaged tissue and the immune system response must be well balanced. Recent studies provide evidence that immune components are required not only after body injury but also in homeostasis or the replacement of senescent cells. Macrophages Both phenotypes of macrophages (M1 and M2) are among the most important regenerative components of the immune system, as their dysfunction inhibits tissue repair and blastema formation. M1 macrophages, known as pro-inflammatory (secreting the cytokines IL-1, IL-6, TNF-α, and IFN-γ), play a crucial role in pathogen phagocytosis, the clearance of cell debris, and the secretion of molecules that promote inflammation, in contrast with M2 macrophages (anti-inflammatory macrophages secreting IL-10 and VEGF), which inhibit inflammation and initiate regenerative processes at the site of injury. Both must be polarized correctly and at the right time during the healing processes. T-regulatory cells During skeletal muscle regeneration, T-reg cells accumulate at the site of injury as a response to IL-33. T-reg cells directly induce the M1/M2 phenotype of macrophages, thereby changing the outcome and managing the processes in time. Another important function of T-regs is the activation of muscle cell precursors and the proliferation of these cells by growth factors, for example amphiregulin. Scavenger cells Immune components are necessary for the clearance of cellular debris in order to avoid toxic products of dead or necrotic cells and to create space for the renewal of tissue and its incorporation into the organ. The main cells involved in this particular process are M1 macrophages, also called scavengers. Phagocytosis of dead tissue can consequently activate the signaling cascade necessary for regeneration. For instance, macrophage phagocytosis of dead or necrotic hepatocytes in the liver induces Wnt expression, which can influence the proliferation and differentiation of hepatic progenitor cells into liver cells. Stem and progenitor cells regulation Immune cells, under the control of inflammatory cytokines and the local setting, secrete molecules that can promote proliferation and differentiation of progenitors and stem cells and, in certain organisms, also dedifferentiation of the tissue. For example, in zebrafish, regeneration of nerve tissue follows brain injury and inflammation that activate microglia and leukocytes. The secretion of leukotriene C4 consequently activates the radial glial cells (neural progenitors) and induces regeneration. Also, neutrophils and macrophages in rats secrete the growth factor oncomodulin, which supports axonal regeneration in the CNS. Microglia and macrophages together help in oligodendrocyte remyelination. Intestinal injury of the epithelia activates macrophages that secrete a wide range of survival and growth progenitor factors, which is very similar to muscle regeneration. M1 macrophages induce a proliferative environment by secreting the cytokines IL6, TNF, IL1, and G-CSF. 
Dedifferentiation Dedifferentiation is a pathway in which already differentiated cells revert along the process of differentiation. Cells lose their differentiated setting and become progenitor or stem cells again. Afterwards, they can differentiate again into other cell types (usually of the tissue of origin). Thus, dedifferentiation provides the ability to regenerate in the absence or scarcity of stem or progenitor cells. Recent studies identified macrophages as an initiating factor that contributes to the dedifferentiation of cells at the site of injury and promotes the formation of the progenitor cell pool during limb regeneration in the salamander. Molecules such as Oncostatin M are considered mediators of cardiomyocyte dedifferentiation and morphogenesis factors during myocardial infarction and chronic cardiomyopathy. Angiogenesis Angiogenesis and the branching of blood vessels depend on eosinophils, mast cells, and myeloid cells during development. During regeneration, blood flow is necessary to support tissue repair and remodeling. The key macrophage-expressed ligands Wnt5a and Wnt11 enhance the expression of the inhibitory VEGF receptor Flt1, so that blocking this pathway supports vascularization. Another study focused on heart injury and found that during the late phase of scar formation, M2 macrophages are needed for vascularization, together with fibrosis, to form a scar. Monocyte depletion impaired heart regeneration due to insufficient neoangiogenesis in mice. Even though different types of macrophages are involved in a wide range of processes that are still uncertain, the study suggests that macrophages promoting human heart regeneration might promote angiogenesis without fibroblast activation. References Immune system
Immune system contribution to regeneration
[ "Biology" ]
1,104
[ "Immune system", "Organ systems" ]
68,265,416
https://en.wikipedia.org/wiki/Armourstone
Armourstone is a generic term for broken stone, with stone masses in the very coarse aggregate range, that is suitable for use in hydraulic engineering. Dimensions and characteristics for armourstone are laid down in European Standard EN13383. In the United States, there are a number of different standards and publications setting out different methodologies for classifying armourstone, ranging from weight-based classifications to gradation curves and size-based classifications. Stone Classes European Practice to EN13383 Armourstone is available in standardised stone classes, defined by both a lower and an upper value of the stone mass within these classes. For instance, Class 60-300 signifies that up to 10% of the stones may weigh less than 60 kg and up to 30% may weigh more than 300 kg. The standard also mentions values which shouldn't be exceeded by 5% or 3% of the stones. For particular applications like a top layer for a breakwater or bank protection, the median stone mass, known as M50, is frequently required. This pertains to a category A stone; it doesn't relate to category B stone. There are two main groups: HM and LM, standing for Heavy and Light respectively. A stone class might be defined according to EN 13383 as, for instance, HMA300-1000. The accompanying graphs offer an overview of all stone classes. A distribution between the two curves in the graph fulfils the criteria for category B. Furthermore, for category A compliance, the MEM should intersect the short horizontal line. MEM represents the average stone mass, meaning the total sample mass divided by the count of stones in that sample. It's worth noting that in wider ranges, notably 15-300 and 40-400, there's a considerable difference; for the 15-300 class, M50 is 1.57 times the MEM. Additionally, there's a defined stone class called CP (Coarse). Despite its name suggesting otherwise, the class CP is smaller than LM. This naming convention exists because this class corresponds to the coarse category in the standard for fractional stone used as supplemental material (aggregate). For the CP stone class, size isn't denoted in kg, but in mm. Practice in the United States Several standards and guidelines are identified for classifying armourstone used in coastal and river engineering in the United States. These standards provide different methodologies for classifying armourstone, ranging from weight-based classifications to gradation curves and size-based classifications. Guidance for the use of large armourstone is given in various USACE publications including the Coastal Engineering Manual. Median Stone Mass M50 For fine-grained materials, such as sand, the size is typically represented by the median diameter. This measurement is ascertained by sieving the sand. However, for armourstone, producing a sieve curve isn't feasible because the stones are too large for sieving. Therefore, the M50 measurement is employed. It is calculated by obtaining a sample of stones, determining the mass of each stone, arranging these masses by size, and then creating a cumulative mass curve. Within this curve, one can identify the M50 value. It's essential to note that the term median stone mass is technically inaccurate, as the stone with mass M50 doesn't necessarily represent the median stone in the sample.
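A minimal Python sketch of this procedure follows; it also anticipates the nominal-diameter conversion discussed in the next section. The eight-stone sample is purely illustrative (a real EN 13383 sample requires at least 200 stones), and the density is the limestone value quoted for the Bulgarian worked example below.

```python
import numpy as np

def m50(masses_kg):
    """Mass at which the cumulative mass curve of the sorted sample
    crosses 50% of the total sample mass."""
    m = np.sort(np.asarray(masses_kg, dtype=float))
    cum_frac = np.cumsum(m) / m.sum()           # cumulative mass fraction
    return float(np.interp(0.5, cum_frac, m))   # read off the 50% mark

def dn50(m50_kg, density_kg_m3):
    """Nominal diameter: edge length of a cube with the stone's mass."""
    return (m50_kg / density_kg_m3) ** (1.0 / 3.0)

# Illustrative sample of individual stone masses (kg):
sample = [6.0, 9.0, 14.0, 21.0, 24.0, 27.0, 31.0, 38.0]
print(f"M50  = {m50(sample):.1f} kg")
# With the limestone density of 2284 kg/m^3 from the worked example below,
# an M50 of 24 kg gives a nominal diameter of about 0.22 m:
print(f"dn50 = {dn50(24.0, 2284.0):.2f} m")
```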
To illustrate, consider a sample of 50 stones sourced from a quarry in Bulgaria. The blue rectangle is of A4 size. Every stone's weight is individually recorded, and their masses are illustrated in the attached graph. The horizontal axis represents the individual stone mass, while the vertical axis denotes the cumulative mass as a percentage of the entire sample's mass. At the 50% mark, the M50 value is discerned to be 24 kg. The true median for this sample is the mean mass of the 25th and 26th stones. In this specific instance, the M50 closely matches the median mass, which is 26 kg. This sample meets the criteria for LMA5-40. However, it's important to note that the sample size is insufficient: according to EN13383, such a sample should comprise at least 200 stones. Nominal Diameter Many design formulas do not account for stone mass but rather for diameter. As a result, a method for conversion is required. This method is identified as the nominal diameter. Essentially, it represents the size of a cube's edge that weighs the same as the stone. The formula for this is: dn = (M/ρs)^(1/3), where M is the stone mass and ρs the density of the stone. Often, the median value is utilised for this purpose, represented as dn50. Typically, a relationship of the form dn50 = Fs · d50 can be used for conversion between the nominal diameter and the sieve diameter. Here, Fs represents the shape factor. The shape factor can vary substantially, typically ranging between 0.7 and 0.9. Referring to the aforementioned example from Bulgaria, the dn50 was also determined. Given that the local stone's density (it is limestone) stands at 2284 kg/m³, the dn50 is calculated to be 22 cm. It may be observed that the stones in the sample appear much larger to the eye. This visual misperception can be attributed to a few particularly large stones within the sample, which distort the overall impression. Additional Parameters The EN13383 standard elaborates on numerous parameters that define the quality of armourstone. This includes attributes like the shape parameter (measured as length/thickness), resistance to fracturing, and the capacity for water absorption. It's pivotal to understand that while the standard delineates how to characterise the quality of armourstone, it doesn't specify the requisite quality for a given application. Such specifics are typically found in design manuals and guidelines, including the Rock Manual. Establishing the Necessary Stone Weight When determining the weight of stone required under the influence of waves, one might utilise the (now dated) Hudson Formula or the Van der Meer formula. For computations pertaining to stone weight in flows, the Izbash formula is advisable. References Hydraulic engineering
Armourstone
[ "Physics", "Engineering", "Environmental_science" ]
1,253
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
68,265,421
https://en.wikipedia.org/wiki/Alexander%20Moiseevich%20Olevskii
Alexander Moiseevich Olevskii (Russian: Александр Моисеевич Олевский, born February 12, 1939, in Moscow) is a Russian-Israeli mathematician at Tel Aviv University, specializing in mathematical analysis. As of July 2021, he is a professor emeritus. He graduated in 1963 with a Candidate of Sciences degree (PhD) from Moscow State University. There he received in 1966 a Russian Doctor of Sciences degree (habilitation). At the Moscow Institute of Electronics and Mathematics, he was from 1988 to 1992 head of the department of algebra and analysis. In the spring of 1996 he was at the Institute for Advanced Study. He has held visiting appointments at universities or institutes in several countries, including France, Australia, Germany, Italy, and the United States. In 1986 Olevskii was an invited speaker at the International Congress of Mathematicians in Berkeley, California. He was a member of the 2013 Class of Fellows of the American Mathematical Society (announced in 2012). In 2014 he was an invited speaker at the European Congress of Mathematics in Kraków. His doctoral students include Gady Kozma. Selected publications References External links (publication list) 1939 births Living people Soviet mathematicians Israeli mathematicians Moscow State University alumni Academic staff of the Moscow Institute of Electronics and Mathematics Academic staff of Tel Aviv University Functional analysts Mathematical analysts Operator theorists Soviet Jews Soviet emigrants to Israel Israeli people of Russian-Jewish descent Fellows of the American Mathematical Society
Alexander Moiseevich Olevskii
[ "Mathematics" ]
304
[ "Mathematical analysis", "Mathematical analysts" ]
68,265,746
https://en.wikipedia.org/wiki/Ira%20N.%20Levine
Ira N. Levine (February 12, 1937 – December 17, 2015) was an American author, scientist, professor, and faculty member in the chemistry department at Brooklyn College. He was widely acknowledged for his research in the field of microwave spectroscopy and for several widely known textbooks in physical chemistry and quantum chemistry. Biography Levine was born in Brooklyn, New York. He graduated from Erasmus Hall High School. In 1952, Levine graduated with honors in chemistry and was named top scholastic honor graduate in the College of Engineering and Science at Carnegie Mellon College of Engineering. In 1959, he went on to graduate school in the field of physical chemistry and mathematical physics at Harvard University. In 1963, he was awarded a PhD in chemistry by Harvard University under the guidance of Professor E. Bright Wilson. He started his academic career at Brooklyn College in 1964, where he taught first-year courses in general chemistry as well as advanced courses in physical and quantum chemistry. He became a full professor in 1978. Levine is recognized for several textbooks he authored and for his research in physical chemistry, quantum chemistry, and microwave spectroscopy. Levine's textbooks include Quantum Chemistry (7th ed.), Physical Chemistry (6th ed.), Solutions Manual to Physical Chemistry (5th ed.), and a textbook on Molecular Spectroscopy. His textbooks have been translated into many languages, including Arabic, Chinese, Czech, Hungarian, Polish, and Spanish, and they have been used by many chemistry departments in the US and elsewhere. He died on December 17, 2015. References 1937 births 2015 deaths 20th-century American chemists American physical chemists Theoretical chemistry American quantum physicists Brooklyn College faculty Harvard University alumni Carnegie Mellon University College of Engineering alumni Erasmus Hall High School alumni
Ira N. Levine
[ "Chemistry" ]
344
[ "Theoretical chemistry", "nan" ]
68,270,636
https://en.wikipedia.org/wiki/Linearized%20augmented-plane-wave%20method
The linearized augmented-plane-wave method (LAPW) is an implementation of Kohn-Sham density functional theory (DFT) adapted to periodic materials. It typically goes along with the treatment of both valence and core electrons on the same footing in the context of DFT and the treatment of the full potential and charge density without any shape approximation. This is often referred to as the all-electron full-potential linearized augmented-plane-wave method (FLAPW). It does not rely on the pseudopotential approximation and employs a systematically extendable basis set. These features make it one of the most precise implementations of DFT, applicable to all crystalline materials, regardless of their chemical composition. It can be used as a reference for evaluating other approaches. Introduction At the core of density functional theory, the Hohenberg-Kohn theorems state that every observable of an interacting many-electron system is a functional of its ground-state charge density and that this density minimizes the total energy of the system. The theorems do not answer the question of how to obtain such a ground-state density. A recipe for this is given by Walter Kohn and Lu Jeu Sham, who introduce an auxiliary system of noninteracting particles constructed such that it shares the same ground-state density with the interacting particle system. The Schrödinger-like equations describing this system are the Kohn-Sham equations. With these equations one can calculate the eigenstates of the system and, with these, the density. One contribution to the Kohn-Sham equations is the effective potential, which itself depends on the density. As the ground-state density is not known before a Kohn-Sham DFT calculation and it is an input as well as an output of such a calculation, the Kohn-Sham equations are solved in an iterative procedure together with a recalculation of the density and the potential in every iteration. It starts with an initial guess for the density, and after every iteration a new density is constructed as a mixture of the output density and previous densities. The calculation finishes as soon as a fixpoint of a self-consistent density is found, i.e., input and output density are identical. This is the ground-state density. A method implementing Kohn-Sham DFT has to realize these different steps of the sketched iterative algorithm. The LAPW method is based on a partitioning of the material's unit cell into non-overlapping but nearly touching so-called muffin-tin (MT) spheres, centered at the atomic nuclei, and an interstitial region (IR) in between the spheres. The physical description and the representation of the Kohn-Sham orbitals, the charge density, and the potential are adapted to this partitioning. In the following, this method design and the extraction of quantities from it are sketched in more detail. Variations and extensions are indicated. Solving the Kohn-Sham equations The central aspect of practical DFT implementations is the question of how to solve the Kohn-Sham equations, which involve the single-electron kinetic energy operator, the effective potential, and the Kohn-Sham states, labelled by a Bloch vector and a band index, together with their energy eigenvalues. While in abstract evaluations of Kohn-Sham DFT the model for the exchange-correlation contribution to the effective potential is the only fundamental approximation, in practice solving the Kohn-Sham equations is accompanied by the introduction of many additional approximations.
These include the incompleteness of the basis set used to represent the Kohn-Sham orbitals, the choice of whether to use the pseudopotential approximation or to consider all electrons in the DFT scheme, the treatment of relativistic effects, and possible shape approximations to the potential. Beyond the partitioning of the unit cell, for the LAPW method the central design aspect is the use of the LAPW basis set to represent the valence electron orbitals as linear combinations of basis functions with state-dependent expansion coefficients. The LAPW basis is designed to enable a precise representation of the orbitals and an accurate modelling of the physics in each region of the unit cell. Considering a unit cell covering a set of atoms at given positions, an LAPW basis function is characterized by a reciprocal lattice vector and the considered Bloch vector. It is a plane wave in the IR and, in each MT sphere, a linear combination of two radial functions multiplied by spherical harmonics in the position vector relative to the respective atomic nucleus. The first radial function is hereby the solution of the Kohn-Sham Hamiltonian for the spherically averaged potential with regular behavior at the nucleus for a given energy parameter. Together with its energy derivative, the second radial function, these augmentations of the plane wave in each MT sphere enable a representation of the Kohn-Sham orbitals at arbitrary eigenenergies, linearized around the energy parameters. The matching coefficients of the two radial functions are automatically determined by enforcing the basis function to be continuously differentiable for the respective channel. The set of LAPW basis functions is defined by specifying a plane-wave cutoff parameter. In each MT sphere, the expansion into spherical harmonics is limited to a maximum angular momentum, typically chosen in proportion to the product of the plane-wave cutoff and the muffin-tin radius of the respective atom. The choice of this cutoff is connected to the decay of the expansion coefficients for growing angular momentum in the Rayleigh expansion of plane waves into spherical harmonics. While the LAPW basis functions are used to represent the valence states, core electron states, which are completely confined within a MT sphere, are calculated for the spherically averaged potential on radial grids, for each atom separately applying atomic boundary conditions. Semicore states, which are still localized but slightly extended beyond the MT sphere boundary, may either be treated as core electron states or as valence electron states. For the latter choice the linearized representation is not sufficient because the related eigenenergy is typically far away from the energy parameters. To resolve this problem the LAPW basis can be extended by additional basis functions in the respective MT sphere, so-called local orbitals (LOs). These are tailored to provide a precise representation of the semicore states. The plane-wave form of the basis functions in the interstitial region makes setting up the Hamiltonian matrix for that region simple. In the MT spheres this setup is also simple and computationally inexpensive for the kinetic energy and the spherically averaged potential, e.g., in the muffin-tin approximation. The simplicity hereby stems from the connection of the radial functions to the spherical Hamiltonian in the spheres: the first radial function solves it at the energy parameter, and the energy derivative solves the corresponding equation with the first radial function as an inhomogeneity. In comparison to the MT approximation, for the full-potential description (FLAPW), contributions from the non-spherical part of the potential are added to the Hamiltonian matrix in the MT spheres, and in the IR contributions related to deviations from the constant potential are added.
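For reference, the basis function defined above is commonly written in the following form; this is a hedged sketch in one widely used normalization (prefactors and coefficient labels differ slightly between LAPW codes), with Omega the unit-cell volume, G a reciprocal lattice vector, k the Bloch vector, u_l the radial solution at the energy parameter E_l, its dotted counterpart the energy derivative, Y_lm spherical harmonics, and a, b the matching coefficients:

```latex
\varphi_{\vec{k}+\vec{G}}(\vec{r}) =
\begin{cases}
\dfrac{1}{\sqrt{\Omega}}\; e^{\mathrm{i}(\vec{k}+\vec{G})\cdot\vec{r}}
  & \vec{r} \in \mathrm{IR}, \\[1.5ex]
\displaystyle\sum_{l,m}\Bigl[
  a_{lm}^{\alpha,\vec{G}}(\vec{k})\, u_l^{\alpha}\!\bigl(r_\alpha, E_l^{\alpha}\bigr)
  + b_{lm}^{\alpha,\vec{G}}(\vec{k})\, \dot{u}_l^{\alpha}\!\bigl(r_\alpha, E_l^{\alpha}\bigr)
  \Bigr] Y_{lm}(\hat{r}_\alpha)
  & \vec{r} \in \mathrm{MT}_\alpha .
\end{cases}
```

The coefficients a and b are fixed, per atom alpha and channel (l, m), by matching value and radial derivative to the interstitial plane wave at the sphere boundary, which is the continuous-differentiability condition stated above.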
After the Hamiltonian matrix together with the overlap matrix is set up, the Kohn-Sham orbitals are obtained as eigenfunctions from the algebraic generalized dense Hermitian eigenvalue problem

$$\sum_{\mathbf{G}'} \left[ H_{\mathbf{G}\mathbf{G}'}(\mathbf{k}) - \epsilon_{j\mathbf{k}} \, S_{\mathbf{G}\mathbf{G}'}(\mathbf{k}) \right] c_{j\mathbf{k}}^{\mathbf{G}'} = 0,$$

where $\epsilon_{j\mathbf{k}}$ is the energy eigenvalue of the $j$-th Kohn-Sham state at Bloch vector $\mathbf{k}$ and the state is given as indicated above by the expansion coefficients $c_{j\mathbf{k}}^{\mathbf{G}}$. The considered degree of relativistic physics differs for core and valence electrons. The strong localization of core electrons due to the singularity of the effective potential at the atomic nucleus is connected to large kinetic energy contributions, and thus a fully relativistic treatment is desirable and common. For the determination of the radial functions $u_{l}^{\alpha}$ and $\dot{u}_{l}^{\alpha}$ the common approach is to make an approximation to the fully relativistic description. This may be the scalar-relativistic approximation (SRA) or similar approaches. The dominant effect neglected by these approximations is the spin-orbit coupling. As indicated above the construction of the Hamiltonian matrix within such an approximation is trivial. Spin-orbit coupling can additionally be included, though this leads to a more complex Hamiltonian matrix setup or a second variation scheme, connected to increased computational demands. In the interstitial region it is reasonable and common to describe the valence electrons without considering relativistic effects. Representation of the charge density and the potential After calculating the Kohn-Sham eigenfunctions, the next step is to construct the electron charge density by occupying the lowest energy eigenstates up to the Fermi level with electrons. The Fermi level itself is determined in this process by keeping charge neutrality in the unit cell. The resulting charge density then has a region-specific form,

$$\rho(\mathbf{r}) = \begin{cases} \sum\limits_{\mathbf{G}} \rho^{\mathbf{G}} \, e^{i\mathbf{G}\cdot\mathbf{r}}, & \mathbf{r} \in \mathrm{IR}, \\ \sum\limits_{l,m} \rho_{lm}^{\alpha}(r_{\alpha}) \, Y_{lm}(\hat{\mathbf{r}}_{\alpha}), & \mathbf{r} \in \mathrm{MT}^{\alpha}, \end{cases}$$

i.e., it is given as a plane-wave expansion in the interstitial region and as an expansion into radial functions times spherical harmonics in each MT sphere. The radial functions $\rho_{lm}^{\alpha}(r_{\alpha})$ hereby are numerically given on a mesh. The representation of the effective potential follows the same scheme. In its construction a common approach is to employ Weinert's method for solving the Poisson equation. It efficiently and accurately provides a solution of the Poisson equation without shape approximation for an arbitrary periodic charge density based on the concept of multipole potentials and the boundary value problem for a sphere. Postprocessing and extracting results Because they are based on the same theoretical framework, different DFT implementations offer access to very similar sets of material properties. However, the variations in the implementations result in differences in the ease of extracting certain quantities and also in differences in their interpretation. In the following, these circumstances are sketched for some examples. The most basic quantity provided by DFT is the ground-state total energy of an investigated system. To avoid the calculation of derivatives of the eigenfunctions in its evaluation, the common implementation replaces the expectation value of the kinetic energy operator by the sum of the band energies of occupied Kohn-Sham states minus the energy due to the effective potential. The force exerted on an atom, which is given by the change of the total energy due to an infinitesimal displacement, has two major contributions. The first contribution is due to the displacement of the potential. It is known as the Hellmann-Feynman force.
The other, computationally more elaborate contribution is due to the related change in the atom-position-dependent basis functions. It is often called the Pulay force and requires a method-specific implementation. Beyond forces, similar method-specific implementations are also needed for further quantities derived from the total energy functional. For the LAPW method, formulations for the stress tensor and for phonons have been realized. Independent of the actual size of an atom, evaluating atom-dependent quantities in LAPW is often interpreted as calculating the quantity in the respective MT sphere. This applies to quantities like charges at atoms, magnetic moments, or projections of the density of states or the band structure onto a certain orbital character at a given atom. Deviating interpretations of such quantities in experiments or other DFT implementations may lead to differences when comparing results. On a side note, some atom-specific LAPW inputs also relate directly to the respective MT region. For example, in the DFT+U approach the Hubbard U only affects the MT sphere. A strength of the LAPW approach is the inclusion of all electrons in the DFT calculation, which is crucial for the evaluation of certain quantities. Among these are hyperfine interaction parameters like electric field gradients, whose calculation involves the evaluation of the curvature of the all-electron Coulomb potential near the nuclei. The prediction of such quantities with LAPW is very accurate. Kohn-Sham DFT does not give direct access to all quantities one may be interested in. For example, most energy eigenvalues of the Kohn-Sham states are not directly related to the real interacting many-electron system. For the prediction of optical properties one therefore often uses DFT codes in combination with software implementing the GW approximation (GWA) to many-body perturbation theory and optionally the Bethe-Salpeter equation (BSE) to describe excitons. Such software has to be adapted to the representation used in the DFT implementation. Both the GWA and the BSE have been formulated in the LAPW context and several implementations of such tools are in use. In other postprocessing situations it may be useful to project Kohn-Sham states onto Wannier functions. For the LAPW method such projections have also been implemented and are in common use. Variants and extensions of the LAPW method APW: The augmented-plane-wave method is the predecessor of LAPW. It uses the radial solution to the spherically averaged potential for the augmentation in the MT spheres. The energy derivative of this radial function is not involved. This missing linearization implies that the augmentation has to be adapted to each Kohn-Sham state individually, i.e., it depends on the Bloch vector and the band index, which subsequently leads to a non-linear, energy-dependent eigenvalue problem. In comparison to LAPW this is a more complex problem to solve. A relativistic generalization of this approach, RAPW, has also been formulated. Local orbitals extensions: The LAPW basis can be extended by local orbitals (LOs). These are additional basis functions having nonvanishing values only in a single MT sphere. They are composed of the radial functions $u_{l}^{\alpha}$, $\dot{u}_{l}^{\alpha}$, and a third radial function tailored to describe use-case-specific physics. LOs were originally proposed for the representation of semicore states. Other uses involve the representation of unoccupied states or the elimination of the linearization error for the valence states.
APW+lo: In the APW+lo method the augmentation in the MT spheres only consists of the function $u_{l}^{\alpha}$. It is matched to the plane wave in the interstitial region only in value. As an alternative implementation of the linearization the function $\dot{u}_{l}^{\alpha}$ is included in the basis set as an additional local orbital. While the matching conditions result in an unphysical kink of the basis functions at the MT sphere boundaries, a careful consideration of the kink in the construction of the Hamiltonian matrix suppresses it in the Kohn-Sham eigenfunctions. In comparison to the classical LAPW method the APW+lo approach leads to a less stiff basis set. The outcome is a faster convergence of the DFT calculations with respect to the basis set size. Soler-Williams formulation of LAPW: In the Soler-Williams formulation of LAPW the plane waves cover the whole unit cell. In the MT spheres the augmentation is implemented by replacing, up to the angular momentum cutoff, the plane waves by the functions $u_{l}^{\alpha}$ and $\dot{u}_{l}^{\alpha}$. This yields basis functions continuously differentiable also in the channels above the angular momentum cutoff. As a consequence the Soler-Williams approach has reduced angular momentum cutoff requirements in comparison to the classical LAPW formulation. ELAPW: In the extended LAPW method pairs of local orbitals introducing the functions $u_{l}^{\alpha}$ and $\dot{u}_{l}^{\alpha}$ at additional energy parameters are added to the LAPW basis. The energy parameters are chosen to systematically extend the energy region in which Kohn-Sham states are accurately described by the linearization in LAPW. QAPW: In the quadratic APW method the augmentation in the MT spheres additionally includes the second energy derivative $\ddot{u}_{l}^{\alpha}$. The matching at the MT sphere boundaries is performed by enforcing continuity of the basis functions in value, slope, and curvature. This is similar to the super-linearized APW (SLAPW) method, in which radial functions and/or their derivatives at more than one energy parameter are used for the augmentation. In comparison to a pure LAPW basis these approaches can precisely represent Kohn-Sham orbitals in a broader energy window around the energy parameters. The drawback is that the stricter matching conditions lead to a stiffer basis set. Lower-dimensional systems: The partitioning of the unit cell can be extended to explicitly include semi-infinite vacuum regions with their own augmentations of the plane waves. This enables efficient calculations for lower-dimensional systems such as surfaces and thin films. For the treatment of atomic chains an extension to one-dimensional setups has been formulated. Software implementations There are various software projects implementing the LAPW method and/or its variants. Examples for such codes are Elk, Exciting, Flair, FLEUR, HiLAPW, and Wien2k. References Electronic structure methods Computational chemistry Computational physics Condensed matter physics
Linearized augmented-plane-wave method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,327
[ "Quantum chemistry", "Phases of matter", "Quantum mechanics", "Materials science", "Computational physics", "Theoretical chemistry", "Electronic structure methods", "Computational chemistry", "Condensed matter physics", "Matter" ]
68,272,140
https://en.wikipedia.org/wiki/Scientific%20Committee%20on%20Oceanic%20Research
The Scientific Committee on Oceanic Research (SCOR) is an interdisciplinary body of the International Science Council. SCOR was established in 1957, coincident with the International Geophysical Year of 1957-1958. It sought to bring scientists together to answer key ocean science questions and improve opportunities for marginalised scientists. From 1959 through to 1988 SCOR organised a sequence of Joint Oceanographic Assemblies. Following these, SCOR has focused its efforts on targeted scientific working groups. These small international groups are designed to address narrowly focused scientific topics based on proposals from independent groups of scientists, national committees for SCOR, other scientific organizations, or previous working groups. The working groups typically last for three to four years. SCOR activity, often through the efforts of working groups, has helped support the development of many large-scale ocean research projects. SCOR-associated programs IIOE The International Indian Ocean Expedition (IIOE) resulted from the first annual SCOR meeting, held at Woods Hole Oceanographic Institution in 1957. The meeting identified the Indian Ocean as the least known component in the global ocean system and concluded that a campaign of focused observations would be of great benefit. The initiative commenced in 1959 and observational work carried on until 1965. TOGA The Tropical Ocean-Global Atmosphere Study (TOGA) was coordinated by the World Climate Research Programme (WCRP) and made great observation-based advances in the understanding of El Niño and improved skill in predicting the occurrence of El Niño events. WOCE The World Ocean Circulation Experiment (WOCE) ran from 1990 to 2002 and aimed to gather more ocean observations in a way that enabled improved modelling tools. GEOTRACES The GEOTRACES programme was solely sponsored by SCOR and continues to advance knowledge of the oceanic contribution to global biogeochemical cycles of trace elements and their isotopes. JGOFS The Joint Global Ocean Flux Study focused on the role of the ocean in the global carbon cycle and completed its work in 2003. JGOFS was co-sponsored by SCOR and the International Geosphere-Biosphere Programme (IGBP). GLOBEC The Global Ocean Ecosystem Dynamics project, completed in 2009, focused on the relationship between physical and biological variability in the ocean and how global change might impact the structure and functioning of marine ecosystems, with particular emphasis on important fisheries. GLOBEC was co-sponsored by SCOR, IGBP, and IOC. IMBER SCOR and IGBP developed the Integrated Marine Biogeochemistry and Ecosystem Research (IMBER) project, which promotes integrated marine research through a range of research topics towards sustainable, productive and healthy oceans at a time of global change, for the benefit of society. SOLAS The Surface Ocean – Lower Atmosphere Study (SOLAS) is sponsored by SCOR, IGBP, the World Climate Research Programme (WCRP) and the Commission on Atmospheric Chemistry and Global Pollution (CACGP). It is global and multidisciplinary in its approach to understanding the key biogeochemical-physical interactions and feedbacks between the ocean and the atmosphere. Additionally, SOLAS seeks to link ocean-atmosphere interactions with climate and people. GEOHAB The Global Ecology and Oceanography of Harmful Algal Blooms project examines the ecological and oceanographic conditions that cause harmful algal blooms and promote their development. It is supported by SCOR and IOC.
IQOE The International Quiet Ocean Experiment (co-sponsored by the Partnership for Observation of the Global Oceans) is designed to examine questions around how human activities affect the global ocean soundscape compared with natural changes over geologic time. IIOE-2 The Second International Indian Ocean Expedition (co-sponsored by IOC and the Indian Ocean GOOS program) was a major global scientific program which engaged the international scientific community in collaborative oceanographic and atmospheric research, from coastal environments to the deep sea, over the period 2015-2020. SOOS The Southern Ocean Observing System (SOOS), facilitated by both SCOR and the Scientific Committee on Antarctic Research, supports observations, the associated science community, and data access, with a focus on the Southern Ocean. References International Science Council Oceanography Oceanographic organizations Scientific organizations established in 1957
Scientific Committee on Oceanic Research
[ "Physics", "Environmental_science" ]
828
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
68,272,527
https://en.wikipedia.org/wiki/Argentation%20chromatography
Argentation chromatography is chromatography using a stationary phase that contains silver salts. Silver-containing stationary phases are well suited for separating organic compounds on the basis of the number and type of alkene groups. The technique is employed for gas chromatography and various types of liquid chromatography, including thin layer chromatography. Analytes containing alkene groups elute more slowly than the analogous compounds lacking alkenes. Separations are also sensitive to the type of alkene. The technique is especially useful in the analysis of fats and fatty acids, which are well known to exist in both saturated and unsaturated (alkene-containing) forms. For example, trans fats, undesirable contaminants in ultra-processed foods, are quantified by argentation chromatography. Theory Silver ions form alkene complexes. The binding is reversible, but sufficient to impede the elution of the alkene-containing analytes. References Chromatography
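Schematically (an illustrative sketch, not notation taken from the source), the underlying complexation and its formation constant can be written as

$$\mathrm{Ag}^{+} + \text{alkene} \;\rightleftharpoons\; [\mathrm{Ag}(\text{alkene})]^{+}, \qquad K_{\mathrm{f}} = \frac{[\mathrm{Ag}(\text{alkene})^{+}]}{[\mathrm{Ag}^{+}][\text{alkene}]}.$$

The larger $K_{\mathrm{f}}$ is, the more strongly the analyte is retained; cis double bonds generally form stronger silver complexes than trans double bonds, which is the basis for cis/trans separations such as the trans-fat analysis mentioned above.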
Argentation chromatography
[ "Chemistry" ]
221
[ "Chromatography", "Separation processes" ]
68,274,287
https://en.wikipedia.org/wiki/Moessner%27s%20theorem
In number theory, Moessner's theorem or Moessner's magic is related to an arithmetical algorithm to produce the infinite sequence of the $n$-th powers $1^n, 2^n, 3^n, \ldots$ of the positive integers, with $n \geq 1$, by recursively manipulating the sequence of integers algebraically. The algorithm was first published by Alfred Moessner in 1951; the first proof of its validity was given by Oskar Perron that same year. For example, for $n = 2$, one can remove every even number, resulting in $1, 3, 5, 7, \ldots$, and then add each odd number to the sum of all previous elements, providing $1, 4, 9, 16, \ldots$ Construction Write down every positive integer and remove every $n$-th element, with $n$ a positive integer. Build a new sequence of partial sums with the remaining numbers. Continue by removing every $(n-1)$-st element in the new sequence and producing a new sequence of partial sums. For the $k$-th sequence, remove the $(n-k+1)$-st elements and produce a new sequence of partial sums. The procedure stops at the $(n-1)$-th sequence. The remaining sequence will correspond to $1^n, 2^n, 3^n, \ldots$ Example The initial sequence is the sequence of positive integers, $1, 2, 3, 4, 5, 6, \ldots$ For $n = 4$, we remove every fourth number from the sequence of integers and add up each element to the sum of the previous elements: $1, 3, 6, 11, 17, 24, 33, 43, 54, 67, \ldots$ Now we remove every third element and continue to add up the partial sums: $1, 4, 15, 32, 65, 108, 175, \ldots$ Remove every second element and continue to add up the partial sums: $1, 16, 81, 256, \ldots$, which recovers $1^4, 2^4, 3^4, 4^4, \ldots$ Variants If the triangular numbers are removed instead, a similar procedure leads to the sequence of factorials $1!, 2!, 3!, \ldots$ References External links Number theory
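The sieve-and-partial-sum procedure is straightforward to express programmatically. The following Python sketch is an illustration added here (not part of the original theorem statement); it works on a finite prefix of the positive integers, chosen long enough that the first terms of the result are unaffected by the truncation.

```python
def moessner(n, length=8):
    """Return the first `length` n-th powers 1**n, 2**n, ... generated by
    Moessner's procedure of alternating deletions and partial sums."""
    # A prefix of length*(n+1) integers leaves at least `length` terms
    # after the n-1 rounds of deletion.
    seq = list(range(1, length * (n + 1) + 1))
    for step in range(n, 1, -1):  # delete every n-th, then (n-1)-st, ..., 2nd
        kept = [x for i, x in enumerate(seq, start=1) if i % step != 0]
        sums, total = [], 0
        for x in kept:            # partial sums of the surviving numbers
            total += x
            sums.append(total)
        seq = sums
    return seq[:length]

assert moessner(2) == [k**2 for k in range(1, 9)]  # squares
assert moessner(3) == [k**3 for k in range(1, 9)]  # cubes
assert moessner(4) == [k**4 for k in range(1, 9)]  # fourth powers, as in the example
```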
Moessner's theorem
[ "Mathematics" ]
285
[ "Mathematical theorems", "Theorems in number theory", "Mathematical problems", "Number theory" ]
68,274,488
https://en.wikipedia.org/wiki/Van%20der%20Meer%20formula
The Van der Meer formula is a formula for calculating the required stone weight for armourstone under the influence of (wind) waves. This is necessary for the design of breakwaters and shoreline protection. Around 1985 it was found that the Hudson formula in use at that time had considerable limitations (it is only valid for permeable breakwaters and steep (storm) waves). That is why the Dutch government agency Rijkswaterstaat commissioned Deltares to start research for a more complete formula. This research, conducted by Jentsje van der Meer, resulted in the Van der Meer formula in 1988, as described in his dissertation. This formula reads, for plunging waves ($\xi_m < \xi_c$),

$$\frac{H_s}{\Delta d_{n50}} = c_p \, P^{0.18} \left( \frac{S}{\sqrt{N}} \right)^{0.2} \xi_m^{-0.5}$$

and, for surging waves ($\xi_m \geq \xi_c$),

$$\frac{H_s}{\Delta d_{n50}} = c_s \, P^{-0.13} \left( \frac{S}{\sqrt{N}} \right)^{0.2} \sqrt{\cot\alpha} \; \xi_m^{P}$$

In this formula: Hs = significant wave height at the toe of the construction Δ = relative density of the stone (= (ρs − ρw)/ρw) where ρs is the density of the stone and ρw is the density of the water dn50 = nominal stone diameter α = breakwater slope angle P = notional permeability S = damage number N = number of waves in the storm ξm = the Iribarren number calculated with the mean wave period Tm For design purposes, the value 5.2 is recommended for the coefficient cp and the value 0.87 for cs. The value of P can be read from the accompanying graph. Until now, there is no good method for determining P other than by comparison with the accompanying pictures. Research is under way to try to determine the value of P using calculation models that can simulate the water movement in the breakwater (OpenFOAM models). The value of the damage number S is defined as

$$S = \frac{A}{d_{n50}^{2}}$$

where A is the area of the erosion area. Permissible values for S depend on the slope of the structure; values of about 2-3 correspond to the start of damage, and values of about 8-17 (depending on the slope) to failure. References Coastal engineering Equations Coastal erosion
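A short numerical example makes the formula concrete. The following Python sketch is an illustration (the input values and variable names are assumptions for demonstration, not taken from the source); the transition value between plunging and surging breakers is written here in its commonly used form.

```python
import math

def van_der_meer_dn50(Hs, Tm, cot_alpha, Delta, P, S, N, cp=5.2, cs=0.87):
    """Required nominal stone diameter dn50 (m) by the Van der Meer formula.

    Hs: significant wave height at the toe (m); Tm: mean wave period (s);
    cot_alpha: cotangent of the breakwater slope; Delta: relative density;
    P: notional permeability; S: damage number; N: number of waves.
    """
    g = 9.81
    tan_alpha = 1.0 / cot_alpha
    s_m = 2.0 * math.pi * Hs / (g * Tm**2)        # deep-water wave steepness
    xi_m = tan_alpha / math.sqrt(s_m)             # Iribarren number
    # commonly used transition between plunging and surging breakers
    xi_c = (cp / cs * P**0.31 * math.sqrt(tan_alpha)) ** (1.0 / (P + 0.5))
    if xi_m < xi_c:   # plunging waves
        Ns = cp * P**0.18 * (S / math.sqrt(N))**0.2 * xi_m**-0.5
    else:             # surging waves
        Ns = cs * P**-0.13 * (S / math.sqrt(N))**0.2 * math.sqrt(cot_alpha) * xi_m**P
    return Hs / (Delta * Ns)  # stability number Ns = Hs / (Delta * dn50)

# Example: Hs = 3 m, Tm = 8 s, slope 1:3, rock in sea water (Delta ~ 1.6)
dn50 = van_der_meer_dn50(Hs=3.0, Tm=8.0, cot_alpha=3.0, Delta=1.6,
                         P=0.4, S=2.0, N=3000)
m50 = 2650.0 * dn50**3   # median stone mass (kg) for rho_s = 2650 kg/m3
print(f"dn50 = {dn50:.2f} m, M50 = {m50/1000:.1f} t")  # roughly 1.1 m and 4 t
```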
Van der Meer formula
[ "Mathematics", "Engineering" ]
354
[ "Coastal engineering", "Civil engineering", "Mathematical objects", "Equations" ]
78,413,970
https://en.wikipedia.org/wiki/Refugee%20workers%20in%20Vichy%20France
Refugee workers in Vichy France describes the work and lists the expatriates from several countries who assisted refugees in Vichy France during World War II, mostly from 1940 to 1942. As most European countries and British commonwealth countries such as Canada and Australia were engaged in the war, Americans and American humanitarian organizations became prominent in the task of providing aid to refugees fleeing Nazi Germany and German-controlled countries and seeking safety. Prior to the U.S. entry into World War II, "an American passport gave most Americans abroad a reasonably justified sense of invulnerability." Organizations from neutral Switzerland also assisted refugees. The refugee organizations employed or took on volunteers of many nationalities, including French people resident in Vichy. Many of the international refugee organizations came to France to aid people interned in several refugee camps in southern France. By 1942 the refugee organizations realized that Jews were the most endangered group among the many nationalities and ethnicities that made up the refugee population. Refugee workers and organizations became involved in helping refugees escape France. The protection and escape, legal or illegal, of Jewish children became the top priority of many organizations. Six thousand children, mostly Jewish, were sheltered by French families or in group homes and survived the war. Among the people who helped refugees were diplomats of several countries who issued visas, often against the regulations of their home countries, to refugees enabling them to leave France for safety in other countries. Background France's traditional view of itself as the "home of universal rights and the refuge for the persecuted in Europe" eroded in the 1930s as a result of large numbers of refugees fleeing communist rule in the Soviet Union, Nazi rule in Germany, and the defeat of the Republican faction in the Spanish Civil War. In the 1920s, after the Russian Revolution, seventy to eighty thousand Russians resettled in France. In 1933, during the first year of Nazi rule in Germany, 59,000 refugees fled Germany for France, of which 85 percent were estimated to be Jewish. The outflow of both Jews and anti-Nazis from Germany continued. By summer 1940, the Jewish population of France was estimated at 350,000, of which less than one-half were French citizens. The refugee population was augmented by La Retirada in 1939, in which more than 400,000 Spanish Republican refugees fled to France after their defeat in the Spanish Civil War. The Spanish refugees anticipated a better reception than they received. Most returned to Spain or were resettled elsewhere but, at the end of 1939, between 160,000 and 180,000 Spanish refugees remained in France. Additional anti-Nazi refugees also arrived in France from countries coming under control of Germany such as Austria, Czechoslovakia, and Poland. The large-scale flows of refugees, the economic hardships and unemployment of the Great Depression, and antisemitism contributed to more restrictive policies by the French government during the 1930s. Right-wing political parties grew in influence. In 1938, several decrees by the French government denied rights to refugees and authorized the government to set up internment camps for "undesirable" foreigners. During World War II, after the defeat of France by Germany in June 1940, the collaborationist Vichy government of southern France enacted nationalistic and antisemitic laws.
The aim of Vichy was to reinvigorate the country and exclude those, especially foreigners, Jews, Romani (gypsies), homosexuals, and communists, whom it considered harmful to the renewal of what they saw as the traditional values of France. The collaboration of the Vichy French with Nazi Germany led to the deportation of tens of thousands of refugees, mostly Jews, from Vichy and their deaths in German concentration camps. The legal process of getting a refugee out of France was complex. Many refugee workers spent most of their time on paperwork rather than clandestine adventures. To leave France required an exit permit from the Vichy government, entry visas from Spain and Portugal, and a visa to an onward destination, most commonly the United States. All of this required both time and money. Several individuals and organizations chose to smuggle refugees out of France with false documents. The refugee workers and organizations had a wide range of philosophies, ranging from strict neutrality and compliance with Vichy laws to anti-Nazi activism that helped vulnerable people escape France by any means possible. Some diplomats were obstructive, following the letter of the law and remaining bureaucratic and slow in issuing visas; others from several countries were more creative and skirted the laws of their own countries. Some of the refugee workers and organizations focused on providing aid to the internees in squalid camps scattered around Vichy France; others had the objective of rescuing people vulnerable to persecution by Vichy and its German overlords. With the deportation to Germany of Jewish refugees beginning in 1942, the plight of refugee children separated by choice or chance from their parents who had been deported or soon would be deported became the top priority of some organizations. About six thousand children were housed in group homes or given false names and histories and lived with cooperating French families. As the Germans intensified their hunt for Jews in Vichy France, many of the children were smuggled into Spain. Regarding the Jews, historian Julian Jackson said, "For 150 years the Jews of France had looked to the State to protect them if necessary from the anti-Semitic outbursts of civil society; in the Occupation it was civil society that helped protect the Jews from the State." The rescue of Jews was among the first faint glimmerings of resistance to German rule in France. American Donald A. Lowrie, working with the YMCA in Vichy, said in September 1942: "...it must be noted that for the first time since the Armistice [June 1940], deep public feeling has united all the decent elements in France...this feeling gives each one something he can do, and the doing, i.e. aid to hunted Jews, involves resistance to the authorities at Vichy." The widening of World War II led to the winding down of international efforts to assist refugees. In April 1942, the U.S. embassy in France advised Americans to leave the country. In November 1942, the Germans occupied Vichy France and its semi-independence ended. In January 1943, the Germans and their French collaborators rounded up the few remaining Americans in the country and interned them in Baden-Baden, Germany. In 1944, the interned Americans were exchanged for interned Germans in the United States and were returned to the U.S. A few of the refugee organizations were able to continue their work with staff recruited locally or from neutral countries or from countries occupied by Germany.
Refugees and internment camps Historian Julian Jackson listed 31 refugee and internment camps established in Vichy France from 1940 to 1942. Some were small, holding only a few dozen people; others were large, with thousands of refugees and internees. People in the camps included "Communists and other dissidents, Jews, foreigners, gypsies, black-marketeers (from June 1941), abortionists (from February 1942), and prostitutes (from August 1943)." In November 1940, the camps contained 40,000 people, a number that increased to 50,000 in early 1941. Seventy percent of those in the camps were Jews. About 40,000 foreign (i.e., non-French) Jews were detained in 1940 and 1941 and placed in the camps. There was a constant flow in and out of the camps, especially beginning in August 1942 when deportations of Jews to German concentration camps started. In the years 1940 through 1942, more than 100,000 refugees of all nationalities, religions, and political persuasions left France, either legally with a visa to another country, most commonly the United States, or illegally by crossing the border into Spain or Switzerland. International humanitarian organizations Historian Christopher R. Browning said that 29 humanitarian organizations belonged to the Nimes Committee. Members of the Nimes Committee, chaired by the YMCA's Donald A. Lowrie, coordinated relief efforts for refugees and internees in the camps and secured the release of many internees from the camps. The Nimes Committee members included international humanitarian organizations, both Jewish and non-Jewish, and French organizations. The international humanitarian organizations operating in Vichy France included the following: American Federation of Labor (AFL) American Friends Service Committee (Quakers) (AFSC) American Joint Distribution Committee (Joint) (JDC) American Red Cross (ARC) Comité Inter-Mouvements Auprès des Evacués (CIMADE) Emergency Rescue Committee (ERC) Hebrew Immigrant Aid Society (HIAS, HICEM) International Committee of the Red Cross (ICRC) Mennonite Central Committee (MCC) Oeuvre de Secours aux Enfants (OSE) Organisation Réconstruction Travail (ORT) Unitarian Service Committee (USC) Swiss Red Cross Young Men's Christian Association (YMCA) Expatriate refugee workers Richard Allen. ARC. Leon (Dick) Ball. ERC. An American resident in France, he led groups of refugees across the Pyrenees to Spain. He disappeared, fate unknown. Hiram "Harry" Bingham IV was an American Vice Consul in Marseilles from 1939 to 1941. During the 13 months he was the visa officer he issued between 7,500 and 10,000 visas to the United States, the majority of them to refugees. He violated a Department of State directive that visa officers should investigate applicants before granting visas. Moreover, Bingham issued many visas to "non-famous" refugees rather than the cultural elite who were the more usual clients of American escape organizations. Bingham was transferred to Lisbon in the summer of 1941. Denied promotion, he resigned from the Foreign Service of the United States in 1946. Frank Bohn was the American Federation of Labor (AFL) representative in Marseilles from August 1940 until October 1940. His mission was financed by the Jewish Labor Committee. In Marseilles, Bohn and Varian Fry agreed that Bohn would smuggle labor leaders, mostly socialists and Jews from eastern European countries, out of France while Fry would focus on helping intellectuals.
Bohn's scheme to smuggle labor leaders out by ship failed and he left France after being warned by the U.S. Department of State that it did not approve of his activities. Gilberto Bosques Saldívar. Mexican Consulate General. Howard L. Brooks. USC. Miriam Davenport. ERC. Robert Dexter. USC. Marian Ebel. ERC. Mary Elmes. (1909-2003) AFSC. Charles Fawcett was the doorman for the ERC, supervising the long lines of visitors seeking help. Called "Shar-lee," he spoke no French. He was well-liked by the refugees, especially the women. Noel Field. USC. Lisa Fittko. ERC. Fittko and her husband Hans led refugees over the Pyrenees to Spain, a hazardous undertaking. Varian Fry. ERC. Fry was the leader of the ERC in Marseille. Mary Jayne Gold. ERC. Lois Gunden. (1915-2005) MCC. Albert O. Hirschman. ERC. Helga Holbek. AFSC. Joseph Hyman. JDC. Charles Joy. USC. Herbert Katzki. JDC. Gertrude Kershner. AFSC. Howard Kershner. AFSC. Donald A. Lowrie. YMCA. Helen Lowrie. YMCA, USC. Marjorie McClelland. AFSC. Roswell McClelland. AFSC. Lindsley Noble. AFSC. Clarence Pickett. AFSC. Alice Resch. AFSC. Andrée Saloman. OSE. Martha Sharp. USC. Waitstill Sharp. USC. Myles Standish was a Vice Consul in the Visa Section of the American Consulate General in Marseilles. Like Hiram Bingham IV he was generous in giving visas to refugees from Nazism. Transferred, as Bingham was, probably for violating State Department visa policy, he resigned from the Foreign Service in 1942. Tracy Strong, Jr. YMCA, European Student Relief Fund (ESRF). Vladimir Vochoc. Consulate of Czechoslovakia. Vochoc issued hundreds of Czech passports to non-Czech refugees to enable them to leave France. Joseph Weill. OSE. References Vichy France The Holocaust in France German occupation of France during World War II Refugees Forced migration Rescue of Jews during the Holocaust Jewish emigration from Nazi Germany Aid for Jewish refugees from Nazi Germany Humanitarian aid organizations Migration-related organizations
Refugee workers in Vichy France
[ "Biology" ]
2,508
[ "Rescue of Jews during the Holocaust", "Behavior", "Altruism" ]
78,416,162
https://en.wikipedia.org/wiki/Leszek%20S.%20Czarnecki
Leszek S. Czarnecki (born May 14, 1939) is a Polish-American electrical engineer and professor emeritus at Louisiana State University (LSU). He is known for his work on power theory in electrical circuits, particularly the development of the Currents' Physical Components (CPC) Power Theory. Early life and education Czarnecki was born in Poland and earned his Ph.D. in 1969 and later D.Sc. in 1984 from the Silesian University of Technology in Gliwice. His academic foundation laid the groundwork for his later contributions to electrical engineering. Academic career From 1984 to 1986, he was a Visiting Research Officer at the National Research Council of Canada. From 1987 to 1989, he served as Associate Professor of Electrical Engineering at the University of Zielona Góra, Poland. In 1989, he joined LSU's Department of Electrical and Computer Engineering. He became a full professor in 1999 and was named Distinguished Professor in 2005. In 2006, he was awarded the title of Professor of Technological Sciences by the President of Poland. Research contributions Czarnecki's research focuses on power theory and compensation methods in circuits with nonsinusoidal currents. He is the creator of the CPC Power Theory, which has significantly advanced understanding and practical applications in energy transfer in electrical systems. In 2023, his book, Powers and Compensation in Circuits with Nonsinusoidal Currents, was published by Oxford University Press, encapsulating decades of research. Awards and honors Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) (1996) Knight's Cross of the Order of Merit of the Republic of Poland (1999) Named among the top 2% of world faculty by Stanford University (2021) Public activity From 1981, when martial law was imposed in Poland and the Solidarity movement was banned, Czarnecki was involved in underground activity aimed at terminating the communist regime and restoring democracy. In 1998, Czarnecki initiated and supervised with his wife, Maria, the process of the adoption by the Louisiana Legislature of a resolution that urged the Senate of the United States to include Poland, the Czech Republic, and Hungary in NATO. This resolution motivated Louisiana's senators in the US Congress to vote in favor of that inclusion. For this activity, Czarnecki and his wife Maria were decorated by the President of Poland with the Knight's Cross of the Order of Merit of the Republic of Poland. Personal life Czarnecki is married to a mathematician, Maria, and has two sons, Jakub and Tomasz. He is also a mountaineer and scuba diver. He was on the team that accomplished the first traverse of the Rwenzori ridge in Africa in 1975 and made a ski traverse of Svalbard in 1977; he also climbed peaks including Lhotse in the Himalayas in 1979 and the Cordillera Huayhuash in the Andes in 1981, and made a solo climb of Denali in Alaska in 1999. Selected Publications Czarnecki, L. S. (2023). Powers and Compensation in Circuits with Nonsinusoidal Currents. Oxford University Press. Czarnecki, L. S. (1983). Additional discussion to Reactive power under nonsinusoidal conditions. IEEE Trans. on Power and Systems, Vol. PAS-102, No. 4. Czarnecki, L. S. (1984). Considerations on the reactive power in nonsinusoidal situations. IEEE Trans. Instr. Measurement, Vol. IM-34. Czarnecki, L. S. (1987). What is wrong with the Budeanu concept of reactive and distortion power and why it should be abandoned. IEEE Transactions on Instrumentation and Measurement, 36(3), 834-837. Czarnecki, L. S. (1988).
Orthogonal decomposition of the current in a three-phase non-linear asymmetrical circuit with nonsinusoidal voltage. IEEE Trans. IM., Vol. IM-37, No. 1. Czarnecki, L. S. (1989). Reactive and unbalanced currents compensation in three-phase circuits under nonsinusoidal conditions. IEEE Trans. Instr. Measur., Vol. IM-38, No. 3. Czarnecki, L. S., Swietlicki T. (1990). Powers in nonsinusoidal networks, their analysis, interpretation, and measurement. IEEE Trans. Instr. Measur., Vol. IM-39, No. 2. Czarnecki, L. S. (1997). Budeanu and Fryze: Two frameworks for interpreting power properties of circuits with nonsinusoidal voltages and currents. Archiv fur Elektrotechnik, (81), N. 2. Czarnecki, L. S. (1999). Energy flow and power phenomena in electrical circuits: illusions and reality. Archiv fur Elektrotechnik, (82), No. 4. Czarnecki, L. S. (2004). On some misinterpretations of the Instantaneous Reactive Power p-q Theory. IEEE Trans. on Power Electronics, Vol. 19, No.3. Czarnecki, L. S. (2009). Effect of supply voltage harmonics on IRP-based switching compensator control. IEEE Trans. on Power Electronics, Vol. 24, No. 2. Czarnecki, L. S., Haley P. M. (2015). Unbalanced power in four-wire systems and its reactive compensation. IEEE Trans. on Power Delivery, Vol. 30, No. 1, Czarnecki, L. S. (2017). Critical comments on the Conservative Power Theory (CPT). Przegląd Elektrotechniczny, R3, No. 1. Bhattarai P.D., Czarnecki, L. S. (2017). Currents’ Physical Components (CPC) of the Supply Current of Unbalanced LTI Loads at Asymmetrical and Nonsinusoidal Voltage. Przegląd Elektrotechniczny, R. 93, No. 9. References External links LSU Engineering News IEEE Xplore Author Details 1939 births Living people Electrical engineers
Leszek S. Czarnecki
[ "Engineering" ]
1,299
[ "Electrical engineering", "Electrical engineers" ]
78,417,136
https://en.wikipedia.org/wiki/Paper%20genocide
Paper genocide is the systemic removal of a group of people from historical records, such as censuses, which gives the impression that that group has disappeared or become extinct. A 2023 article published by Cultural Survival defines the term as "intentional destruction of documents and records related to a particular group of people, usually with the intent of erasing their histories and cultures", while a 2019 article in National Geographic characterizes the term thusly: "Paper genocide means that a people can be made to disappear on paper". The term is often used to refer to government policies regarding Native Americans in the United States and the indigenous peoples of the Caribbean, primarily the Taíno. According to Cultural Survival, paper genocide can lead to generational and historical trauma for the communities affected. Examples Taíno and indigenous peoples of the Caribbean A common example of a paper genocide is that of the Taíno, an indigenous people of the Caribbean. Following the first voyage of Christopher Columbus in 1492, the Taíno population began to decline significantly in the ensuing years, primarily due to virgin soil epidemics and the enslavement and harsh treatment of the Taíno by Spanish colonizers in such labor-intensive fields as gold mining and the cultivation of sugarcane. Estimates for the Taíno population on the island of Hispaniola range from 60,000 to 8 million in 1492, with contemporary writer Bartolomé de las Casas claiming a population of around 3 million. However, by 1542, this number had declined to around 200. According to a 2019 article in National Geographic, shortly after a 1565 census that showed only 200 "Indians" living on Hispaniola, the Taíno were declared extinct. Similarly, on the island of Puerto Rico, where there were an estimated 1 million indigenous people in 1493, a 1787 census recorded only 2,300 non-mixed-race indigenous people. In the next census conducted on Puerto Rico in 1802, no indigenous people were recorded, and according to National Geographic, historical records after this point indicate that no indigenous people remained in the Caribbean. Despite the apparent elimination of indigenous peoples from the Caribbean, several historians note that a paper genocide may have obscured the continued existence of groups such as the Taíno. A 2022 article in the Brown Political Review notes that, in Puerto Rico, Spanish Catholic priests, who were responsible for birth registry, may have been inclined to classify people with some Taíno ancestry as "mestizo" or "mulatto", in part to diminish the representation of Taíno people on the island. A 2019 article in National Geographic also notes possible undercounting of Taíno people due to the classification of people born to Spanish fathers and Taíno mothers. That same article also mentions that, following the abolition of legal slavery of indigenous peoples of the Americas by the Spanish monarchy in 1533, many slaveholders in the Caribbean may have been inclined to simply reclassify their enslaved people as African rather than grant them their manumission. Additionally, according to the magazine, many censuses in Latin America did not provide an option for indigenous peoples, instead requiring respondents to identify as either "Hispanic", "white", "black", or mixed-race. In Puerto Rico, this practice continued after the United States gained control of the island.
In the early 1990s, descendants of indigenous peoples of the Caribbean began a revival of indigenous cultures and language, including participating in powwows and other festivals, and openly refuted the historical narrative that indigenous peoples in the region had been eliminated. In the 2010s, genetic research, including the construction of the genome of a Taíno person who had lived between the 8th and 10th century, found that a significant portion of the current Caribbean population has traces of DNA from indigenous peoples. In 2016, 164 Puerto Ricans were tested and all were found to have traces matching Taíno DNA. That same year, a National Geographic study indicated that 61 percent of Puerto Ricans have indigenous mitochondrial DNA. Jorge Estevez, a Taíno activist, said of the results, "It shows that the true story is one of assimilation, certainly, but not total extinction". In 2020, after options for identifying as "Indian or indigenous" were added to the United States census for Puerto Rico, over 92,000 Puerto Ricans identified as such. Additionally, as of the 2020s, several Taíno advocacy groups exist, such as the United Confederation of Taíno People, the Taíno Jatibonícu Tribe of Boriken, and the Taíno Nation of the Antilles. Native Americans in the United States In a 2020 blog for the Law School Survey of Student Engagement at Indiana University, Vickie Sutton, a law professor at Texas Tech University and member of the Lumbee Tribe of North Carolina, described the "policies of first Great Britain and then the United States against the indigenous population in America" as "genocidal", both physically and in paper form. Regarding the latter, Sutton states that both nations "[eliminated] references to Native Americans in property records, census records, birth and death records in a paper genocidal policy". Difficulties in gaining federal recognition The paper genocide of Native American tribes can have an impact on gaining federal recognition, an important aspect of tribal sovereignty in the United States. For example, in the state of Rhode Island, the Narragansett people spent several centuries attempting to gain federal recognition, which was granted in 1983. Starting in the late 18th century, government officials in the state began to record Narragansett people as "black", "colored", or "negro" on official documents, a practice that was upheld in the 1793 Rhode Island Supreme Court case Aldrich v. Hammer. This was part of an effort by the state to do away with indigenous identity and force cultural assimilation onto the Narragansett. In a similar case, the Mashpee Wampanoag Tribe of Massachusetts began seeking federal recognition in the 1970s, but their efforts were hampered by inconsistent data from the United States Census Bureau. It was not until 2007 that the tribe became federally recognized. In 2009, the Supreme Court of the United States ruled in Carcieri v. Salazar that the federal government could only hold land in trust for tribes that were federally recognized in 1934, when the Indian Reorganization Act was passed. Critics have stated that the ruling could have negative consequences on tribes that have gained recognition since then, in some cases because of a lack of adequate historical documentation.
Blood quantum laws Blood quantum is a system of measuring Native American ancestry based on the ancestry of an individual's parents, such that, for example, a child who is the offspring of a father with a Native American blood quantum level of one-fourth and a mother whose level is one-half would have a blood quantum level of three-eighths. The system is used by some tribes to determine eligibility for membership, and related concepts have appeared in several treaties between the federal government and tribes, such as in the 1825 Osage Treaty with the Osage Nation. The system was further codified by the federal government in acts such as the 1887 Dawes Act and the 1934 Indian Reorganization Act. Blood quantum levels for Native Americans can be recorded by the Bureau of Indian Affairs, which issues Certificates of Degree of Indian Blood to individuals; these certificates are used in tribal recognition. Regarding the concept, Jill Doerfler, the head of the University of Michigan's American Indian and Indigenous Studies Department, said in 2021, "What blood quantum does is racialize American Indian identity. It is an outside concept used to disenfranchise Native people and tribes from their legal and political status. And it's the best way to eliminate ongoing treaty obligations". In an article for Voice of America's website entitled "Some Native Americans Fear Blood Quantum is Formula for 'Paper Genocide'", Doerfler further elaborated that the system could be used by the federal government to deprive tribes of land and recognition in what she termed a "paper genocide". United States censuses Starting with the first federal census in 1790, indigenous people were not often recorded. Between 1790 and 1850, Native Americans were largely excluded from the survey, with a major exception occurring in the 1850 census, when Puebloans in the New Mexico Territory were recorded as "Copper". The 1860 census was the first in which Native Americans living alongside white people and free people of color in the general population were recorded. Even then, the conductors of the census were instructed to only record "families of Indians who have renounced tribal rule, and who under state or territory laws exercise the rights of citizens". With the 1880 census, and continuing over the next several censuses, the Census Bureau introduced a rubric for recording the racial identification of Native American respondents, primarily utilizing the blood quantum system, but also allowing some discretion on the part of the surveyor with regard to other factors, such as how the individual is perceived in their community. The 1890 census was the first to record Native Americans living both among the general population and in tribal communities, but due to a fire that destroyed many of the documents, the 1900 census is typically considered the oldest one to give an inclusive count of the country's Native American population. The passage of the Indian Citizenship Act in 1924 affected how indigenous peoples from Latin America who were living in the Southwestern United States were recorded. As the Census Bureau was concerned that laborers from Mexico would attempt to portray themselves as Native Americans, many indigenous people from Latin America were recorded only as Hispanic or Latino. Starting with the 1960 census, the Census Bureau allowed individuals to self-report their race, leading many mixed-race individuals who may previously have been recorded by surveyors as another race to report themselves as Native American.
Additionally, starting with the 2000 census, respondents could record more than one race, again leading to an increase in mixed-race individuals recording at least one of their races as Native American. Continued undercounting A 2019 article published by Rewire News Group listed "American Indians and Alaska Natives" as the most undercounted group in the United States. According to a report from the Census Bureau, Native Americans living on Indian reservations were undercounted by 12.2 percent in the 1990 census. This figure declined to 4.9 percent in the 2010 census. According to the bureau, approximately 26 percent of Native Americans in the United States live in "hard-to-count" census tracts, and American Indians and Alaska Natives are categorized by the bureau as "hard-to-count populations". Judy Shapiro, a Native American civil rights attorney, said census data is often used by the federal government for "gatekeeping" federal recognition: "Through the federal recognition process, they determine who is Native, who continues to exist, and who they are responsible for [maintaining trusts and treaties]". Racial Integrity Act of 1924 In 1924, the government of Virginia enacted the Racial Integrity Act. The law both prohibited interracial marriages and codified strict racial distinctions, with all people in the state being recorded as either "white" or "colored". This meant that all Native Americans in Virginia were officially categorized alongside black people as "colored", and instances of "Indian" being used on birth certificates issued prior to 1924 were updated to read "colored" instead. The law was in effect until 1967, when the United States Supreme Court struck it down as unconstitutional in its landmark case of Loving v. Virginia. According to the National Park Service, the law's "strict definitions of whiteness and blackness led to a mass erasure of Virginia Indian identity. As a result ... Virginia Indians often have difficulty in proving an unbroken lineage, one of the many requirements to becoming a federally recognized Tribe". Chief Stephen Adkins of the Chickahominy Tribe referred to the act as "paper genocide", a sentiment echoed by leaders of other tribes in Virginia, such as the Monacan Indian Nation. In 2018, the United States Congress passed legislation extending federal recognition to six tribes in Virginia whose records had been affected by the law: the Chickahominy, the Eastern Chickahominy, the Monacan Indian Nation, the Nansemond Indian Nation, the Rappahannock Tribe, and the Upper Mattaponi Tribe. In March 2024, on the centennial of the act's passage, leaders from several tribes in Virginia hosted a panel at the Library of Virginia to discuss the act and its continued legacy. Exclusion from statistical studies In 2020, Sutton wrote about the exclusion of Native Americans from surveys and statistical studies as a form of paper genocide. As examples of Native American exclusion, Sutton pointed to a 2019 incident in which a spokesperson for the American Institute of Architects announced during a meeting of the American Indian Council of Architects and Engineers that they would no longer collect data on Native American architects due to it being such a small group, as well as a decision by The Princeton Review to cease collection of data regarding Native American university enrollment.
Additionally, Sutton highlighted a report issued by the NALP Foundation and the Center for Women in Law at the University of Texas School of Law titled "Women of Color: A Study of Law School Experiences" that did not include Native Americans as a distinct group. While the researchers said in the introduction that the paper "analyzes the experiences reported by women of color by Asian/Pacific Islander, Black/African-American, and Hispanic/Latina students", Native American women were simply included in a catch-all "women of color" category. Sutton also wrote about the responsibility of researchers in this regard. Chukchansi tribe disenrollment and accusations of "paper genocide" In 2024, television station KGPE of Fresno, California, reported that the Picayune Rancheria of Chukchansi Indians, a tribe that owns and operates the Chukchansi Gold Resort & Casino, had disenrolled several members of the tribe after requesting proof of heritage and allotment papers. Several of those who had been disenrolled said that the process was unjust, with Claudia Gonzales, a former member of the tribal council, saying, "If we knew this was coming, then we would have taken precautions and measures to protect the general membership from those few that decided they want to try to create paper genocide". See also Cultural genocide Genocide of indigenous peoples Native American name controversy Racism against Native Americans in the United States Stereotypes of Indigenous peoples of Canada and the United States References Sources Further reading Discrimination Genocide Indigenous peoples of the Americas Racism
Paper genocide
[ "Biology" ]
2,943
[ "Behavior", "Aggression", "Discrimination" ]
78,417,826
https://en.wikipedia.org/wiki/Arcadium%20Lithium
Arcadium Lithium plc is a specialty chemicals manufacturing company focused on lithium production. It was formed in January 2024 from the merger of Livent, an American specialty chemicals company, and Allkem, an Australian lithium mining company. The new company, Arcadium Lithium, became the world's third-largest lithium producer. History Arcadium Lithium plc was created in January 2024 with the merger of Livent and Allkem. Livent's origins traced back to 1944, when the Lithium Corporation of America was created. In May 1985, it was acquired by the FMC Corporation. Livent was created in 2018, when FMC spun off its lithium division. Allkem was originally known as Orocobre Limited, and was listed on the Australian Securities Exchange in December 2007. After merging with Galaxy Resources in 2021, Orocobre was rebranded as Allkem. In September 2024, Arcadium announced it would place its Mt Cattlin mine into care and maintenance by mid-2025 due to low spodumene prices. In August 2024, Arcadium acquired Li-Metal Corp's lithium metal production business for US$11 million, gaining its intellectual property on refining lithium carbonate and a pilot lithium refinery. On October 9, 2024, Rio Tinto agreed to buy Arcadium for US$6.7 billion, in an all-cash deal at US$5.85 per Arcadium Lithium share. The deal was unanimously approved by the boards of both companies and is expected to close in mid-2025. References External links Companies listed on the New York Stock Exchange Companies listed on the Australian Securities Exchange Lithium Chemical companies Announced mergers and acquisitions American companies established in 2024
Arcadium Lithium
[ "Chemistry" ]
352
[ "Chemical companies" ]
78,417,866
https://en.wikipedia.org/wiki/Chloronitramide%20anion
The chloronitramide anion, also known as chloro(nitro)azanide, is a chemical byproduct of the disinfectant chloramine, identified in 2024. It is present in the tap water of about 113 million people in the United States of America in varying concentrations. Its toxicity has not yet been determined, although it may be removable with an activated carbon filter. The chloronitramide anion was first observed and determined to be a degradation byproduct of chloramine in the early 1980s. Its molecular formula and structure were disclosed in a paper published in November 2024. Research Early research The chloronitramide anion was first detected as a UV absorbance interference during monitoring of chloramine and dichloramine in 1981. It was then shown to form during the decomposition of both chemicals. It was shown to likely be an anion in 1990. In the 1980s and 1990s methods of producing it in high concentrations were identified, and the molecule was shown through destructive analysis to contain both nitrogen and chlorine. According to Julian Fairey, research on the compound slowed down in the mid-1990s after attempts to identify it were unsuccessful. Identification of structure The structure of the molecule was finally identified in 2024 using a combination of techniques, first identifying the molecular formula, then creating a candidate structure, then confirming it. Ion chromatography, a method of separating ions and ionizable polar molecules, was used to separate the chloronitramide anion from the many salts present in water samples, which otherwise made it difficult to use mass spectrometry; the samples' salinity was higher than that of seawater. Mass spectrometry was sufficient to determine the molecular mass of the ion, but the ion was too small for structure determination from the fragmentation pattern. The ion was found to have the molecular formula ClN2O2− (containing two oxygen atoms, two nitrogen atoms, one chlorine atom, and a single negative charge) by electrospray ionisation mass spectrometry. A candidate structure was confirmed by 15N NMR spectroscopy and infrared spectroscopy. Future research Research investigating the toxicity of the chloronitramide anion, as well as the reasons for its formation in high or low concentration in different places, is expected. Formation The identifying paper proposes that the chloronitramide anion is formed through the reaction of chloramine (or dichloramine, which forms in chloramine solution) with NO2+, one of its degradation products. The formation of NO2+ begins when dichloramine (NHCl2) is hydrolyzed to form nitroxyl (HNO), which then reacts with dissolved oxygen (O2) to form unstable peroxynitrous acid (ONOOH). NO2+ is one of the several reactive nitrogen species formed when peroxynitrous acid decomposes. The chloronitramide formed in this way then deprotonates to give the corresponding anion. References Inorganic compounds Water treatment Drinking water Nitroamines Chlorides Anions
Chloronitramide anion
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
647
[ "Matter", "Chlorides", "Inorganic compounds", "Anions", "Water treatment", "Salts", "Water pollution", "Environmental engineering", "Water technology", "Ions" ]
78,417,895
https://en.wikipedia.org/wiki/Theory%20of%20functional%20connections
The Theory of Functional Connections (TFC) is a mathematical framework designed for functional interpolation. It introduces a method to derive a functional— a function that operates on another function—capable of transforming constrained optimization problems into equivalent unconstrained problems. This transformation enables the application of TFC to various mathematical challenges, including the solution of differential equations. Functional interpolation, in this context, refers to constructing functionals that always satisfy given constraints, regardless of the expression of the internal (free) function. From interpolation to functional interpolation To provide a general context for the TFC, consider a generic interpolation problem involving constraints, such as a differential equation subject to a boundary value problem (BVP). Regardless of the differential equation, these constraints may be consistent or inconsistent. For instance, in a problem over the domain , the constraints and are inconsistent, as they yield different values at the shared point . If the constraints are consistent, a function interpolating these constraints can be constructed by selecting linearly independent support functions, such as monomials, . The chosen set of support functions may or may not be consistent with the given constraints. For instance, the constraints and are inconsistent with the support functions, , as can be easily verified. If the support functions are consistent with the constraints, the interpolation problem can be solved, yielding an interpolant--a function that satisfies all constraints. Choosing a different set of support functions would result in a different interpolant. When an interpolation problem is solved and an initial interpolant is determined, all possible interpolants can, in principle, be generated by performing the interpolation process with every distinct set of linearly independent support functions consistent with the constraints. However, this method is impractical, as the number of possible sets of support functions is infinite. This challenge was addressed through the development of the TFC, an analytical framework for performing functional interpolation introduced by Daniele Mortari at Texas A&M University. The approach involves constructing a functional that satisfies the given constraints for any arbitrary expression of , referred to as the free function. This functional, known as the constrained functional, provides a complete representation of all possible interpolants. By varying , it is possible to generate the entire set of interpolants, including those that are discontinuous or partially defined. Function interpolation produces a single interpolating function, while functional interpolation generates a family of interpolating functions represented through a functional. This functional defines the subspace of functions that inherently satisfy the given constraints, effectively reducing the solution space to the region where solutions to the constrained optimization problem are located. By employing these functionals, constrained optimization problems can be reformulated as unconstrained problems. This reformulation allows for simpler and more efficient solution methods, often improving accuracy, robustness, and reliability. Within this context, the Theory of Functional Connections (TFC) provides a systematic framework for transforming constrained problems into unconstrained ones, thereby streamlining the solution process. 
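To make the idea concrete, here is a minimal Python sketch of a constrained functional (an illustration written for this article rather than taken from the TFC literature; the two point constraints y(0) = 1 and y(1) = 3 and the sample free functions are arbitrary choices). Whatever free function g is supplied, the returned function satisfies both constraints exactly, which is the defining property described above:

```python
import math

def constrained_functional(g, y0=1.0, y1=3.0):
    """Build y(x) = g(x) + (1 - x)*(y0 - g(0)) + x*(y1 - g(1)).

    The two correction terms force y(0) = y0 and y(1) = y1 for any
    free function g, so the returned y is a constrained functional
    for these two point constraints.
    """
    def y(x):
        return g(x) + (1.0 - x) * (y0 - g(0.0)) + x * (y1 - g(1.0))
    return y

# Each choice of free function selects a different interpolant from
# the family of all functions meeting the constraints.
for g in (math.sin, lambda x: x**3 - 5.0, lambda x: 0.0):
    y = constrained_functional(g)
    assert abs(y(0.0) - 1.0) < 1e-12
    assert abs(y(1.0) - 3.0) < 1e-12
```

Here the factors 1 − x and x act as switching functions for the two constraints, and the bracketed differences play the role of projection functionals; both notions are defined formally below.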
TFC addresses univariate constraints involving points, derivatives, integrals, and any linear combination of these. The theory is also extended to accommodate infinite and multivariate constraints and applied to solving ordinary, partial, and integro-differential equations. The consistency problem, which pertains to constraints, interpolation, and functional interpolation, is comprehensively addressed in the TFC literature. This includes the consistency challenges associated with boundary conditions that involve shear and mixed derivatives. The univariate version of TFC can be expressed in one of the following two forms: f(x) = g(x) + Σₖ₌₁ⁿ ηₖ sₖ(x) or f(x) = g(x) + Σₖ₌₁ⁿ φₖ(x) ρₖ(x, g(x)), where n represents the number of linear constraints, g(x) is the free function, and the sₖ(x) are user-defined, linearly independent support functions. The terms ηₖ are the coefficient functionals, the φₖ(x) are switching functions (which take a value of 1 when evaluated at their respective constraint and 0 at other constraints), and the ρₖ(x, g(x)) are projection functionals that express the constraints in terms of the free function. A rational example To show how TFC generalizes interpolation, consider the constraints, and . An interpolating function satisfying these constraints is, as can be easily verified. Because of this interpolation property, the derivative of the function, vanishes at and , for any function, . Therefore, by adding to , a functional is obtained that still satisfies the constraints, no matter what the free function is. Due to this property, this functional is referred to as the constrained functional. The key requirement for the functional to work as intended is that the terms involving the free function at the constraint points are defined. Once this condition is met, the functional is free to take on any arbitrary values beyond the specified constraints, thanks to the infinite flexibility provided by the free function. Importantly, this flexibility is not limited to the specific constraints chosen in this example. Instead, it applies universally to any set of constraints. This universality illustrates how TFC performs functional interpolation: it constructs a function that satisfies the given constraints while simultaneously allowing complete freedom in behavior elsewhere through the choice of the free function. In essence, this example demonstrates that the constrained functional captures all possible functions that meet the given constraints, showcasing the power and generality of TFC in handling a wide variety of interpolation problems. Applications of TFC TFC has been extended and employed in various applications, including its use in shear-type and mixed derivative problems, the analysis of fractional operators, the determination of geodesics for BVP in curved spaces, and in continuation methods. Additionally, TFC has been applied to indirect optimal control, the modeling of stiff chemical kinetics, and the study of epidemiological dynamics. TFC extends into astrodynamics, where Lambert's problem is efficiently solved. It has also demonstrated potential in nonlinear programming, structural mechanics, and radiative transfer, among other areas. An efficient, free Python TFC toolbox is available at https://github.com/leakec/tfc. Of particular note is the application of TFC in neural networks, where it has shown exceptional efficiency, especially in addressing high-dimensional problems and in enhancing the performance of physics-informed neural networks by effectively eliminating constraints from the optimization process, a challenge that traditional neural networks often struggle to address.
This capability significantly improves computational efficiency and accuracy, enabling the resolution of complex problems with greater ease, as demonstrated by researchers at the University of Arizona. TFC has been employed with physics-informed neural networks and symbolic regression techniques for physics discovery of dynamical systems. Difference with spectral methods At first glance, TFC and spectral methods may appear similar in their approach to solving constrained optimization problems. However, there are two fundamental distinctions between them: Representation of solutions: Spectral methods represent the solution as a sum of basis functions, whereas TFC represents the free function as a sum of basis functions. This distinction allows TFC to analytically satisfy the constraints, while spectral methods treat constraints as additional data, approximating them with an accuracy dependent on the residuals. Computational approach in BVP: In linear BVPs, the computational strategies of the two methods differ significantly. Spectral methods typically employ iterative techniques, such as the shooting method, to reformulate the BVP as an initial value problem, which is simpler to solve. Conversely, TFC directly addresses these problems through linear least-squares techniques, avoiding the need for iterative procedures. Both methods can perform optimization using either the Galerkin method, which ensures the residual vector is orthogonal to the chosen basis functions, or the Collocation method, which minimizes the norm of the residual vector. Difference with Lagrange multipliers technique The Lagrange multipliers method is a widely used approach for imposing constraints in an optimization problem. This technique introduces additional variables, known as multipliers, which must be computed to enforce the constraints. While the computation of these multipliers is straightforward in some cases, it can be challenging or even practically infeasible in others, thereby adding significant complexity to the problem. In contrast, TFC does not add new variables and enables the derivation of constrained functionals without encountering insurmountable difficulties. However, it is important to note that the Lagrange multiplier method has the advantage of handling inequality constraints, a capability that TFC currently lacks. A notable limitation of both approaches is their propensity to produce solutions that correspond to local optima rather than guaranteed global optima, particularly in the context of non-convex problems. Consequently, supplementary verification procedures or alternative methods may be required to assess and confirm the quality and global validity of the obtained solution. In summary, while TFC does not entirely replace the Lagrange multipliers method, it serves as a powerful alternative in cases where the computation of multipliers becomes excessively complex or infeasible, provided the constraints are limited to equalities. References Functions and mappings
Theory of functional connections
[ "Mathematics" ]
1,812
[ "Mathematical analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
78,418,408
https://en.wikipedia.org/wiki/Hopcroft%27s%20problem
In computational geometry, Hopcroft's problem is the problem of testing, for a given system of points and lines in the Euclidean plane, whether at least one of the points lies on at least one of the lines. More generally, one may ask for the number of point–line incidences. Both versions of the problem can be solved in time O(n^{4/3}), where n is the total number of points and lines. This time bound matches the O(n^{4/3}) bound on the total number of point–line incidences given by the Szemerédi–Trotter theorem. Hopcroft's problem is named after John Hopcroft, who posed it in the early 1980s. Its computational complexity is closely connected to the complexity of several other problems in computational geometry, including that of three-dimensional Euclidean minimum spanning trees. Algorithms One way of solving the problem involves a geometric divide-and-conquer algorithm. For a given system of points and lines, it is possible to use the theory of epsilon-nets to subdivide the plane, for a given parameter r, into triangular subproblems, each crossed by a 1/r fraction of the lines and each containing a 1/r^2 fraction of the points. Alternatively, applying the same technique to the projective dual system of dual lines and dual points, one can obtain subproblems each involving a 1/r^2 fraction of the input lines and a 1/r fraction of the points. Doing this once each way, for a suitable choice of r, would reduce the problem to subproblems of constant size, which can be solved directly. This idea (with a more careful choice of the parameter r) leads to a time bound that is O(n^{4/3}) up to an extra logarithmic factor. Here, the extra logarithmic factor comes from the overhead of assigning points and lines to the subproblems that are generated in this way. The same two-step subdivision process, with a choice of r that is smaller by a logarithmic factor, can reduce the given problem to subproblems whose size is a polylogarithmic function of n, in time O(n^{4/3}). Repeating this process recursively until the input is reduced to subproblems of constant size would lead to a time bound of the form O(n^{4/3} 2^{O(log* n)}). Here log* denotes the iterated logarithm. In a 2024 paper, Chan and Zheng showed that there exist algebraic decision trees for Hopcroft's problem whose depth is O(n^{4/3}). These cannot be used directly to solve Hopcroft's problem, because they have exponential size and cannot be constructed efficiently; however, they can be used in combination with the divide-and-conquer methods. Following a suggestion credited to David Eppstein by Jiří Matoušek, Chan and Zheng describe an algorithm that performs a constant number of levels of the recursive algorithm, reducing the problem to many subproblems whose size is an iterated logarithm of the original input size. This is small enough for an optimal decision tree to be constructed by a brute-force search, and then used to solve each subproblem. The result is an algorithm for Hopcroft's problem with total time O(n^{4/3}). In a 2024 preprint, Andrejevs, Belovs, and Vihrovs announced a quantum algorithm for Hopcroft's problem whose running time, up to logarithmic factors, is substantially below the best known time bound for a classical algorithm. Lower bounds A natural limitation on algorithms for Hopcroft's problem is the number of point–line incidences, which could be as large as Θ(n^{4/3}) by the Szemerédi–Trotter theorem. This would also provide a lower bound on algorithms for listing all point–line incidences, but not for detecting whether there is an incidence or counting the incidences.
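The counting version just mentioned has a straightforward quadratic-time reference implementation, sketched below in Python. This is an illustration only (the (a, b, c) coefficient representation of lines is an assumed input convention, and the sketch bears no relation to the subquadratic algorithms above); exact rational arithmetic keeps the on-line test free of floating-point error:

```python
from fractions import Fraction

def count_incidences(points, lines):
    """Count point-line incidences by exhaustive checking.

    points: iterable of (x, y) pairs.
    lines:  iterable of (a, b, c) triples for the line a*x + b*y + c = 0.

    Runs in time proportional to len(points) * len(lines), far above
    the roughly n^{4/3} bounds discussed above, but it is useful as a
    correctness baseline for faster implementations.
    """
    return sum(
        1
        for (x, y) in points
        for (a, b, c) in lines
        if Fraction(a) * x + Fraction(b) * y + Fraction(c) == 0
    )

# Example: (0, 0) and (1, 1) both lie on y = x, and (1, 1) also lies
# on x + y = 2, giving three incidences in total.
assert count_incidences([(0, 0), (1, 1)], [(1, -1, 0), (1, 1, -2)]) == 3
```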
The divide-and-conquer algorithms for Hopcroft's problem operate purely by subdividing the plane in which the input points and lines are given and its dual projective plane. Jeff Erickson proved a lower bound, according to which algorithms that operate in this way must take time Ω(n^{4/3}). However, methods based on algebraic decision trees do not fit this model, and Erickson's lower bound does not apply to algebraic decision trees. If there exist algebraic decision trees for the problem with depth o(n^{4/3}), they could be used in the same way to solve Hopcroft's problem in time o(n^{4/3}). References Computational geometry
Hopcroft's problem
[ "Mathematics" ]
861
[ "Computational geometry", "Computational mathematics" ]
78,418,521
https://en.wikipedia.org/wiki/Snow%20bleaching
Snow bleaching is a technique used in the traditional Japanese textile industry to bleach fabric using the ozone that evaporates from snow. The technique is used to bleach Echigo-jofu, a traditional fabric used for kimono. The method is based on the fact that ozone is released when snow evaporates under sunlight. References Chemical processes
Snow bleaching
[ "Chemistry" ]
74
[ "Chemical process engineering", "Chemical processes", "nan" ]
78,418,688
https://en.wikipedia.org/wiki/CPC%20theory
Currents' Physical Components (CPC) Theory is an advanced power theory in electrical engineering that provides a comprehensive framework for analyzing and compensating electrical systems with non-sinusoidal voltages and currents. Developed by Professor Leszek S. Czarnecki in 1983, CPC theory addresses the limitations of traditional power theories in handling modern electrical systems characterized by harmonic distortion and unbalanced loads. Background Traditional power theories, such as those proposed by Budeanu and Fryze, were primarily designed for systems with sinusoidal waveforms. However, the increasing prevalence of non-linear loads and power electronic devices has introduced significant harmonic distortions, rendering these classical theories inadequate for accurate power analysis and compensation. Development In response to the evolving complexities of electrical systems, Leszek S. Czarnecki introduced the CPC theory in 1983. This theory decomposes load currents into distinct physical components, each corresponding to specific power phenomena within the system. By identifying and analyzing these components, CPC theory offers a more precise understanding of power flow and facilitates effective compensation strategies. Key concepts CPC theory decomposes the load current into the following components: Active Current: Corresponds to the active power consumed by the load. Reactive Current: Associated with the reactive power due to energy storage elements like inductors and capacitors. Harmonic Current: Represents the distortion caused by harmonics in the system. Unbalanced Current: Pertains to unbalanced loads in three-phase systems. By analyzing these components, engineers can design targeted compensation methods to improve power quality and system efficiency. Applications CPC theory has been applied in various areas, including: Design of reactive power compensators Power analysis Harmonic filtering in power systems. Analysis of unbalanced three-phase systems. Development of power quality standards and measurement techniques. References External links Leszek S. Czarnecki's Personal Website Electrical engineering
CPC theory
[ "Engineering" ]
391
[ "Electrical engineering" ]
78,420,204
https://en.wikipedia.org/wiki/List%20of%20mobile%20virtual%20network%20operators%20in%20South%20Korea
This is a list of mobile virtual network operators (MVNOs) in South Korea, which lease wireless telephone and data spectrum from the major mobile network operators (MNOs) SK Telecom (SKT), KT Corporation, and LG U+, and resell it to customers. Active operators References Mobile virtual network operators Telecommunications lists Mobile phone companies Mobile phone companies of South Korea
List of mobile virtual network operators in South Korea
[ "Technology" ]
78
[ "Mobile phone companies", "Mobile technology companies" ]
78,421,468
https://en.wikipedia.org/wiki/Corsehill%20%28stone%29
Corsehill stone is a type of building stone, extracted from Corsehill Quarry in Annandale, Dumfries and Galloway, Scotland. It is a red sandstone of Triassic age, used extensively for buildings in the 19th and 20th centuries. Quarry On November 8, 1993, the United States Senate passed a resolution calling for the construction of a memorial to honour the victims of the Lockerbie bombing. Blocks of red sandstone from the Corsehill Quarry were used to build the Lockerbie bombing cairn in Arlington National Cemetery. References Building materials
Corsehill (stone)
[ "Physics", "Engineering" ]
112
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
78,421,855
https://en.wikipedia.org/wiki/Motorola%20Edge%2050%20Fusion
The Motorola Edge 50 Fusion is an Android smartphone developed by Motorola Mobility, a subsidiary of Lenovo. The phone is part of the Motorola Edge series. Specifications Hardware The Edge 50 Fusion uses the Snapdragon 7s Gen 2 chipset with the Adreno 710 GPU. It has 12 GB of RAM and a microSD card slot for expandable storage. It has a 6.67-inch pOLED endless edge display with a full HD+ resolution featuring an optical (under-screen) fingerprint scanner. It has a pixel density of 395 ppi and supports a refresh rate of 144 Hz with 10-bit colour depth, 100% DCI-P3 wide colour gamut, and a peak brightness of 1,600 nits, with a sustained high-brightness mode of 1,200 nits. The battery capacity is 5,000 mAh, coupled with 68 W TurboPower fast-charging support. The camera setup consists of a main camera with a 50 MP Sony LYT-700C sensor with an f/1.8 aperture and 1.0 µm pixel size, while the secondary camera is a 13 MP ultra-wide lens with an f/2.2 aperture and a 120-degree field of view. The front camera is a 32 MP sensor. It features a stereo speaker system, which supports Dolby Atmos surround sound. Software The Edge 50 Fusion launched on a near-stock version of Android 14 with Motorola's Hello UI skin, minimal bloatware, and some pre-installed apps that cannot be uninstalled. The company also provides 3 years of Android updates and 4 years of security updates for the phone. References Motorola smartphones
Motorola Edge 50 Fusion
[ "Technology" ]
337
[ "Mobile technology stubs", "Mobile phone stubs" ]
78,422,234
https://en.wikipedia.org/wiki/S5%202007%2B777
S5 2007+777 is a classical BL Lacertae object located in the constellation of Draco. This object has a redshift (z) of 0.342 and was first discovered in 1981 as a flat-spectrum astronomical radio source. It has characteristics of different Fanaroff-Riley classes on both sides of its active nucleus, making it a rare example of a hybrid morphology radio source (HYMOR). It has an estimated V magnitude of 16.5. Description S5 2007+777 is classified as a blazar showing variability across the electromagnetic spectrum, with amplitudes rising steadily with frequency. It is also an Intraday Variable (IDV) source, exhibiting variations in total intensity as well as in polarized intensity on time scales ranging from 2 to 6 days at centimeter (cm) wavelengths. In dereddened B and I-band light curves during observations conducted in 2001, S5 2007+777 shows a smaller amplitude variation of 10%. Subsequent observations conducted in 2002 and 2004 show the object having minimum-to-maximum variations of order 30–40% on 2–4 day time scales. Although the source is mostly in a quiescent state, one outburst was detected between 1991 and 1992, with its peak flux reaching 3.69 janskys at 14.5 GHz. A gamma-ray flare was detected in February 2016 during an observation from the Foligno Observatory via a 30 cm telescope. According to radio imaging of S5 2007+777 made by the Very Long Baseline Array, a one-sided core-jet structure is found, with one of the components exhibiting proper motion and greater flux density. Imaging by the Very Large Array and very-long-baseline interferometry shows the object as a core-dominated source instead, consisting of a bright radio lobe on the eastern side and a long jet on the western side of the nucleus, which terminates without a clear hot spot upon reaching 10 arcseconds from the nucleus. This jet is known to show superluminal motion, being aligned 24° to the line of sight, with its 4.9 GHz luminosity calculated to be 10³² erg s−1 Hz−1. The radio emission of the jet in S5 2007+777 shows several unique radio knots, with the brightest one located midstream. The jet itself, imaged at 1.49 GHz, has an extended linear structure which then bends west at a changed position angle of 20°. An extended X-ray jet was also found by Chandra on kiloparsec scales, making S5 2007+777 one of only four BL Lacertae objects known to have this feature. References External links S5 2007+777 on SIMBAD S5 2007+777 on NASA/IPAC Database BL Lacertae objects Draco (constellation) Blazars Quasars Active galaxies Astronomical objects discovered in 1981
S5 2007+777
[ "Astronomy" ]
597
[ "Constellations", "Draco (constellation)" ]
78,422,700
https://en.wikipedia.org/wiki/Quinmerac
Quinmerac is a chemical herbicide first manufactured by BASF in 1993. Its formula is C11H8ClNO2, and it is a quinolinemonocarboxylic acid that includes chlorine and methyl groups as substituents. Use Quinmerac is used as a herbicide to control various weeds, such as chickweed, that affect cereals, rape, and sugar beets. In a 2015 survey of herbicides, quinmerac was rated as having the sixth-highest market share among the most popular herbicides used on sugar beet crops, with a 7.6% share. Regulation Quinmerac was approved by the European Commission in 2010 to be added to the list of Authorised Plant Protection Products. See also HRAC classification References Herbicides Quinolines Carboxylic acids Chloroarenes
Quinmerac
[ "Chemistry", "Biology" ]
174
[ "Herbicides", "Carboxylic acids", "Biocides", "Functional groups" ]
78,422,706
https://en.wikipedia.org/wiki/Bitcoin%20buried%20in%20Newport%20landfill
In mid-2013, James Howells disposed of a laptop hard drive containing the private key for 8,000 Bitcoin in the Docksway landfill in Newport, Wales. Howells subsequently assembled a team of specialists and secured funding to excavate the site, but the Newport City Council refused permission, citing the cost and environmental impact of the search. If the coins are discovered, Howells proposes distributing 30% of the proceeds among the council and the population of Newport. As of January 2025, the missing Bitcoin was worth approximately $750 million. In December 2024, Howells sued the council for £495 million, with the council contesting that the device is now its property. The attempted recovery of the missing Bitcoin has been likened to a digital treasure hunt. Howells and his team are confident that retrieval of the data remains possible, while the council continues to profess its scepticism. Following a hearing, the High Court dismissed Howells' claim in January 2025, ruling that it had no prospect of success. Background Creation of Bitcoin Bitcoin, the first decentralized cryptocurrency, was announced in October 2008 with the whitepaper Bitcoin: A Peer-to-Peer Electronic Cash System by Satoshi Nakamoto, followed by the implementation of the protocol as a peer-to-peer network in January 2009. It is a significant digital currency that uses cryptography to verify blockchain transactions and record them in a public ledger. James Howells James Howells, a Welsh computer engineer from Newport, was influenced at a young age by his mother, who was involved in the production of microchips. By his teens, he was a regular user of the internet. Howells began building computers at the age of 13 and became a Napster user around the time of Bitcoin's inception. Working various IT jobs, he learnt about encryption while working on a Bowman communications system. Howells taught himself about Bitcoin in December 2008 and began studying the concept a month later. After the 2008 bank bailout, Howells considered fiat currencies a scam, favouring the vision of Bitcoin inventor Nakamoto instead. He became an early adopter of the technology in 2009. By 2013, Howells was living in Newport with his three children and then partner Hafina Eddy-Evans. The two later broke up, with Eddy-Evans leaving with the children. In 2021, he worked from home maintaining emergency-response systems in Wales. A year later, he described himself as a cryptocurrency and blockchain project manager. Early Bitcoin mining On 15 February 2009, James Howells started mining Bitcoin with a Dell XPS laptop. He recalled mining 400–800 Bitcoin intermittently overnight for two months, which caused his device to overheat. Howells later damaged the device and dismantled it for parts, selling some on eBay. The laptop, containing 32 kilobytes worth of Bitcoin private keys, was also used for gaming, and held music, e-mails and family photographs. The Telegraph considers Howells one of the earliest miners on the Bitcoin network, with The New Yorker further identifying him as one of only five miners at the time of his participation. Disposal of hard drive Between 20 June and 10 August 2013, Howells accidentally disposed of an encrypted hard drive, mistaking one device for another. The disposed hard drive contained the cryptographic private key for 8,000 Bitcoin. According to reports, Hafina Eddy-Evans, Howells's partner at the time, took the trash with the hard drive to the tip (landfill).
According to Eddy-Evans, Howells "begged" her to take the unwanted items to the tip. She denies fault, while Howells said he "subconsciously blames" her for the loss of the hard drive. By November 2013, the device was approximately three to five feet underground in Docksway landfill, Newport, South Wales, with an approximate value of £4 million. At the time, Howells accepted that the coins were lost for good. Newport City Council notes that the hard drive was likely "buried under 25,000 cubic meters of waste and earth", weighing approximately 110,000–200,000 tonnes, with CNN reporting the challenge in finding the device as near to impossible. The former manager of the landfill site says the hard drive is located in a 15,000 tonne section named Cell 2, where waste was buried between August and November 2013. The site holds 1.4 million tonnes of waste in total. Search attempts In December 2017, Wired reported that the Newport City Council refused to allow Howells to search for the hard drive. According to Howells, the proposed search involves the first case, unrelated to a criminal investigation, of excavating a landfill site in the United Kingdom. The council cited cost concerns, environmental impact, galvanic corrosion of the device, and the potential of "treasure hunters" breaking the law. Initially the council took a soft approach to the situation, indicating that they would return the device if found, but later took a tougher stance, and stated that searching for the hard drive would be against the law. In January 2021, after repeatedly requesting access to search for the device, Howells offered the council 25% of the proceeds, then valued at approximately £200 million. He offered to donate £52.5 million to the council, which would go to the 316,000 people of Newport, equivalent to £175 per person. The council refused, claiming Howells's offer was in breach of licensing regulations. Howells believes the drive is still functional due to its protective casing and the anti-corrosive cobalt layer coating the glass disk. To support his search efforts, Howells acquired financial support from a hedge fund with whom he would split 50% of the proceeds. He believes that, using council waste records, they can identify the location of the device. After they identify and find the hard drive, a team of data recovery specialists would help recover the Bitcoin. The council estimates that the excavation would cost millions of pounds, with Howells budgeting £5 million for a 9–12 month operation. In August 2022, Howells expanded his search plan to include the use of artificial intelligence with a mechanical arm to scan waste and identify the hard drive; the plan also called for using drones and Boston Dynamics robotic dogs for security, as well as recruiting an AI specialist and an environmental team to the project. His team includes eight experts in landfill excavation, and a data recovery advisor who helped recover the black box from the Space Shuttle Columbia disaster. The budget also increased to £10–11 million with the help of venture capitalists, who would retain 30% of the proceeds along with Howells. Additionally, Howells now intended to develop a community-owned mining facility on the landfill site with the proceeds. The facility would use solar or wind power. That same year, Richard Hammond produced a short documentary on Howells's quest to retrieve the drive involving the recovery team, and by September 2023, the team doubled in size.
Litigation Howells's legal team published an open letter to the Newport City Council on 6 September 2023 announcing his intention to sue. The letter sought to prevent future works on the site while seeking £446 million in damages and a judicial review of the council's decision refusing access to the site. Two months later, his legal team wrote again to the council requesting access prior to pursuing a case in court. By October 2024, the contents of the hard drive were estimated at $750 million. Howells sued the council for £495 million, setting a date in a commercial court in Cardiff in December 2024, arguing for intellectual property rights among other claims. According to Wales Online, Howells was represented by the same team of barristers that also represented some of the alleged victims against Mohamed Al-Fayed. In response, the council argued that they legally own the device as the property was deposited at the site; Howells's barristers denied the claim on the basis of intent. The council requested a High Court hearing on 3 December with the intent to have the case dismissed. The judge postponed the verdict until a later date. Council barristers argued Howells attempted to "bribe the council" by offering a percentage of the Bitcoin to the local community. Howells's legal team contested, arguing that their client was entitled to search for his missing hard drive. In a judgement issued on 9 January 2025, Judge Keyser KC dismissed Howells's claim, saying that it had "no realistic prospect of succeeding". Howells told Wales Online he was "very disappointed" but that he had a new plan to start a cryptocurrency based on his inaccessible Bitcoin wealth. Opinion Howells said that he would have sold 30–40% of the Bitcoin in 2013 if he had access to it, believing that reaching $100,000 was a "conservative figure" for the cryptocurrency. By 2017, he described cryptocurrencies as the "new gold, oil and water combined". He focused on Bitcoin Cash and Ethereum, anticipating that the value of the Bitcoin on the hard drive would rise to between $500 million and $1 billion. In 2024, approximately seven years later, the value of the hard drive increased to $749 million. Howells believed that the chances of retrieving the device increased as the value increased. By 2022, Howells was 80–90% confident that he could successfully recover the hard drive data. At the same time, he acknowledged that the device was "a needle in a haystack" and that the investment from venture capitalists was extremely high-risk. In 2024, Howells's legal team stated in court that the metaphorical haystack was theoretically "much, much smaller", due to the "considerable expertise" involved in planning an excavation. Attempts to recover the device have been described as a digital treasure hunt. Notes See also Buried treasure Missing treasure References External links $250 MILLION of Bitcoin lost, a short documentary presented by Richard Hammond on YouTube Bitcoin Treasure Treasure 2013 in Wales Newport, Wales
Bitcoin buried in Newport landfill
[ "Physics" ]
2,110
[ "Lost objects", "Physical objects", "Matter" ]
78,423,249
https://en.wikipedia.org/wiki/75%20Seasons%3A%20The%20History%20of%20the%20NFL
75 Seasons: The History of the NFL is a software CD-ROM from RealTime Sports and NFL Films. Summary The CD is narrated by Fox TV commentator Pat Summerall. The story is divided into 30 chapters, each between 30 seconds and 5 minutes in length, and contains hundreds of game clips and interviews. The archive can be searched alphabetically and by team and player. Development 75 Seasons: The History of the NFL was developed by RealTime Sports, a company founded in 1993, and produced by Steve Sabol. Reception CNET's Matt Rosoff wrote "While the video clips of 75 Seasons might impress you, the text is mostly recycled and the interface seems thrown together without much thought. Unless you're a football fanatic, you could do better for the price." The Charlotte Observer's Steven L. Kent said "Though this documentary CD ultimately might have scored stronger with a bit less video and a superior line of statistics, it does a good job of showing the unique force that defines the National Football League". Vince Caputo, Tom Hedden, and Dave Robidoux won an Emmy for their score of the CD. References 1995 software Multimedia software NFL Films Windows software
75 Seasons: The History of the NFL
[ "Technology" ]
237
[ "Multimedia", "Multimedia software" ]
78,423,454
https://en.wikipedia.org/wiki/Kuratowski%20Prize
The Kuratowski Prize (Polish: Nagroda im. Kazimierza Kuratowskiego) is a Polish annual mathematics award conferred jointly by the Polish Academy of Sciences (PAN) and the Polish Mathematical Society (PTM) for contributions to mathematics, granted to individuals under the age of 30. It is named in honour of the Polish mathematician and logician Kazimierz Kuratowski (1896–1980). Description and history The prize was established in 1981 on the initiative of the physician and politician Zofia Kuratowska, the daughter of Kazimierz Kuratowski. It is presented annually by the Institute of Mathematics of the Polish Academy of Sciences and the Polish Mathematical Society (Polskie Towarzystwo Matematyczne). The Kuratowski Prize ceremony takes place during the scientific session of the Polish Mathematical Society, and the laureate of the prize is invited to give a speech on a chosen subject. It is considered the most prestigious award for young mathematicians in Poland. In 2015, Joanna Kułaga-Przymus became the first woman to be awarded the prize. As of 2024, two women mathematicians have received this recognition, the other being Agnieszka Hejna (2023). Notable laureates of the prize have included two Prize of the Foundation for Polish Science winners: Mariusz Lemańczyk (1987) and Tomasz Łuczak (1997); Erdős Prize winner Wojciech Samotij (2013); and Stefan Banach Prize winner Jerzy Weyman (1984). Laureates The list of recipients of the Kuratowski Prize: Borys Kuca (2024) Agnieszka Hejna (2023) Jakub Skrzeczkowski (2022) Wojciech Górny, Marcin Sroka (2021) Mateusz Wasilewski (2020) Joachim Jelisiejew (2019) Piotr Pokora (2018) Adam Kanigowski (2017) Piotr Achinger (2016) Joanna Kułaga-Przymus, Mateusz Michałek (2015) Kamil Kaleta (2014) Wojciech Samotij (2013) Mateusz Kwaśnicki (2012) Piotr Przytycki (2011) Sławomir Dinew (2010) Radosław Adamczak (2009) Adam Skalski (2008) Mikołaj Bojańczyk (2007) Krzysztof Krupiński (2006) Grzegorz Bobiński, Tomasz Schreiber (2005) Dariusz Buraczewski (2004) Piotr Śniady (2003) Adrian Langer (2002) Grzegorz Zwara (2001) Krzysztof Oleszkiewicz (2000) Stanisław Kasjan (1999) Michał Kwieciński (1998) Rafał Latała (1997) Jacek Zienkiewicz (1996) Piotr Hajłasz, Jerzy Marcinkowski (1995) Jacek Graczyk (1994) Waldemar Hebisch (1993) Zbigniew Jelonek (1992) Tomasz Łuczak (1991) Jarosław Wiśniewski (1990) Adam Parusiński (1989) Piotr Biler (1988) Mariusz Lemańczyk (1987) Krzysztof Ciesielski (1986) Mariusz Wodzicki (1985) Piotr Pragacz, Jerzy Weyman (1984) Józef H. Przytycki (1983) Ryszard Frankiewicz (1982) Feliks Przytycki (1981) See also Prize of the Foundation for Polish Science Timeline of Polish science and technology References Mathematics awards Polish awards Polish science and technology awards
Kuratowski Prize
[ "Technology" ]
800
[ "Science and technology awards", "Mathematics awards" ]
78,423,710
https://en.wikipedia.org/wiki/NGC%204273
NGC 4273 is a barred spiral galaxy in the constellation of Virgo. Its velocity with respect to the cosmic microwave background is 2727 ± 24 km/s, which corresponds to a Hubble distance of . However, 20 non-redshift measurements give a distance of . It was discovered by German-British astronomer William Herschel on 17 April 1786. According to A.M. Garcia, NGC 4273 is one of the galaxies in the NGC 4235 group (also known as LGG 281). This galaxy group contains at least 29 members, of which 18 appear in the New General Catalogue and 4 in the Index Catalogue. Supernovae Two supernovae have been observed in NGC 4273: SN 1936A (type II, mag. 14.5) was discovered by Edwin Hubble and Glenn Moore on 21 January 1936. Some sources incorrectly cite the discovery date of SN 1936A as 2 January 1936. SN 2008N (type II, mag. 17.8) was discovered by Alex Filippenko, D. Winslow, and W. Li on 17 January 2008. See also List of NGC objects (4001–5000) References External links 4273 039738 +01-32-008 07380 12173+0537 Virgo_(constellation) 17860417 Discoveries by William Herschel barred spiral galaxies Virgo Cluster
NGC 4273
[ "Astronomy" ]
292
[ "Virgo (constellation)", "Constellations" ]
78,424,168
https://en.wikipedia.org/wiki/Synergetic%20theory
Synergetic theory, also known as "synergy" and referred to by some as a pseudoscientific theory, was developed by René-Louis Vallée and first disseminated in 1971 with the publication of his book L'énergie électromagnétique matérielle et gravitationnelle (Material and Gravitational Electromagnetic Energy). The magazine Science et Vie published several articles on the subject, and in 1975, it reported on an experiment that allegedly generated more energy than was input into the system. This sparked a long-standing controversy over the discovery of "free energy." The following year, La Recherche examined Vallée's book and, under his guidance, commissioned physicists to conduct a rigorous test to verify or refute the initial claims. The results were negative: no excess energy was observed. A critical examination of the theory in question reveals a multitude of inconsistencies. It becomes evident that the author's work is based on his personal beliefs, formulating a set of disconnected equations. Vallée opposed modern physics, viewing the theoretical advancements of the 20th century as overly intricate and incompatible with reality. Vallée was affiliated with the Alexandre Dufour Physics Circle and supported by free-energy enthusiasts until the early 2000s, which enabled him to achieve a certain degree of media presence before synergetics faded from public discourse. History René-Louis Vallée (1926–2007), a 1951 graduate of Supélec, was employed by Alsthom from 1953 to 1958 and subsequently by the French Atomic Energy Commission (CEA) until 1976. Between 1970 and 1974, he authored several books about his professional expertise. Vallée, a student of Louis de Broglie during his studies, developed a lifelong interest in physics and an alternative theory aimed at unifying the four fundamental forces. This culminated in the publication of his book, L'énergie électromagnétique matérielle et gravitationnelle, in 1971. As a member of the Alexandre Dufour Physics Circle, Vallée promoted his book and theory during a 1972 lecture. The main objective of the circle was to challenge the tenets of modern physics. Despite presenting his critiques of physics objectively in his written works, Vallée expressed clear anti-relativist sentiments in his speeches, rejecting what he described as a "blind worship of revealed relativity." In the context of the first oil crisis, Vallée advanced his theory as a radical departure from conventional wisdom, asserting that global capitalism had hastily embraced relativity with the fervor of a religious doctrine. He leveled accusations against several physicists, characterizing their views as those of members of a "discreet philosophical-scientific sect." In a letter to La Recherche, he referred to Richard Feynman as a "man in black." Following his dismissal from the CEA in 1976, Vallée engaged in a public debate with Industry Minister André Giraud on the radio program Le téléphone sonne in 1979. He challenged the prevailing preference for nuclear power, advocating instead for his concept of "free energy." In February 1974, Science et Vie commenced coverage of synergetic theory, subsequently revisiting the topic in January 1975. During this latter period, Vallée employed his theory to elucidate the failure of nuclear fusion experiments in tokamaks, asserting that conventional scientific paradigms lacked rational explanations.
In 1976, a support committee formed for Vallée led to the establishment of SEPED (Society for the Study and Promotion of Diffuse Energy), which was operational between 1976 and 1984. In the context of the 1978 French nuclear debate, anti-nuclear activist René Barjavel expressed support for Vallée's Lettre ouverte aux vivants et à ceux qui veulent le rester (An open letter to the living and those who wish to remain living), citing the latter's theory as a disruption to the habits of thought and work that had been based on relativity for approximately half a century. Barjavel further noted that Einstein and his contemporaries had asserted that space vehicles could never exceed the speed of light, a claim that he regarded as untenable. Vallée expressed dismay at the apparent lack of interest in synergetics, suggesting that there were conspiracies at play, involving official science, global capitalism, and even the World Zionist Organization, which were preventing scientific progress. In a letter to Prime Minister Jacques Chirac dated May 21, 1986, he reiterated these accusations. Vallée was unable to disseminate his theory through conventional channels; therefore, he turned to the Internet as a means of sharing his ideas. By the year 2000, he had become a member of the New York Academy of Sciences and had written the preface for a scientific document in which he discussed his theory, which was subsequently renamed GUST (Grand Unified Synergetic Theory). While Vallée's work receded from public consciousness, the concept of free energy endured. In 2003, the Swiss-based GIFNET (Global Institute for New Energy Technologies) was established, with its French director Jean-Luc Naudin espousing the tenets of synergetic theory. The institute has since ceased to maintain an online presence. Definitions In 1973, Vallée submitted the terms "synergy" and "synergetic theory" to the French Academy's Committee for Technical Terms. In his 1971 book, he introduced the concept of "synergetic potential", which he defined as the square of the propagation speed of electromagnetic waves in a vacuum filled with matter. Science et Vie widely adopted the terms "synergetic" and "synergetic theory", and described the "synergetic generator" or "battery" as a device purportedly capable of harnessing the diffuse energy present in the universe. Vallée discussed the potential for harnessing "diffuse electromagnetic energy that traverses the immensity of the Universe", which could be realized with a more profound comprehension of the characteristics of matter, particularly within "diffuse energetic environments." This central notion of synergetic theory inspired the designation of the SEPED (Society for the Study and Promotion of Diffuse Energy), which was operational for less than a decade. Synergy René-Louis Vallée offered a critique of the complexity of special relativity but drew extensively upon its conceptual framework. He put forth a concept of "synergy", or total energy, which he expressed through the formula S = mc². This formula is identical to Albert Einstein's, but it incorporates not only the system's energy but also the diffuse energy of the medium surrounding it. However, his other theoretical propositions diverged from the prevailing consensus. For Vallée, space was Euclidean, time was universal, and the speed of light was not constant.
He also proposed that gravitation is a force of electromagnetic origin, in contrast with Einstein's advancements in the field, which deepened understanding but lacked definitive explanations. Laws Synergetics puts forth two hypotheses regarding the conversion between energy and matter. The first is the "law of materialization", which posits that energy can be transformed into matter. The second is the "upper limit value of the electric field", which suggests that matter can be transformed into energy when a field reaches a value of 39 × 10¹⁵ V/m. These laws are, in fact, hypotheses. Vallée characterizes the law of materialization as "a fundamental law of nature that was missing from the known laws of physics." He claimed to have discovered an "inexhaustible source of cosmic energy" available everywhere, asserting that matter is a localized form of this diffuse energy, explained through his upper limit field hypothesis. Diffuse energy Vallée put forth the theory that the universe is permeated by a vast, hitherto undiscovered form of "diffuse energy", which can account for all observable physical phenomena. According to this hypothesis, elementary particles represent distinct manifestations of this energy. In Chapter 9 of his book, Vallée posited that "gravitation and cosmic radiation have a common origin in diffuse electromagnetic energy." He put forth the hypothesis that the speed of light is variable and dependent on the diffuse medium through which it propagates, deriving a formula for the "energy equivalence of gravitational fields". A simple calculation shows that a cubic metre of empty space on the Earth's surface contains 57,000 megajoules less energy than a cubic metre of interstellar space. This formula is, apart from one factor, the same as that for gravitational potential energy. The added constant, , is the energy density of the matter-free diffuse medium, but the theory offers no way to calculate it, and Vallée merely gives orders of magnitude. Experiments In 1975, Belgian scientist Eric d'Hoker conducted the inaugural experiment based on synergetic theory in Mortsel. The results indicated that the generated energy was four times the input. The experiment entailed charging a capacitor with a battery and then discharging the current through a graphite rod. Vallée ascribed the surplus energy to a reaction in which a carbon-12 atom transformed into a radioactive boron-12 atom, which subsequently reverted to carbon via beta decay, thereby releasing additional energy. A second experiment was conducted on January 23, 1976, at the Physics Faculty of Paris 7, by Francis Kovacs, to validate the aforementioned findings. The experiment was designed to confirm the energy surplus and convert it into usable electric current, using parameters provided by Vallée. A capacitor was used to discharge a current through a glass tube filled with powdered graphite, surrounded by a coil that recovered a secondary current, which was then visualized on an oscilloscope. Tests were conducted in three configurations: no magnetic field, a field aligned with the electric current, and a field opposed to it. In all cases, the results matched the predictions made by the Lenz-Faraday law, showing no "synergetic" effects.
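For orientation, the "four times the input" claim reduces to a simple energy-balance ratio of the kind sketched below in Python (a hypothetical illustration written for this article; the component values and the measured figure are invented, and the 1976 verification found no such excess):

```python
# Illustrative energy bookkeeping for a capacitor-discharge test of the
# kind described above. All numbers are made up for the example.
C = 0.5                 # capacitance in farads (illustrative)
V0, V1 = 100.0, 20.0    # capacitor voltage before and after discharge, volts

# Energy released by the capacitor between the two voltage readings.
input_energy = 0.5 * C * (V0**2 - V1**2)   # joules

# In a real test the output would be integrated from recorded
# current/voltage traces; here it is a placeholder value.
measured_output = 2400.0                    # joules (illustrative)

print(f"output/input = {measured_output / input_energy:.2f}")
# A ratio well above 1 would indicate anomalous gain; the Kovacs
# experiment measured no such effect.
```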
In November 1975, Science et Vie published an article endorsing Vallée's theory based on a single experiment and critiqued the lack of interest from physicists. In the wake of the 1976 verification, which yielded negative results, parallels were drawn between synergetics and a contemporary iteration of perpetual motion, noting that both promised the generation of free energy from seemingly unlimited sources. The individual who conducted the counter-experiment, Jean-Marc Lévy-Leblond, was highly critical of Vallée, who postulated a conspiracy against his theory. Lévy-Leblond argued that the principles of synergetics were not susceptible to refutation, as they were not formalized and predictive, and therefore not scientific. He described Vallée's theoretical framework as incomprehensible, likening it to the peculiar calligraphy of Saul Steinberg, composed of recognizable symbols but lacking an intelligible whole. Vallée's apparent objective was to develop a comprehensive theoretical framework, which he referred to as a "theory of everything." He sought to portray synergetics as a "quantum and gravitational energy theory" that would restore objectivity to science by making it "accessible to the general public." Nevertheless, this assertion of accessibility proved to be illusory upon examination of the text in question. Furthermore, Vallée himself never subjected his theories to empirical testing, thereby rendering them inherently unverifiable. Legacy Synergetic theory, which was promoted by free-energy advocates from the 1970s to the early 2000s, enjoyed a brief period of popularity between its coverage in Science et Vie and its definitive refutation in La Recherche. It has since been described as an example of "alter-science" and a scientific imposture, notably by Alexandre Moatti. Moatti drew a parallel between Vallée and Maurice Allais, who developed an interest in physics at a relatively advanced age and published his inaugural theory on the subject at the age of 86. Allais is renowned for challenging the prevailing theories of Newton and Einstein. He shares similarities with Vallée in this regard. Nevertheless, the term "synergetics" is more frequently linked with Nikola Tesla and his notion of free energy. In the early 20th century, renowned engineer Nikola Tesla sought to transmit electricity wirelessly and harness cosmic radiation energy. Despite the discovery of X-rays in 1895, Tesla rejected the concept of energy contained within matter. In 1931, he claimed to have constructed a "cosmic energy receiver" and used it to power a vehicle. Like Vallée, Tesla rejected overly theoretical science, dismissed the theory of relativity as false, and announced a "unified theory of gravitation" that explained this force simply and denied Einstein's concept of curved space. René-Louis Vallée bibliography Reference Work Presentations and writings Notes References Bibliography Articles from the circle of friends of Vallée or SEPED Pseudoscience Energy (physics) Conspiracy theories in France
Synergetic theory
[ "Physics", "Mathematics" ]
2,689
[ "Energy (physics)", "Wikipedia categories named after physical quantities", "Quantity", "Physical quantities" ]
78,425,036
https://en.wikipedia.org/wiki/NGC%204246
NGC 4246 is an unbarred spiral galaxy in the constellation of Virgo. Its velocity with respect to the cosmic microwave background is 4064 ± 24 km/s, which corresponds to a Hubble distance of . However, 20 non-redshift measurements give a distance of . It was discovered by German-British astronomer William Herschel on 13 April 1784. It was also observed by German astronomer Arnold Schwassmann on 30 October 1899 and listed in the Index Catalogue as IC 3113. According to the SIMBAD database, NGC 4246 is a LINER galaxy, i.e. a galaxy whose nucleus has an emission spectrum characterized by broad lines of weakly ionized atoms. NGC 4246, NGC 4235, and NGC 4247 are listed together as Holm 359 in Erik Holmberg's A Study of Double and Multiple Galaxies Together with Inquiries into some General Metagalactic Problems, published in 1937. Supernovae Two supernovae have been observed in NGC 4246: SN 1975C (type unknown, mag. 18) was discovered by American astronomer Charles Kowal on 15 March 1975. SN 1984U (type unknown, mag. 18) was discovered by L. E. Gonzalez at the Cerro El Roble Observatory on 2 March 1984. See also List of NGC objects (4001–5000) References External links 4246 039479 +01-31-041 IC objects 07334 Virgo_(constellation) 17840413 Discoveries by William Herschel Unbarred spiral galaxies LINER galaxies
NGC 4246
[ "Astronomy" ]
315
[ "Virgo (constellation)", "Constellations" ]
78,425,611
https://en.wikipedia.org/wiki/Isidoro%20Orlanski
Isidoro Orlanski (born 1939) is an Argentine-American atmospheric physicist, meteorologist, and ocean scientist. He is known for his contributions to the dynamics of weather systems and ocean currents, especially his work on mesoscale meteorology. Orlanski is currently an emeritus professor at Princeton University. Early life and education Isidoro Orlanski was born in Rivera, Buenos Aires, in 1939 to Jewish immigrants Samuel and Sara Orlanski, who fled Wolkowysk, Poland during the early 20th century pogroms. With the help of the Jewish Colonization Association, which enabled Jewish immigrants from Eastern Europe to farm in Argentina, the Orlanski family settled in rural Argentina before moving to Buenos Aires in the early 1940s. In 1959, Orlanski enrolled in the Faculty of Exact and Natural Sciences (Spanish: Facultad de Ciencias Exactas y Naturales) at the University of Buenos Aires, where he studied physics. In 1964, Orlanski earned a degree in physics from the University of Buenos Aires. In 1965, he received a grant to pursue graduate studies at the Massachusetts Institute of Technology, where he met Jule Charney. Under Charney's supervision, Orlanski completed his PhD in 1967. His thesis, titled Instability of Frontal Waves, earned the Carl Gustav Rossby Award for best thesis in the Atmospheric and Oceanic Sciences program. Career Before a planned return to Argentina, Orlanski followed Charney's advice to spend a year in Washington, D.C., joining the Geophysical Fluid Dynamics Laboratory (GFDL), where he worked with Joseph Smagorinsky. The GFDL, under the leadership of Smagorinsky, was developing numerical models for weather forecasting and climate assessment. Orlanski decided to spend his career at GFDL. He relocated with the lab to Princeton University in New Jersey. At Princeton, Orlanski became a lecturer in Atmospheric and Oceanic Sciences, a collaboration between the lab and the university. By 1980, GFDL had grown to 134 staff members, with Orlanski being appointed the lab's first Deputy Director. While on sabbatical in Argentina in 1985, Orlanski established an organization for numerical modeling that became the Centro de Investigaciones para el Mar y la Atmósfera (CIMA). Orlanski retired from GFDL in 2007 but continued teaching at Princeton University until 2017, retiring as a lecturer with the rank of Full Professor. Research Orlanski's work had a significant impact in the field of mesoscale meteorology. He introduced the terms meso-alpha, meso-beta, and meso-gamma to classify the horizontal scales of atmospheric processes, widely used in limited-area modeling. The primary purpose of Orlanski's classification of mesoscale phenomena was to assist modelers in designing limited-area models for mesoscale prediction. This framework was used in the design of field experiments for mesoscale observations, as well as in defining the spatial and temporal scales necessary for forecast models. Moreover, it took over two decades for both numerical models and observational technologies to achieve an acceptable level of accuracy in this domain. Orlanski's research on boundary conditions for unbounded hyperbolic flows has applications beyond meteorology, influencing fields like hydrology and flow chemistry. Awards and honors Carl Gustav Rossby Award (MIT, 1968) – For best PhD thesis in the Atmospheric and Oceanic Sciences program. NOAA Administrator's Award (1985) – For outstanding mesoscale research, scientific leadership, and administrative accomplishments.
RAICES Prize (2011) – Awarded by the Ministry of Science, Technology and Innovation of Argentina for contributions to science and scientific development in Argentina. Fellow of the American Meteorological Society. Selected publications References 1939 births Living people American meteorologists University of Buenos Aires alumni Massachusetts Institute of Technology alumni Princeton University faculty American atmospheric scientists Computational physicists People from Buenos Aires Province Fellows of the American Meteorological Society
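A schematic illustration of the radiation boundary condition from Orlanski's 1976 paper, as it is commonly implemented in one-dimensional form in limited-area models. This is a sketch under assumptions (first-order differencing, explicit time stepping, the function name is ours), not a reproduction of any specific model's code.

```python
def orlanski_right_boundary(phi_prev, phi_now, dx, dt):
    """Return the updated value at the right boundary of a 1-D field.

    Implements the Sommerfeld radiation condition
        d(phi)/dt + c * d(phi)/dx = 0,
    where the outgoing phase speed c is diagnosed from the grid points
    just inside the boundary and clamped to [0, dx/dt], the essential
    idea of the Orlanski (1976) scheme.
    """
    # Local time derivative and spatial gradient one point inside the boundary.
    dphi_dt = (phi_now[-2] - phi_prev[-2]) / dt
    dphi_dx = (phi_now[-2] - phi_now[-3]) / dx
    # Diagnose the phase speed of the outgoing wave, avoiding division by ~0.
    c = -dphi_dt / dphi_dx if abs(dphi_dx) > 1e-12 else 0.0
    # Keep only outgoing (c >= 0) waves and respect the CFL limit.
    c = min(max(c, 0.0), dx / dt)
    # Advect the boundary value outward with the diagnosed speed.
    return phi_now[-1] - c * dt / dx * (phi_now[-1] - phi_now[-2])
```

In a real limited-area model the same diagnosis is applied at every boundary point and often smoothed in time; the clamping step is what lets waves leave the domain without reflecting back.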
Isidoro Orlanski
[ "Physics" ]
816
[ "Computational physicists", "Computational physics" ]
78,426,865
https://en.wikipedia.org/wiki/HD%2077361
HD 77361 is an orange-hued star in the southern constellation of Pyxis. With an apparent magnitude of 6.187, it can be faintly seen by the naked eye from Earth. As such, it is listed in the Bright Star Catalogue as HR 3597. Its distance has been determined from Gaia DR3 parallax measurements. The star is notable for its unusually high lithium content. Physical properties This is an aging red-giant branch (RGB) star at the RGB bump, with the spectral type K1III CNII. This means that it has evolved past the main-sequence stage after exhausting its core hydrogen, causing it to bloat into a red giant. It has now reached a point where a discontinuity in hydrogen abundance produced by deep stellar convection results in a short-term decline in energy production, hampering its ascent of the RGB. The "CNII" in its spectral type indicates a strong cyanogen signature in the star's outer atmosphere, as strong as that of a normal K1 bright giant (luminosity class II). According to a 2020 study, the star has a mass of 1.78 solar masses and radiates 74.1 times the luminosity of the Sun from its photosphere. Some earlier publications, however, present smaller values for the mass, luminosity (45.7 times solar), and effective temperature. The star is slightly poorer in iron than the Sun, with a metallicity of [Fe/H] = −0.02 (10^−0.02 ≈ 95% of the solar iron abundance). Anomalous abundances The star is considered a super Li-rich star, a star so enhanced in lithium that its existence cannot be explained by the standard stellar evolution theory. It is thought that the lithium is actively being generated within the star, as unstable beryllium-7 atoms produced in the inner layers well up to the upper atmosphere via an unknown mechanism and then decay into stable lithium-7. The star also has a very small 12C/13C ratio, much smaller than that of the Sun. It was the first population I super Li-rich low-luminosity low-mass K giant discovered to have such a small 12C/13C ratio. Similar stars The K-type giant star TYC 3251-581-1 is similar to HD 77361 in several aspects; namely, both stars have an extremely high lithium abundance and a low 12C/13C ratio, are currently at the RGB bump phase, and belong to the thin disk stellar population. References K-type giants Pyxis 077361 CD-26 06647 J09011142-2639493 044290 3597
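The bracketed metallicity notation used above is logarithmic, so the quoted 95% figure follows from a one-line conversion. A minimal sketch (the helper name is ours; the value is the one quoted in the article):

```python
def relative_iron_abundance(fe_h: float) -> float:
    """Convert [Fe/H] = log10 of the star-to-Sun iron ratio back to a linear ratio."""
    return 10.0 ** fe_h

print(relative_iron_abundance(-0.02))  # ~0.955, i.e. about 95% of the solar abundance
```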
HD 77361
[ "Astronomy" ]
576
[ "Pyxis", "Constellations" ]
66,828,251
https://en.wikipedia.org/wiki/HD%2077887
HD 77887 (HR 3610) is a solitary star located in the southern circumpolar constellation Volans. It has an apparent magnitude of 5.87, making it faintly visible to the naked eye if viewed under ideal conditions. The star is situated at a distance of about 760 light years and is receding from the Sun. HD 77887 is an ageing M-type giant that is currently on the asymptotic giant branch. At present it has 1.12 times the mass of the Sun but has expanded to 56.73 times the Sun's girth. It shines from its enlarged photosphere at a cool effective temperature, which gives it a red glow. HD 77887 is suspected to be a slow irregular variable whose brightness fluctuates by a tenth of a magnitude. Koen and Eyer examined the Hipparcos data for the star, and found that it varied periodically, with an amplitude of 0.012 magnitudes, and a period of 4.4649 days. References M-type giants 077887 044283 Durchmusterung objects Suspected variables 3610 Volans Asymptotic-giant-branch stars
HD 77887
[ "Astronomy" ]
254
[ "Volans", "Constellations" ]
66,828,292
https://en.wikipedia.org/wiki/Amphinema%20byssoides
Amphinema byssoides (cratered duster) is a species of corticioid fungus known to form mycorrhizal relationships with spruce trees. It was first described as Thelephora byssoides in 1801 by Christiaan Hendrik Persoon, but was transferred to the genus Amphinema by John Eriksson in 1958. References Atheliales Fungi described in 1801 Taxa named by Christiaan Hendrik Persoon Fungus species
Amphinema byssoides
[ "Biology" ]
96
[ "Fungi", "Fungus species" ]
66,828,833
https://en.wikipedia.org/wiki/Prunus%20%C3%97%20rossica
Prunus × rossica, the Russian plum, is a hybrid cultigen between cherry plum (Prunus cerasifera) and Chinese or Japanese plum (Prunus salicina). It is of commercial importance in European Russia, and many cultivars have been developed there, such as 'Gek', 'Desertnaya', 'Kubanskaya Kometa', and 'Obilnaja'. In the US, a few cultivars have also been developed, such as Sprite Cherry-Plum and Delight Cherry-Plum. The South African cultivar 'Methley' is also a cultivar of P. × rossica. References x rossica Hybrid plants Plum cultigens
Prunus × rossica
[ "Biology" ]
152
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
66,830,513
https://en.wikipedia.org/wiki/Irish%20Council%20for%20Bioethics
The Irish Council for Bioethics was an independent body established by the Government of Ireland in 2002 to examine and respond to bioethical issues in science and medicine. It provided independent advice to the government and those making policy, as well as promoting public understanding of contemporary bioethical issues. It ceased operations in 2010, due to withdrawal of state funding in the wake of the post-2008 Irish economic downturn. During its years of operation, the members of the council were nominated by the Royal Irish Academy, which also provided the body's secretariat. It was funded by grants through Forfás. References External links Archived version of official website Bioethics Bioethics research organizations Ethics of science and technology Medical and health organisations based in the Republic of Ireland Organisations based in Dublin (city) Organizations established in 2002
Irish Council for Bioethics
[ "Technology" ]
167
[ "Bioethics", "Ethics of science and technology" ]
66,831,728
https://en.wikipedia.org/wiki/Dapr
Dapr (Distributed Application Runtime) is a free and open source runtime system designed to support cloud native and serverless computing. Its initial release supported SDKs and APIs for Java, .NET, Python, and Go, and targeted the Kubernetes cloud deployment system. The source code is written in the Go programming language. It is licensed under Apache License 2.0 and hosted on GitHub. Dapr is a CNCF project and graduated in November 2024. See also Microservices Service mesh References Further reading External links Serverless computing Microsoft free software Software using the Apache license Software using the MIT license 2019 software
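As a sketch of how an application talks to the Dapr sidecar, the snippet below saves and reads back a key through the HTTP state API (v1.0). It assumes a sidecar listening on the default HTTP port 3500 and a state store component named `statestore`; both are configuration choices, not fixed by Dapr itself.

```python
import json
import urllib.request

DAPR_STATE_URL = "http://localhost:3500/v1.0/state/statestore"

def save_state(key: str, value: dict) -> None:
    # The state API accepts a JSON array of key/value pairs.
    body = json.dumps([{"key": key, "value": value}]).encode()
    req = urllib.request.Request(
        DAPR_STATE_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

def get_state(key: str) -> dict:
    # Reads return the stored value directly as JSON.
    with urllib.request.urlopen(f"{DAPR_STATE_URL}/{key}") as resp:
        return json.loads(resp.read())

save_state("order1", {"status": "shipped"})
print(get_state("order1"))  # {'status': 'shipped'}
```

The same pattern, plain HTTP or gRPC against the local sidecar, underlies the language SDKs mentioned above.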
Dapr
[ "Technology" ]
133
[ "Computing platforms", "Serverless computing" ]
66,834,888
https://en.wikipedia.org/wiki/NASA%20Chief%20Technologist
The Chief Technologist is the most senior technology position at the National Aeronautics and Space Administration (NASA). The Chief Technologist serves as the principal advisor to the NASA Administrator in technology policy and programs, and as interface to the national and international engineering community. The position helps "communicate how NASA technologies benefit space missions and the day-to-day lives of Americans." History The Chief Technologist position was created to advise the NASA Administrator on budget, strategic objectives, and current content of NASA's technology programs. The Chief Technologist works closely with appropriate representatives of the NASA Strategic Enterprises and the Field Centers, as well as advisory committees and the external community. The Chief Technologist represents the Agency's technology objectives and accomplishments to other federal agencies, industry, academia, other government organizations, the international community, and the general public. "The Chief Technologist leads NASA technology transfer and technology commercialization efforts, facilitating internal creativity and innovation." He also "coordinates, tracks and integrates technology investments across the agency and works to infuse innovative discoveries into future missions." The position was created in 2010 by NASA Administrator Charlie Bolden. The first three Chief Technologists were aerospace engineering professors whose universities (specified below) entered into an intergovernmental personnel agreement with NASA. Douglas Terrier was the NASA Johnson Space Center Chief Technologist before becoming the Agency Chief Technologist. On November 1, 2021, the Office of the Chief Technologist and the Office of Strategic Engagements and Assessments were merged into the new Office of Technology, Policy, and Strategy (OTPS). Bhavya Lal was appointed to serve as OTPS's new Associate Administrator. The role of the NASA Chief Technologist was changed to a staff position in the newly created OTPS. Douglas Terrier was reassigned to NASA's Johnson Space Center (JSC) in Houston to serve in a newly created position as the associate director for vision and strategy. Lal served as acting chief technologist. On January 3, 2023, A. C. Charania started his role as the Agency Chief Technologist. Before joining NASA, Charania served as vice president of product strategy at Reliable Robotics. He previously worked in strategy and business development for the Virgin Galactic (now Virgin Orbit) LauncherOne small satellite launch vehicle program. He also served in multiple management and technology roles at SpaceWorks Enterprises, including helping to incubate two startups, Generation Orbit and Terminal Velocity Aerospace. List of Chief Technologists Bobby Braun of Georgia Tech, February 3, 2010 to September 30, 2011 Joseph Parrish (acting), October 1, 2011 to December 31, 2011 Mason Peck of Cornell, January 1, 2012 to 2013 Dave Miller of MIT, March 2014 to 2016 Dennis J. Andrucyk (acting), 2016 to January 17, 2017 Douglas Terrier, 2017 to October 31, 2021 (served as acting Chief Technologist from 2017 to 2018) Bhavya Lal (acting), November 1, 2021 to January 3, 2023 A. C. Charania, January 3, 2023 to present References
NASA Chief Technologist
[ "Astronomy" ]
637
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
66,837,646
https://en.wikipedia.org/wiki/Saprotrophic%20bacteria
Saprotrophic bacteria are bacteria that are typically soil-dwelling and utilize saprotrophic nutrition as their primary energy source. They are often associated with soil fungi that also use saprotrophic nutrition, and both are classified as saprotrophs. A saprotroph is a type of decomposer that feeds exclusively on dead and decaying organic matter. Saprotrophic organisms include fungi, bacteria, and water molds, which are critical to decomposition and nutrient cycling, providing nutrition for consumers at higher trophic levels. They obtain nutrients via absorptive nutrition, in which a variety of enzymes secreted by the saprotroph digest material externally and the resulting products are then absorbed. Community composition and proliferation rates of saprotrophic indicator bacteria are often considered signals of community health in soil, aquatic, and bodily systems. Structure and life cycle All saprotrophic bacteria are unicellular prokaryotes, and reproduce asexually through binary fission. Variation in the turnover times (the rate at which a nutrient is depleted and replaced in a particular nutrient pool) of the bacteria may be due in part to variation in environmental factors including temperature, soil moisture, soil pH, substrate type and concentration, plant genotype, and toxins. These factors can, in turn, alter the rates of decomposition and soil organic matter turnover, impacting ecosystem productivity. When colonizing a new environment, the population of a saprotrophic strain of bacteria initially decreases and then reaches a point of population stabilization. While they are common in soil environments, they can persist anywhere with available food resources, such as in aquatic environments, or in fecal matter. As such, they are a common organism in waste products, where they break down various compounds to obtain nourishment. Growth rate Saprotrophic bacterial growth rate is very sensitive to changes in environmental conditions, making it a good variable to detect rapid and subtle changes in microbial communities. Growth rates are also used to measure interactions between bacteria and fungi, with research suggesting that bacteria may inhibit fungal growth by exerting competitive pressure on fungi. Under normal soil conditions, bacterial biomass production remains relatively steady, as the growth of microorganisms is balanced by predation and other types of cell death. Studies on bacterial growth rates using leucine or thymidine incorporation suggest the turnover times of soil bacterial communities to be on the order of days to weeks at a temperature of around 20 °C. Other studies have estimated a longer turnover time, varying between 107 and 160 days at 25 °C. This large discrepancy could be due to differences in the methods used for these estimations, as well as differences in the incubation temperatures, which are of utmost importance in determining growth rates. Studies have shown that optimal bacterial growth is achieved at temperatures around 25–30 °C in temperate soils, which is usually much higher than the mean annual temperature. Bacterial growth in the rhizosphere presents a special situation, as it supports the rapid proliferation of bacteria compared with the surrounding soil due to the input of root exudates into the soil. Here, bacterial turnover times are estimated to be in the range of just 12–19 hours, with shorter times exhibited on younger roots. Overall, there has not been sufficient research on bacterial growth rates in soil.
This contrasts with the comparatively vast knowledge of bacterial growth rate measurements in aquatic environments, a disparity that may be attributed to the complexity of the soil matrix, which hosts both bacterial and fungal decomposers with different feeding strategies. Environmental factors Several environmental factors may impact the activity of saprotrophs, including soil moisture, pH, and the presence of substrates. Soil moisture, indicated by carbon mineralization, is positively correlated with bacterial growth, with bacterial growth increasing as soil moisture content increases. In terms of soil pH, there is a well-known pattern of bacterial dominance in neutral or slightly alkaline soils, though clear evidence for the differential growth of bacteria in soils with different pH is scarce. Compared to fungi, bacteria are considered more competitive in degrading easily available substrates. In addition to quality and type, the concentration of substrate is also important to bacterial growth in soil. For example, a study utilizing the addition of different concentrations of glucose found that bacterial growth increased significantly at low concentrations, and was inhibited at very high concentrations. On the other hand, increased substrate flow in the rhizosphere due to root exudation has been shown to significantly increase bacterial growth rates. Here, there is a plant species and genotype effect on growth, presumably due to different exudation rates. Parasitism Some saprotrophic bacteria are common pathogens in medicine and agriculture, as they move readily between individuals via consumption or other modes of exposure, such as contact with excrement. For example, certain bacteria, such as Escherichia coli, can cause foodborne illness. Others have the ability to decompose cellulose, and are often found in the rumen of cows, aiding in their digestion by fermenting the cellulose in grass. Nutrient cycling and MEEs Through saprotrophic nutrition, saprotrophic bacteria release microbial extracellular enzymes (MEEs) into the environment to break down soil organic matter (SOM). MEEs are released when an organism's energy and nutrient needs are not being met. This allows for the monitoring of MEEs as an indicator of nutrient availability in soil. Some significant MEEs are: Phenol oxidases (PHO): PHOs can biodegrade or detoxify aromatic pollutants into sources of carbon. Additionally, PHOs act indirectly in peat bogs to accelerate the decomposition of soil organic matter: they break down phenolics, which inhibit hydrolases, so where phenol oxidase activity is limited, decomposition is also limited. This process has been termed an "enzymatic latch." β-glucosidase (GLU): GLUs are involved in securing energy sources and labile carbon for microorganisms. This is accomplished through the catalysis of the release of monosaccharides and the hydrolysis of oligosaccharides. Acid (alkaline) phosphatase (AP): APs can be used as indicators for phosphorus (P) mineralization potential and availability in soil. Role in forest ecosystems In forest soils, bacteria are important in the decomposition of fungal mycelia and in nitrogen cycle processes, including nitrogen fixation. Additionally, bacteria, alongside fungi, mediate the bulk of biogeochemical processes, determine the availability of mineral nutrients, and determine the fate of carbon in these soils. However, bacteria's higher demand for nitrogen and inability to translocate nutrients makes them less efficient decomposers than fungi.
Ecosystem disturbances such as fires, insect invasions, and timber harvesting can lead to a slight decrease in bacterial abundance. Furthermore, the bacterial community composition may change in response to changes in nutrient availability and overall chemistry. References Bacteria Trophic ecology Soil biology
Saprotrophic bacteria
[ "Biology" ]
1,450
[ "Microorganisms", "Prokaryotes", "Soil biology", "Bacteria" ]
66,837,925
https://en.wikipedia.org/wiki/HD%2049947
HD 49947 (HR 2531) is a solitary star in the southern circumpolar constellation Volans. It has an apparent magnitude of 6.36, placing it near the limit of naked-eye visibility. Parallax measurements place the object at a distance of 459 light years, and it is receding from the Sun. HD 49947 has a stellar classification of G8 III and is a red clump giant, meaning that it is located on the warm end of the horizontal branch. At present it has double the mass of the Sun and, at an age of 1.27 billion years, has expanded to an enlarged radius. It radiates 61 times the luminosity of the Sun from its enlarged photosphere at an effective temperature that gives it a yellow hue. HD 49947 is metal deficient, with an iron abundance 68% that of the Sun, and spins very slowly. References G-type giants Volans 049947 032222 2531 Volantis, 3 CD-72 351
HD 49947
[ "Astronomy" ]
213
[ "Volans", "Constellations" ]
66,838,226
https://en.wikipedia.org/wiki/HD%2076270
HD 76270, also known as HR 3544, is a solitary, white-hued star located in the southern circumpolar constellation Volans. It has an apparent magnitude of 6.10, making it faintly visible to the naked eye if viewed under ideal conditions. The object is relatively distant, at 2,360 light years, but is slowly approaching the Solar System. HD 76270 was considered a chemically peculiar Am star, and as a result was given a spectral classification of A3mA6-7 by Nancy Houk and A. P. Cowley. This means it is an A3 star with the metallic lines of a star of class A6-7. However, this peculiarity is now considered doubtful. An alternate class of A5 III/IV was given instead, making it an evolved A-type star with a blended luminosity class of a subgiant and a giant star. At present it has 5.3 times the mass of the Sun but has expanded to 22.9 times the Sun's girth. It shines with a high luminosity from its enlarged photosphere. HR 3544 is metal deficient, having an iron abundance 64% below solar levels. A 1984 study used HD 76270 as a comparison star and suspected it of being slightly variable, but this has not been confirmed and it is not even listed as a suspected variable in the General Catalogue of Variable Stars. References Volans A-type giants Am stars 076270 3544 043351 CD-72 00488 Volantis, 44 A-type subgiants
HD 76270
[ "Astronomy" ]
347
[ "Volans", "Constellations" ]
66,838,526
https://en.wikipedia.org/wiki/HD%2064484
HD 64484 (HR 3081) is a solitary star in the southern circumpolar constellation Volans. With an apparent magnitude of 5.76, it is faintly visible to the naked eye under dark skies. Parallax measurements place it at a distance of 458 light years, and it is receding from the Sun. HD 64484 has a stellar classification of B9 V, indicating that it is an ordinary B-type main-sequence star. It has 2.8 times the mass of the Sun and an effective temperature that gives it a bluish-white hue. However, a slightly enlarged radius yields a luminosity 140 times that of the Sun. This is because HD 64484 has completed 80.6% of its main-sequence lifetime at an age of 339 million years. The star has a solar metallicity and, like many hot stars, spins rapidly. References Volans 064484 PD-65 827 3081 038210 B-type main-sequence stars Volantis, 19
HD 64484
[ "Astronomy" ]
223
[ "Volans", "Constellations" ]
66,839,694
https://en.wikipedia.org/wiki/G-dwarf%20problem
In astronomy, the G-dwarf problem refers to the apparent discrepancy in the distribution of metallicity levels in stars of different populations as compared to closed box models of galactic chemical evolution. According to closed box models, which represent galaxies without inflow of outside metal-free material, the distribution of metallicity levels in stars should follow a logarithmic curve, with many more metal-poor stars than metal-rich ones. However, these models are inconsistent with Milky Way observations, which show far fewer metal-poor G dwarfs than predicted. Other galaxies have been shown to have the same problem. The name comes from G-type dwarf stars, which are bright enough to be studied easily, yet are most often found unevolved; because their main-sequence lifetimes are comparable to the age of the Galaxy, they provide an extensive look at stars of all ages, including the oldest. Despite this, the G-dwarf problem has also been observed in K and M dwarfs (the M dwarf problem). See also Solar analog List of nearest bright stars List of nearest stars and brown dwarfs Main sequence G-type main-sequence star References Stellar astronomy Unsolved problems in astronomy
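The closed-box prediction mentioned above can be made concrete with the standard simple model, assuming instantaneous recycling and a constant yield y (textbook notation, not tied to any single source):

Z(t) = y \ln\!\left(\frac{M_{\mathrm{total}}}{M_{\mathrm{gas}}(t)}\right),
\qquad
\frac{S(<Z)}{S_{\mathrm{total}}} = \frac{1 - e^{-Z/y}}{1 - e^{-Z_{\mathrm{max}}/y}},

where S(<Z) is the mass of long-lived stars formed with metallicity below Z. The differential distribution therefore falls off as e^{-Z/y}, predicting a substantial population of metal-poor stars; the G-dwarf problem is that far fewer such metal-poor G dwarfs are observed in the solar neighbourhood.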
G-dwarf problem
[ "Physics", "Astronomy" ]
217
[ "Unsolved problems in astronomy", "Concepts in astronomy", "Astronomy stubs", "Stellar astronomy stubs", "Astronomical controversies", "Astronomical sub-disciplines", "Stellar astronomy" ]
66,839,757
https://en.wikipedia.org/wiki/Astrovirology
Astrovirology is an emerging subdiscipline of astrobiology which aims to understand what role viruses played in the origin and evolution of life on Earth as well as the potential for viruses beyond Earth. Viruses and early life on Earth Viruses drive evolution Viruses are a major driving force in evolution; the arms race between viruses and their host, or the Red Queen hypothesis, causes strong evolutionary pressures in both the host and viruses. The host evolves to evade and destroy viruses, while the virus evolves mechanisms to continue infecting the host. Evolution is also influenced by viral horizontal gene transfer. Viral genes can be inserted into the host genome (e.g., retroviruses) and sometimes these genes are evolutionarily favorable. One common example of beneficial horizontal gene transfer in humans is the gene for syncytin, which came from ancient viruses and is important in placenta development. Viruses influence major evolutionary events Though unproven, some virologists posit that viruses may have played an important role in major evolutionary events, including the emergence of a DNA genome from an RNA world, divergence from LUCA to the three domains of life, archaea, bacteria, and eukarya, and development of multicellularity. Emergence of a DNA genome and divergence from LUCA may have been aided by horizontal gene transfer of polymerases and other gene-editing enzymes from viruses. Meanwhile, viral selection pressures could have also aided divergence from LUCA to defend against different viruses, while multicellularity provides greater cell population protection from viruses. Viruses and Earth's environment Viruses influence biogeochemical cycles Viruses cause nutrient cycling in the ocean via the viral shunt, and up to 25% of the available carbon in the upper ocean is attributed to virus-induced cell lysis. Around 5% of Earth's oxygen is thought to be produced by cells infected by viruses encoding photosynthetic genes otherwise absent from the cell. For example, some viruses of cyanobacteria contain genes for Photosystem II, which allows those cyanobacteria to photosynthesize and live in a different part of the ocean than their non-infected counterparts. Some viruses encode other metabolic genes that allow new metabolic functions in their host, for example, phosphate, carbon, and sulfur metabolism. Extremophile viruses Viruses have been found in extremely hot, cold, and acidic natural environments, down to pH 1.5. Viruses in space Infectivity in space Viruses including tobacco mosaic virus, poliovirus, and bacteriophage T1 have maintained infectivity after being exposed to space-like conditions including interstellar radiation, low temperature, and low pressure. Further studies are needed to assess the risk of viral hitchhikers, but any virus infecting an organism inside a habitable spacecraft can survive as long as that organism survives. Effect on astronauts Latent viruses such as herpes virus, prevalent in humans, can become reactivated during spaceflight due to spaceflight stressors. While astronauts experienced few if any symptoms, the potential for other viruses to become reactivated or more virulent is a substantial threat. Furthermore, some bacteria (Serratia marcescens) have been found to be more virulent in spaceflight conditions, leading to a question of whether viruses could also become more virulent. Forward contamination potential Limiting forward contamination is critical to be confident in the results of life detection efforts.
Bacteria pose a significant contamination challenge in spacecraft assembly clean rooms despite decontamination procedures. However, viruses were found to be present at relatively low levels, based on a metagenomic analysis. Another metagenomic study detected viable human viruses, including herpesvirus and cycloviruses. Back contamination potential Life (and viruses) on other planetary bodies has two important potential origins: from Earth or from a second genesis (life originated on that planet). Ancient viruses could have been transported from Earth to another planetary body, perhaps following a massive meteorite impact or volcanic eruption. If this occurred, these viruses would likely be very biologically similar to modern organisms. There may be minimal or no immunity among Earth life against the ancient virus, and whatever organism it can infect may be crippled by its re-introduction. If extraterrestrial viruses are part of a second genesis, their infectivity of Earth life depends on how they encode their genetic information. While their encoding could be incompatible with Earth life, it is also possible that RNA, DNA, or similar molecules could encode for life in the second genesis. In this case, Earth life may be a suitable host. Potential biosignatures/detection methods While viruses may or may not be "alive", detection of virions on another planet would be powerful indirect evidence for life. The following methods could offer biosignatures with varying levels of usefulness: Scanning electron microscopy: SEM has potential to be integrated onto a spacecraft, but currently lacks the resolution to detect virion structure. Transmission electron microscopy: TEM can visualize virion structure, but the imaging procedure is more difficult than for SEM, and so integration onto an automated spacecraft seems unlikely. Lipid detection in rock: Enveloped viruses may be identifiable via this method. Chemical identification: Specific chemicals can be identified via GC-MS, NMR, or FTIR spectroscopy. Virus-mediated event: Large-scale lysis of a given host cell can cause easily detectable effects. For example, the chalk deposits in the white cliffs of Dover are caused by large-scale lysis of algae, which could have been virus-induced. Proposed and current life detection missions Astrovirologists have called for missions to sample the water plumes of Enceladus and/or Europa for viruses. Others have called for virus detection as part of Mars rover missions like the Rosalind Franklin rover. However, given the lack of validated biosignatures to detect viruses in situ, sample return to Earth has been recommended, which would allow use of TEM and other detection methods requiring complex sample preparation and/or large equipment. The Mars 2020 Perseverance rover has equipment to drill regolith samples and store them for sample return on a future Mars mission. References Viruses Astrobiology Evolution
Astrovirology
[ "Astronomy", "Biology" ]
1,269
[ "Viruses", "Tree of life (biology)", "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Microorganisms", "Astronomical sub-disciplines" ]
72,640,684
https://en.wikipedia.org/wiki/Gerel%20Ochir
Gerel Ochir (born 17 July 1941) is a Mongolian geologist. She specializes in petrology, geochemistry, and metallogeny. She has taught at the Mongolian University of Science and Technology for over 50 years and headed the Department of Geology for 30 years. After earning bachelor's and master's degrees in geology, geochemistry, and petrology from Charles University in Prague, she received her PhD and ScD through the Siberian Branch of the Russian Academy of Sciences. Ochir has served as vice president of the International Union of Geological Sciences and received the Jan Masaryk Medal in 2021. Early life and education Gerel Ochir was born in Moscow on 17 July 1941. She gained an interest in geology at the age of 10 after her mother gave her a book on geology by Russian geochemist Alexander Fersman. She graduated from secondary school in Ulaanbaatar in 1958. From 1959, Ochir attended Charles University in Prague. She earned a bachelor's degree in geology and petrography in 1964. She then spent a year with the Department of Geological Survey at the Central Geological Laboratory before she started teaching at the Mongolian State University (now Mongolian University of Science and Technology) in 1965. She later returned to Charles University, earning her RNDr. in geology and geochemistry in 1980. Ochir earned her PhD in petrology from the Irkutsk Institute of Geochemistry of the Siberian Branch of the Russian Academy of Sciences in 1978. Her thesis was on the "Petrology and geochemistry of granite with crystal-bearing pegmatites of Eastern Mongolia." Ochir earned her ScD in geochemistry, petrology, and metallogeny from the Vinogradov Institute of Geochemistry of the Russian Academy of Sciences in 1990. Career Ochir has been a professor at the Mongolian University of Science and Technology since 1965. She held the positions of assistant professor, associate professor and professor, teaching courses in petrology and petrography. She served as the head of the university's Department of Geology and Mineralogy from 1978 to 2009. She has also served as Director of the university's Geoscience Center since 2001. Ochir has carried out field and basic research work through joint expeditions of the Russian and Mongolian Academies of Sciences. She is the author of over 350 scientific publications and was the lead editor of the book Mineral Resources of Mongolia. Ochir served as vice president of the International Union of Geological Sciences for four years. She is an adjunct professor at the Institute of Mineral Resources of the Chinese Academy of Geological Sciences and a foreign member of the Russian Academy of Natural Sciences. Ochir is an Honoured Scientist of Mongolia. She was presented with the Jan Masaryk Medal by the Czech Ambassador in 2021. Personal life Ochir married a chemist and has one daughter. Selected publications References 1941 births Living people Mongolian geologists Women geologists Petrologists Scientists from Moscow Charles University alumni Geochemists Women geochemists Mongolian women academics
Gerel Ochir
[ "Chemistry" ]
610
[ "Geochemists", "Women geochemists" ]
72,640,713
https://en.wikipedia.org/wiki/Amanita%20ibotengutake
Amanita ibotengutake is a species of agaric fungus in the family Amanitaceae native to Japan. It was first described in 2002 as distinct on a genetic level from A. pantherina, and had earlier been classified under that name. The scientific name derives from the Japanese name of A. strobiliformis, ibotengutake (疣天狗茸, lit. "wart tengu mushroom"), which inspired the name of ibotenic acid. A. ibotengutake contains ibotenic acid and muscimol, rendering it toxic and psychoactive. References External links ibotengutake Fungi of Japan Fungi described in 2002 Fungus species
Amanita ibotengutake
[ "Biology" ]
146
[ "Fungi", "Fungus species" ]
72,640,730
https://en.wikipedia.org/wiki/Schizostoma%20laceratum
Schizostoma laceratum is a fungus in the family Agaricaceae. It was first described in 1829 by Christian Gottfried Ehrenberg as Tulostoma laceratum, and transferred to the genus Schizostoma in 1846 by Joseph-Henri Léveillé. References Secotioid fungi Agaricaceae Fungi described in 1829 Taxa named by Christian Gottfried Ehrenberg Fungus species
Schizostoma laceratum
[ "Biology" ]
88
[ "Fungi", "Fungus species" ]
72,640,914
https://en.wikipedia.org/wiki/Oliver%20Gilbert%20%28lichenologist%29
Oliver Gilbert (7 September 1936 – 15 May 2005) was an urban ecologist and lichenologist. He was a reader in landscape ecology at Sheffield University. He was one of the early users of lichens as indicators of air pollution, and also studied the ecology and diversity of wildlife in urban areas. Early life and education Oliver Lathe Gilbert and his twin brother Christopher were born in Lancaster. His parents were Ruth (née Ainsworth), who wrote books for children, and Frank Gilbert, managing director of Durham Chemicals. One of his uncles was the mycologist Geoffrey Clough Ainsworth. The family soon moved to London and he attended the private co-educational boarding school St George's School, Harpenden. As a child he became interested in plants and rock climbing. He studied botany at the University of Exeter and was especially interested in mosses and liverworts. He then studied fungal diseases of plants at Imperial College, London, and took up a post as deputy warden at Malham Tarn Field Studies Centre. Here, he was inspired by Arthur Edward Wade to study lichens. While employed at the University of Newcastle upon Tyne, he started research for a PhD degree on the subject of Biological Indicators of Air Pollution, which was awarded in 1970. Career In 1963 he was employed by the University of Newcastle upon Tyne as a demonstrator. He carried out research into the distribution and effects of air pollution on lichens and mosses and showed that their diversity declined moving from the countryside to industrial urban areas. He moved to the University of Sheffield as a lecturer in landscape ecology in 1968 and was promoted to reader in 1986. He retired in 1993 but continued as a part-time tutor until 2000. He learnt how to identify the lichen flora of the British Isles in the 1960s and went on many field visits to record more unusual species and their locations in more remote parts of the country. In 1970 he began a systematic survey of lichen in the Cheviots that lasted for several decades. He also led surveys of the lichen flora of several Scottish islands and mountains. He collaborated with Brian John Coppins, Alan Fryday and Vince Giavarini. Gilbert wrote a book about the efforts to find lichens in the British Isles. He also studied the urban ecology of Sheffield, identifying that many fig trees grew on the banks of the river Don as it passed through Sheffield, supported by the warm microclimate caused by industrial cooling water. He undertook research into ways to repair urban brownfield land to become a biodiverse habitat and was co-author of the book Habitat Creation and Repair, which was considered important for its philosophy and ethics as well as practical information. Awards and honours He was president of the British Lichen Society from 1976 until 1978 and editor of its bi-annual bulletin from 1980 until 1989. In 1997, he was made an honorary member of the society and in 2004 was awarded its Ursula Duncan Award in recognition of his outstanding contribution to the study of lichens in Britain. The Caledonian lichen Catillaria gilbertii was named in his honour by colleagues Alan Fryday and Brian John Coppins in 1996. They noted that the naming of this species, which produces twice the usual number of ascospores in its asci, was "particularly appropriate given the pre-disposition of the Gilbert family for producing twice the usual number of offspring at a time; Dr Gilbert himself is a twin and he also has twin daughters".
However, inheritance of a tendency to have non-identical twins is a maternal characteristic; there is no inheritance of a tendency to have identical twins. Personal life He married Daphne Broughton in 1969 and they had three children together, before the marriage was dissolved. Publications Gilbert was the author or co-author of over 150 scientific publications and several books. These included: Papers and book chapters Gilbert, O. L. (2000) Aquatic lichens. In: Lichen Atlas of the British Isles. Fascicle 5. Aquatic Lichens and Cladonia (Part 2) (M. R. D. Seaward, ed.) London, British Lichen Society. Gilbert, O. L. (1996) Retaining trees on construction sites. Arboricultural Journal 20 39–45. Gilbert, O. L., Fryday, A. J., Giavarini, V. J. & Coppins, B. J. (1992) The lichen vegetation of the Ben Nevis range. The Lichenologist 24 43–56. Gilbert, O. L., Fox, B. W. & Purvis, O. W. (1982) The lichen flora of a high-level limestone-epidiorite outcrop in the Ben Alder Range, Scotland. The Lichenologist 14 165–174. Wathern, P. & Gilbert, O. L. (1979) The production of grassland on subsoil. The Journal of Environmental Management 8 269–275. Gilbert, O. L., Earland-Bennett, P. & Coppins, B. J.(1978) Lichens of the sugar limestone refugium in Upper Teesdale. New Phytologist 80 403–408 Gilbert, O. L. (1975) Effects of air pollution on landscape and land use around Norwegian aluminium smelters. Environmental Pollution 8 113–121. Gilbert, O. L. (1974) Lichens and air pollution. In: The Lichens (V. Ahmadjian & M. E. Hale, eds): 443–472. New York and London: Academic Press. Gilbert, O. L. (1970) A biological scale for the estimation of sulphur dioxide pollution. New Phytologist 69 629–634. Gilbert, O. L. (1968) Bryophytes as indicators of air pollution in the Tyne Valley. New Phytologist 67 15–30 Books The Lichen Hunters (2004) Lichens (2000) in the Collins New Naturalist series number 86 Habitat Creation and Repair (1998) co-authored with Penny Anderson The Ecology of Urban Habitats (1989) A Lichen Flora of Northumberland (1988) See also :Category:Taxa named by Oliver Gilbert (lichenologist) References 1936 births 2005 deaths Alumni of Newcastle University Academics of the University of Sheffield British lichenologists Ecologists
Oliver Gilbert (lichenologist)
[ "Environmental_science" ]
1,291
[ "Ecologists", "Environmental scientists" ]
72,641,242
https://en.wikipedia.org/wiki/Millennial%20pause
The millennial pause is a pause in speaking at the start of some videos, especially in short-form content and on social media apps such as TikTok. The pause is generally ascribed to millennials, the generation of people born from the early-mid 1980s to mid-1990s. The phenomenon is an example of the digital generation gap between millennials and subsequent generations. Observation The term "millennial pause" is attributed to TikTok user nisipisa, a millennial who posted a TikTok video on 26 November 2021, pointing out that Taylor Swift, a millennial singer, includes such pauses at the start of her videos. Kate Lindsay of The Atlantic, a millennial, stated that this pause is becoming more noticeable as short-form videos are becoming more prevalent on the social network Instagram, instant messaging app Snapchat, and online video platform YouTube. Videos by people other than millennials have also been described as exhibiting a millennial pause; Parade reported that singer JC Chasez included one in his TikTok debut video, and James Factora of Them mentioned how actress Jennifer Coolidge included one in "a perfect TikTok" during her debut. Hypothesis It has been conjectured that the reason why people older than zoomers tend to include a pause at the start of their videos is to make sure that the device they are using is actually recording before beginning to say anything. In contrast, younger users either test the device before recording or trust that the devices are working correctly, and begin speaking immediately after the recording begins. Another theory is that the habit may have been adopted when earlier recording devices commonly took a split second before beginning to record. Although newer devices do not exhibit the same delay, this habit has proven hard to break. Gen Z shake On 18 January 2023, a Gen Z user of TikTok posted a video describing how members of Generation Z often start recording their videos right before placing their cameras on a stable surface. As a result, the video shakes at the start of these recordings before the camera is set down. Awareness Becoming aware of the phenomenon has made some millennials notice that they are "getting old". People have also noted that, once they have been made aware that their recordings include millennial pauses, they find their own habit embarrassing, yet still have trouble breaking the habit. Some people have stated that, without the pause, the start of their dialogue would be cut off. The phrase has been used untranslated outside of the Anglosphere, including in Brazil, Chile, Denmark, France, Germany, Indonesia, Italy, Mexico, the Netherlands, and Spain. See also Digital literacy Millennial economics Millennial politics Notes References External links 2020s in Internet culture Cultural generations Social impact Social influence Social media Social phenomena Video Pause
Millennial pause
[ "Technology" ]
563
[ "Computing and society", "Social media" ]
72,641,330
https://en.wikipedia.org/wiki/Coud%C3%A9%20Spectrograph
The Coudé Spectrograph was an instrument attached to the ESO 1.52-metre telescope, equipped with photographic plates as detectors. It had two cameras working at f/6 and f/14. Dispersions from 1 Å/mm to 18 Å/mm were available with a selection of three gratings, each with ruled areas of 20 × 30 cm. The Coudé Spectrograph was installed at the coudé focus of the ESO 1.52-metre telescope at the La Silla Observatory in May 1969. It was decommissioned from the ESO 1.52-metre telescope in the mid-1980s. The instrument takes its name from the coudé focus at which it was mounted, from the French coudé, meaning "bent like an elbow". References External links Diagram of a Coudé spectrograph. European Southern Observatory Telescopes
Coudé Spectrograph
[ "Astronomy" ]
162
[ "Telescopes", "Astronomical instruments" ]
72,643,890
https://en.wikipedia.org/wiki/Audi%20straight-five%20engine
The Audi straight-five engine is a series of four-stroke SOHC and DOHC five-cylinder engines, designed, developed and produced by German manufacturer Audi since 1976. The engines have also been used in various Volkswagen models, as part of the VAG partnership, as well as by Volvo, which used a few of these engines in its diesel models. History Diesel engines In 1978, the Audi 2.0 R5 D engine was introduced in the Audi 100 sedan. In 1983, a turbocharged version was introduced, initially for the U.S. market Audi 100. Several Volvo cars, from March 1996 to 2001, were produced with Audi straight-five diesel engines, prior to the introduction of the Volvo D5 turbo-diesel engine; this engine was produced from 2001 to 2017 and was used in several diesel hybrid applications (marketed as "twin engine" models). The Volkswagen Group's first TDI engine was introduced in the 1989 Audi 100 TDI sedan. The Audi 100 was powered by the Volkswagen 2.5 R5 TDI straight-five engine, which used an electronic distributor injection pump (called "Verteilerpumpe" by Volkswagen) and two-stage direct injection. The initial version of this engine generated its peak power at 3,250 rpm and its peak torque at 2,500 rpm. Gasoline engines The first production straight-five petrol engine was the Audi 2.1 R5 introduced in the Audi 100 in 1977. Audi has continued use of straight-five petrol engines (in both naturally aspirated and turbocharged versions) to the present day. The Audi TT RS and Audi RS3 currently use straight-five engines. In motorsport, the first car to use a straight-five engine was the Audi Quattro rally car; other racing cars which used straight-five engines include the 1985-1986 Audi Sport Quattro E2 and the 1989 Audi 90 Quattro IMSA GTO. For the 1987 season, the factory team tested a 1,000 hp version of the inline-five-powered Audi S1 Sport Quattro. References Volkswagen Group Straight-five engines Audi engines Volkswagen Group engines Gasoline engines by model Diesel engines by model Engines by model Piston engines Internal combustion engine
Audi straight-five engine
[ "Technology", "Engineering" ]
506
[ "Internal combustion engine", "Engines", "Engines by model", "Piston engines", "Combustion engineering" ]
72,645,041
https://en.wikipedia.org/wiki/Core-compact%20space
In general topology and related branches of mathematics, a core-compact topological space is a topological space whose partially ordered set of open subsets is a continuous poset. Equivalently, a space X is core-compact if it is exponentiable in the category Top of topological spaces. Expanding the definition of an exponential object, this means that for any space Y, the set C(X, Y) of continuous functions from X to Y can be given a topology such that a function Z × X → Y is continuous if and only if the corresponding function Z → C(X, Y) is continuous; in particular, function application (f, x) ↦ f(x) is then a continuous function from C(X, Y) × X to Y. When X is locally compact, this exponential topology is the compact-open topology. Another equivalent, more concrete, definition is that every open neighborhood U of a point contains an open neighborhood V of that point which is "way below" U, i.e. every open cover of U contains a finite subcollection covering V. As a result, every (weakly) locally compact space is core-compact, and every Hausdorff (or more generally, sober) core-compact space is locally compact, so the definition is a slight weakening of the definition of a locally compact space in the non-Hausdorff case. See also Locally compact space References Further reading Topology
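The exponentiability condition can be written out as the following natural bijection (standard category-theoretic notation, not specific to any one reference):

\mathrm{Top}(Z \times X,\, Y) \;\cong\; \mathrm{Top}(Z,\, Y^{X}),
\qquad f(z, x) = \tilde{f}(z)(x),

natural in Z and Y; a space X is exponentiable, hence core-compact, precisely when such an exponential object Y^X exists in Top for every Y.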
Core-compact space
[ "Physics", "Mathematics" ]
199
[ "Topology stubs", "Topology", "Space", "Geometry", "Spacetime" ]
72,645,136
https://en.wikipedia.org/wiki/Ren%C3%A9%20Peters%20%28chemist%29
René Peters (born August 26, 1971, in Simmerath) is a German chemist and since 2008 Professor of Organic Chemistry at the University of Stuttgart. Life and work Peters studied chemistry at RWTH Aachen University from 1992 to 1997 and subsequently received his doctorate under Dieter Enders in 2000. This was followed by a stay as a postdoc at Harvard University with Yoshito Kishi as a DAAD scholarship holder. Between 2001 and 2004 he worked as a process research chemist at F. Hoffmann-La Roche LTD (Basel). From 2004 to 2008, Peters was an assistant professor at ETH Zurich. Since 2008 he has been Professor of Organic Chemistry at the University of Stuttgart. The research team led by Peters is one of the leading groups in the field of cooperative asymmetric catalysis. The research centers on the development of bi- and polyfunctional catalysts whose mode of action is inspired by enzymes, although the structure of the artificial catalysts is much simpler than that of enzymes. In the Peters catalysts, a Lewis acid often cooperates with charged, non-metallic functionalities such as ammonium, pyridinium or other onium salts, betaine units and classical hydrogen-bond donors. Through the different catalyst functional groups, simultaneous activation and a precise spatial alignment of both reactants is often possible, so that high catalytic activity can be combined with very high stereocontrol. In addition to the development of catalysts for asymmetric catalysis, the Peters research group investigates their mechanistic mode of action in an interdisciplinary approach. The research group is also known for its development of planar chiral metallacycles, in which an intramolecular cooperation of two metal centers is often exploited. References External links René Peters - University of Stuttgart René Peters - Wiley online library René Peters - ORCID René Peters - Google Scholar René Peters - researchgate René Peters - research.com Living people 1971 births Academic staff of the University of Stuttgart German organic chemists RWTH Aachen University alumni Harvard University alumni Academic staff of ETH Zurich
René Peters (chemist)
[ "Chemistry" ]
426
[ "Organic chemists", "German organic chemists" ]
72,645,767
https://en.wikipedia.org/wiki/Intermittent%20water%20supply
A piped water supply and distribution system is intermittent when water is available for less than 24 hours a day or not on all days of the week. Alongside continuity, the defining factors of service are water pressure and equity. At least 45 countries have intermittent water supply (IWS) systems. It is contrasted with a continuous or "24/7" water supply, the service standard. No system is intentionally designed to be intermittent, but they may become that way because of system overexpansion, leakage and other factors. As of 2022, there was no feasible method for modelling IWS, including no computer-aided tools. Contamination issues can be associated with an intermittent water distribution system. Global public health impact includes millions of cases of infections and diarrhea, and 1,560 deaths annually. A continuous supply is not practical in all situations. In the short term, an IWS may have some benefits. These may include addressing demand with a limited supply in a more economical manner. An intermittent supply may be temporary (e.g., when water reserves are low) or permanent (e.g., where the piped system cannot sustain a continuous supply). Associated factors resulting from an intermittent supply include simultaneous water extraction by many users, resulting in low pressure and a possibly higher peak demand. Prevalence A large share of water supply systems around the world are intermittent; in other words, intermittent water supply is the norm. About 1.3 billion people have a piped supply that is intermittent, including large populations in Africa, Asia, and Latin America. This does not include those who do not get piped water at all, about 2.7 billion people. Countries with intermittent supply in some areas and continuous supply in others include India and South Africa. In India, various cities are at various stages of constructing 24/7 supply systems, such as Chandigarh, Delhi, Shimla, and Coimbatore. In Cambodia, Phnom Penh increased coverage from 25% to 85% and duration from 10 to 24 hours a day between 1993 and 2004. Storage Installation of storage and pumps at residences may offset the intermittency of the water supply. Roof tanks are a common feature in countries where the water supply is intermittent. In Jordan, most houses have one or more ground or roof tanks. An intermittent supply can be supplemented with other non-piped sources such as packaged drinking and cooking water bought from local shops or delivered to the house. See also Water scarcity References Bibliography Citations Further reading Water supply Water management Water supply infrastructure Civil engineering Environmental engineering Hydraulic engineering Mechanical engineering Public health
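As an illustration of why households pair storage with an intermittent supply, a back-of-the-envelope tank-sizing calculation under a deliberately simple assumption (uniform consumption over the day; all figures are made-up placeholders, not data from this article):

```python
def required_tank_volume(daily_demand_l: float,
                         supply_hours_per_day: float,
                         fill_rate_l_per_h: float) -> float:
    """Litres a roof tank must hold so a household can draw its daily
    demand even though the mains only flow part of the day."""
    inflow = supply_hours_per_day * fill_rate_l_per_h  # water obtainable per day
    usable = min(daily_demand_l, inflow)               # demand is supply-limited
    # Storage must bridge consumption during the hours with no mains flow.
    return usable * (24 - supply_hours_per_day) / 24

# e.g. 600 L/day demand, 4 h of supply per day at 300 L/h:
print(required_tank_volume(600, 4, 300))  # 500.0 litres of storage needed
```

Real sizing also has to account for pressure, simultaneous neighbourhood demand, and irregular supply schedules, which is part of why modelling intermittent systems is noted above as an open problem.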
Intermittent water supply
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
539
[ "Hydrology", "Applied and interdisciplinary physics", "Chemical engineering", "Water supply", "Physical systems", "Construction", "Hydraulics", "Civil engineering", "Mechanical engineering", "Environmental engineering", "Hydraulic engineering" ]
72,646,213
https://en.wikipedia.org/wiki/Ameca%20%28robot%29
Ameca is a female robotic humanoid created in 2021 by Engineered Arts. History The first generation of Ameca was developed at Engineered Arts headquarters in Falmouth, Cornwall, United Kingdom. The project started in February 2021, with the first video revealed publicly on 1 December 2021. Ameca gained widespread attention on Twitter and TikTok ahead of her first public demonstration at the Consumer Electronics Show 2022, where she was covered by CNET and other news outlets. In 2022, Ameca presented an Alternative Christmas message for British TV Channel 4 on Christmas Day. Ameca was associated with the Museum of the Future's robotic family, where she could interact with visitors. In 2024, Ameca was moved to Edinburgh in the UK to reside at the National Robotarium. Features Ameca is designed as a platform for further developing robotics technologies involving human-robot interaction. It utilizes embedded microphones, binocular eye-mounted cameras, a chest camera and facial recognition software to interact with the public. Interactions can be governed by either OpenAI's GPT-3 or human telepresence. She also features articulated motorized arms, fingers, neck and facial features. Ameca's appearance features grey rubber skin on the face and hands, and is specifically designed to appear genderless. Public appearances Computer History Museum, California Copernicus Science Center, Warsaw, Poland Museum of the Future, Dubai Consumer Electronics Show 2022 Deutsches Museum Nuremberg OMR Festival 2022, hosted by Vodafone GITEX 2022 International Conference on Robotics and Automation 2023 International Telecommunication Union AI for Good Global Summit 2023 Sphere References 2021 robots Artificial intelligence Humanoid robots Robotics Social robots
Ameca (robot)
[ "Technology", "Engineering" ]
335
[ "Social robots", "Computing and society", "Robotics", "Automation" ]
72,646,521
https://en.wikipedia.org/wiki/Levitated%20optomechanics
Levitated optomechanics is a field of mesoscopic physics which deals with the mechanical motion of mesoscopic particles which are optically, electrically or magnetically levitated. Through the use of levitation, it is possible to decouple the particle's mechanical motion exceptionally well from the environment. This in turn enables the study of high-mass quantum physics, out-of-equilibrium and nano-thermodynamics, and provides the basis for precise sensing applications. Motivation In order to use mechanical oscillators in the regime of quantum physics or for sensing applications, low damping of the oscillator's motion and thus high quality factors are desirable. In nano- and micromechanics, the Q-factor of a system is often limited by its suspension, which usually demands filigree structures. Nevertheless, the maximally achievable Q-factor usually correlates with the system's size, requiring large systems for achieving high Q-factors. Particle levitation in external fields can alleviate this constraint. This is one of the reasons why the field of levitated optomechanics has become attractive for research on the foundations of physics and for high-precision applications. Physical basics The interaction between a dielectric particle with polarizability α and an electric field E is given by the gradient force F = (α/2)∇E². When a particle is trapped and optically levitated in the focus of a Gaussian laser beam, the force can be approximated to first order by F(x) ≈ −kx with a trap stiffness k, i.e. a harmonic oscillator with frequency Ω₀ = √(k/m), where m is the particle's mass. Including passive damping, active external feedback and coupling results in the Langevin equation of motion: ẍ(t) + γẋ(t) + Ω₀²x(t) = (1/m)[F_fluct(t) + F_fb(t) + F_coupl(t)]. Here γ is the total damping rate, which usually has two dominant contributions: collisions with atoms or molecules of the background gas, and photon shot noise, which becomes dominant below pressures on the order of 10⁻⁶ mbar. The coupling term F_coupl allows one to model any coupling to an external heat bath. The external feedback F_fb is usually used to cool and control the particle motion. The approximation of a classical harmonic oscillator holds true until one reaches the regime of quantum mechanics, where the quantum harmonic oscillator is the superior approximation and the quantization of the energy levels becomes apparent. The QHO has a ground state of lowest energy where both position and velocity have a minimal variance, determined by the Heisenberg uncertainty principle. Such quantum states are interesting starting conditions for preparing non-Gaussian quantum states, quantum enhanced sensing, matter-wave interferometry or the realization of entanglement in many-particle systems. Methods of cooling Parametric feedback cooling and cold damping The idea of feedback cooling is to apply a position- and/or velocity-dependent force on the particle in a way which produces a negative feedback loop. One way to achieve that is by adding a feedback term which is proportional to the particle's velocity (F_fb = −mγ_fb ẋ). Since that mechanism provides damping, which cools down the mechanical motion, without the introduction of fluctuations, it is referred to as "cold damping". The first experiment employing this type of cooling was done in 1977 by Arthur Ashkin, who received the 2018 Nobel Prize in Physics for his pioneering work on trapping with optical tweezers. Instead of applying a linear feedback signal, one can also combine position and velocity via x·ẋ to get a signal with twice the frequency of the particle's oscillation.
This way the stiffness of the trap increases when the particle moves out of the trap and decreases when the particle is moving back. Cavity-enhanced Sisyphus cooling Coherent scattering cavity cooling References Mesoscopic physics Quantum mechanics
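To illustrate the cold-damping scheme described above, here is a minimal numerical sketch (all parameter values, such as the particle mass, trap frequency, and feedback gain, are illustrative assumptions rather than values from any specific experiment): it integrates the Langevin equation with a velocity-proportional feedback force and estimates the resulting effective temperature.

```python
import numpy as np

# Illustrative parameters (hypothetical values, not from a real setup)
m = 1e-18                     # particle mass [kg]
omega0 = 2 * np.pi * 100e3    # trap frequency [rad/s]
gamma = 2 * np.pi * 10.0      # gas damping rate [rad/s]
g_fb = 2 * np.pi * 1e3        # feedback ("cold damping") gain [rad/s]
kB, T = 1.380649e-23, 300.0   # Boltzmann constant, bath temperature

dt, n_steps = 1e-8, 200_000
# Discrete thermal force from the fluctuation-dissipation theorem
sigma_F = np.sqrt(2 * m * gamma * kB * T / dt)

rng = np.random.default_rng(0)
x, v = 0.0, 0.0
energies = []
for _ in range(n_steps):
    F_fluct = sigma_F * rng.standard_normal()
    F_fb = -m * g_fb * v          # cold damping: force opposes velocity
    a = -omega0**2 * x - gamma * v + (F_fluct + F_fb) / m
    v += a * dt                   # semi-implicit (symplectic) Euler step
    x += v * dt
    energies.append(0.5 * m * v**2 + 0.5 * m * omega0**2 * x**2)

# For a 1D oscillator, mean energy ~ kB * T_eff in steady state
T_eff = np.mean(energies[n_steps // 2:]) / kB
print(f"effective temperature ~ {T_eff:.1f} K (bath: {T} K)")
```

With these numbers the expected steady-state effective temperature is roughly T·γ/(γ + g_fb), i.e. about two orders of magnitude below the bath temperature, which the simulated average should approximately reproduce.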
Levitated optomechanics
[ "Physics", "Materials_science" ]
744
[ "Condensed matter physics", "Theoretical physics", "Mesoscopic physics", "Quantum mechanics" ]
72,646,600
https://en.wikipedia.org/wiki/AFt%20phases
AFt Phases refer to the calcium Aluminate Ferrite trisubstituted, or calcium aluminate trisubstituted, phases present in hydrated (or hardened) cement paste (HCP) in concrete. AFm and AFt phases in cement hydration products Calcium aluminates can form complex salts in combination with different types of anions. Two series of calcium aluminates are known in cement chemistry: AFm and AFt phases, being respectively mono- or tri-substituted with a given divalent anion X (e.g. SO₄²⁻ or CO₃²⁻, or hosting a divalent impurity), or with two units of a monovalent anion (e.g. OH⁻ or Cl⁻, or hosting a monovalent impurity). Their general formulas are respectively C3(A,F)·CaX·nH2O and C3(A,F)·3CaX·nH2O, in which n, the number of water molecules present in the hydrate, is in the range 10 to 12 for the AFm phases, and around 32 for the AFt phases. AFt phases AFt phases are important hydration products of the cement clinker. They are crystalline hydrates with generic formula C3(A,F)·3CaX·nH2O when also taking into account the possible substitution of the aluminium ion (Al³⁺) by the ferric ion (Fe³⁺). They are formed when tricalcium aluminate (Ca3Al2O6, also noted C3A) reacts with a dissolved calcium salt (CaX). X is most commonly sulfate (SO₄²⁻), but can also be carbonate (CO₃²⁻), giving rise to sulfoaluminate or carboaluminate, respectively. Tri-sulfoaluminate The main and best known AFt phase is ettringite, also existing as a natural mineral which was first described in 1874 by J. Lehmann, for an occurrence near the Ettringer Bellerberg Volcano, Ettringen, Rheinland-Pfalz, Germany. Ettringite is also known as 'Candlot's salt' in honor of the pioneering work of the French chemist Édouard Candlot (1858–1922) who studied cement hydration and discovered calcium sulfo-aluminates. Henri Louis Le Chatelier also identified AFt phases when he investigated cement hydration products after Candlot's discovery. In the years 1930–1940, the CaO–Al2O3–CaSO4–H2O system at 25 °C was studied in detail by Jones. In concrete chemistry, ettringite is a hexacalcium aluminate trisulfate hydrate, of general formula Ca6Al2(SO4)3(OH)12·26H2O, or 3CaO·Al2O3·3CaSO4·32H2O, also abbreviated as C6AS̄3H32 in cement chemist notation (where S̄ denotes SO3). Ettringite is formed in the hydrated Portland cement system as a result of the reaction of tricalcium aluminate (C3A) with calcium sulfate, both present in Portland cement. Ettringite, the most prominent representative of the AFt phases (Al2O3–Fe2O3–tri), can also be directly synthesized in aqueous solution by reacting stoichiometric amounts of calcium oxide, aluminium oxide, and sulfate. In the cement system, the presence of ettringite depends on the ratio of calcium sulfate to tri-calcium aluminate (CaSO4/C3A); when this ratio is low, ettringite forms during early hydration and then converts to the corresponding calcium aluminate monosulfate (AFm phase, C4AS̄H12). When the ratio is intermediate, only a portion of the ettringite converts to AFm and both can coexist, while ettringite is unlikely to convert to AFm at high ratios. Tri-carboaluminate Jones (1938) reported the existence and some optical characteristics of AFt and AFm calcium carboaluminate hydrate phases. Feldman et al. (1965) have shown that the hydration reaction of C3A can be suppressed by calcite (CaCO3) additions and that this is primarily due to the formation of calcium carboaluminate onto the surface of the C3A grains. The mechanism is the same as with the sulfate ion released by the more soluble gypsum added to the cement clinker during its milling to prevent the flash setting of concrete. According to Carlson and Berman (1960) and Klieger (1990), carbonate-AFm, monocarbonate and hemicarbonate, are more stable than carbonate-AFt, the tricarboaluminate. 
Lothenbach and Winnefeld's (2006) thermodynamic calculations also indicate that in the presence of small amounts of calcite, calcium aluminate monocarbonates (AFm) are present amongst the main common hydration products of ordinary Portland cement, along with the other hydrates, C-S-H, portlandite, and ettringite. The tricarboaluminate (C6AC̄3H32, where C̄ denotes CO2) is the carbonate equivalent of ettringite (C6AS̄3H32). It has an isomorphous structure and the same number of hydration water molecules: 32 H2O. The corresponding monocarboaluminate (C4AC̄H11) is less hydrated: 11 water molecules in place of 12 for the monosulfoaluminate. Reactivity of C3A during clinker hydration Amongst the four main phases of the cement clinker (alite: C3S, belite: C2S, tricalcium aluminate: C3A, and the tetracalcium alumino ferrite: C4AF), the tricalcium aluminate (C3A: Ca3Al2O6) is the most reactive phase and its hydration reaction is also the most exothermic. If nothing is done to control and slow down the hydration rate, concrete can be easily subjected to flash setting, especially if the ambient temperature is elevated during the summer. This is why a small addition of gypsum (CaSO4·2H2O) is made to the clinker during the grinding process to manufacture cement. During the making of concrete, gypsum dissolves on contact with water, freeing up Ca²⁺ and SO₄²⁻ ions in solution. These ions react with aluminate ions present at the surface of the hydrating grains, forming a thin impervious coating of less soluble ettringite according to the following reaction: C3A + 3CaSO4·2H2O + 26H2O → C6AS̄3H32 (ettringite). As a consequence, the surface of C3A particles undergoes some passivation, becomes less accessible, and the hydration reaction slows down. A similar effect, although less pronounced, can also be obtained in the presence of the less soluble calcium carbonate (CaCO3). Conversion of AFt-sulfate ⇌ AFm-sulfate In the absence of an external source of sulfate ions, the transformation of ettringite (AFt) into AFm and vice versa depends both on temperature and on pH conditions (OH⁻ concentration) in the concrete pore water. Different mechanisms can co-exist and can explain these transformations. A first mechanism can directly depend on the respective thermodynamic stability (solubility: dissolution-precipitation reactions) of each phase, as a function of temperature and pH. Another mechanism can depend on the sorption of sulfate anions onto the C-S-H phases. The sorption of SO₄²⁻ onto C-S-H increases with temperature and pH. At high T, or high pH, because of this sorption, the concrete pore water is depleted in sulfate anions and AFm preferentially forms with respect to AFt (ettringite). When T, or pH, decreases due to the cooling of concrete after setting and hardening, or because of the leaching of the concrete structure by water (immersed concrete component, such as a pile of a bridge), the sulfate ions physically adsorbed onto C-S-H are released (desorption process) into the concrete pore water and become available for the slow crystal growth of AFt (ettringite). Other mechanisms have also been proposed in the literature, such as the slow and delayed release of sulfate ions by the clinker. Oxidation of iron(II) sulfides, such as pyrite (FeS2) or pyrrhotite (Fe(1−x)S), sometimes present in construction aggregates, can also represent an additional internal source of sulfate in concrete. One mechanism does not necessarily exclude the others, as all can co-exist. 
While external sulfate attack (ESA) is relatively well understood, for internal sulfate attack (ISA) (delayed ettringite formation, DEF) there is no unanimity on the relative importance of the main mechanism at work inside concrete, and the question is still debated. Effect of temperature At temperatures lower than 65 °C, ettringite (AFt) is less soluble than AFm and therefore preferentially precipitates. Around 65 °C, the solubilities of ettringite and AFm are similar (in fact, this also depends on the pH value of the concrete pore water, as explained further in the next section). Above 65 °C, ettringite is more soluble than AFm and the less soluble AFm phase preferentially precipitates. If concrete is poured in hot summer weather, or if the concrete component or structure is massive and its internal temperature exceeds 65 °C, ettringite does not form, but only AFm. While concrete sets and hardens, it cools back down to ambient temperature. During the months, or years, after its placement, concrete is subject to slow chemical reactions accompanied by mineral phase transformations and volumetric changes. Back at ambient temperature, AFm becomes more soluble than ettringite and slowly dissolves while ettringite slowly crystallizes. This slow conversion reaction is known under the name of Delayed Ettringite Formation (DEF) and can be schematically expressed as: C4AS̄H12 + 2Ca²⁺ + 2SO₄²⁻ + 20H2O → C6AS̄3H32 (delayed ettringite formation: DEF reaction). Ettringite occupies a larger volume than the AFm phase and crystallizes in the form of acicular needles. This reaction is expansive and can cause a huge crystallization pressure in the small concrete pores once they are totally filled by the growing ettringite crystals. As a consequence, the hardened cement paste (HCP) is subjected to an important tensile stress and starts to crack because of the internal expansion of the concrete matrix. Contrary to the primary ettringite initially formed when concrete is still in the plastic state before hardening, the DEF reaction occurring in hardened concrete can be very harmful for concrete structures and components, potentially compromising their structural integrity and stability. Ultimately, DEF can cause the ruin of concrete structures. Effect of pH At high pH, in the presence of dissolved aluminates and calcium ions, ettringite is transformed into AFm-sulfate. The conversion of AFt into AFm at high pH can be schematically summarized as: C6AS̄3H32 + 2C3A + 4H2O → 3C4AS̄H12. The reverse reaction can also occur when concrete with a high alkali content (NaOH/KOH, expressed as Na2O equivalent, Na2Oeq) is leached by water: AFm converts back into AFt or ettringite. The slow crystal growth of small needles of ettringite in the concrete pores can exert an important crystallization pressure inside the concrete matrix. This reaction is expansive and can be very damaging for concrete structures and components. To minimize the risk of DEF in massive concrete structures continuously immersed in water and subject to alkali leaching (bridge piles, locks, sluices, dams), a low alkali content is therefore also a desirable characteristic for the selected sulfate resisting (SR) cement. Internal and external sulfate attacks of concrete The formation of ettringite in the hardened cement paste is an internal expansive chemical reaction that can lead to severe degradation of concrete. 
The two main classical forms are the internal sulfate attack (ISA), also known as delayed ettringite formation (DEF), already described above, and the external sulfate attack (ESA), when concrete is exposed to an external source of sulfate, such as dissolved sulfate sometimes directly present in soils or aquifers, or produced by pyrite oxidation (see acid mine drainage). In ESA, or when pyrite oxidation is also involved in contaminated aggregates, besides ettringite crystallization, a lot of gypsum (CaSO4·2H2O) can also be formed in the ultimate degradation stage. A less common, but very severe, form of ESA is the thaumasitic form of sulfate attack (TSA), when concrete is exposed to an external source of sulfate in the concomitant presence of carbonate (CO₃²⁻) ions or dissolved CO2. It preferentially occurs in clay formations exposed to air oxygen by excavation works and in which pyrite has been oxidized. The sulfuric acid (H2SO4) produced by pyrite oxidation also dissolves carbonates present in the surrounding clay, or directly in the concrete aggregates, freeing up the two main ingredients necessary for this very deleterious pathology. In contrast to conventional ESA, no expansive phase such as ettringite needles forms, but thaumasite (Ca3Si(OH)6(CO3)(SO4)·12H2O), which consumes the calcium silicate hydrates (C-S-H, the "glue" of the hardened cement paste), ultimately leading to the decohesion of the cement paste. In the most severe cases, concrete suffering TSA can be dug with a simple shovel, or even by hand. Thaumasite forms a continuous solid-solution series with ettringite (AFt). Indeed, as is more easily observable in another, but equivalent, expression of its chemical composition, CaSiO3·CaCO3·CaSO4·15H2O, its crystal lattice is isostructural with that of ettringite, and an unusually hexacoordinated silicate anion can take the place of the aluminate anions in octahedral position, as long as another ion compensates for the difference in electrical charge. Although thaumasite can develop alone at low temperature in the absence of ettringite (AFt), thus even in sulfate-resisting cement (SR0 cement without C3A), the presence of ettringite (AFt) acts as a scaffold (template, chaperone) for thaumasite crystallization and therefore favors its formation by hetero-epitaxy. Prevention of sulfate attacks To minimize, and ideally to avoid, delayed ettringite formation (DEF; synonym: internal sulfate attack, ISA), several precautions can be taken: Maintaining the maximal temperature inside the concrete below 65 °C; but as this critical threshold temperature also depends on the pH of the concrete pore water, it is advised not to exceed 60 °C. Low-heat Portland cements with a coarse granulometry, or better, the choice of an appropriate metallurgic cement with a high content in blast furnace slags (BFS), and therefore a low content in Portland clinker, can be a way to keep the temperature sufficiently low. Metallurgic cements are latent hydraulic binders and slow-setting cements. They have the double advantage of producing less heat and also spreading their heat production over a longer time period. If the ambient temperature is too high in the summer, no concrete casting is allowed, or special precautions need to be taken, such as making concrete with ice and cold aggregates in desert conditions. Selecting a cement with a low alkali content (Na2Oeq < 0.60 wt. % according to the European cement norm EN 197). Choosing a sulfate-resisting (SR) cement with a low C3A content (< 3 wt. 
% according to the European cement norm EN 197), or with the lowest possible C3A content (ideally 0 wt. %). For massive concrete structures whose temperature at core will likely exceed 65 °C during the cement setting and hardening, and which moreover will be immersed under water or exposed to alkali leaching, the only effective solution to avoid, or to minimize, delayed ettringite formation is to eliminate the tricalcium aluminate phase in the cement clinker, or to drastically reduce its content. Concomitantly, it will also make it possible to limit the quantity of gypsum added to the clinker during its grinding to avoid the risk of cement flash setting due to the very exothermic hydration reaction of C3A. Eliminating the root cause of DEF by removing C3A, the main culprit phase present in the cement, is the only way to completely get rid of DEF (ISA). The same precaution also applies to the prevention of the conventional external sulfate attack (ESA) when concrete is exposed to an external source of sulfate in the presence of water. However, this precaution is insufficient to completely eliminate the risk of formation of thaumasite (TSA), because the latter can still develop, although with more difficulty, in concrete made with sulfate-resistant cement. Finally, as a last resort, but challenging for immersed concrete structures, when feasible to prevent the development of expansive internal reactions harmful for concrete, it is always advisable to minimize the contact of concrete with water. See also References Further reading Brown, P.W.; Taylor, H.F.W. (1999). The role of ettringite in external sulfate attack. Sulfate Attack Mechanisms, Materials Science of Concrete. Amer. Ceramic Society, Ohio, 73–97. External links Aluminium compounds Cement Concrete Hydrates Iron compounds Iron(III) compounds Silicates Sulfate minerals Sulfates
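As a quick arithmetic check of the ettringite-formation reaction given above (C3A + 3CaSO4·2H2O + 26H2O → ettringite), here is a minimal sketch that verifies the element balance. The reaction itself is standard cement chemistry; the script and its hand-written element tallies are purely illustrative.

```python
from collections import Counter

# Element counts per formula unit, written out by hand for clarity
C3A    = Counter({"Ca": 3, "Al": 2, "O": 6})          # Ca3Al2O6
gypsum = Counter({"Ca": 1, "S": 1, "O": 6, "H": 4})   # CaSO4·2H2O
water  = Counter({"H": 2, "O": 1})                    # H2O
# Ettringite Ca6Al2(SO4)3(OH)12·26H2O
ettringite = Counter({"Ca": 6, "Al": 2, "S": 3,
                      "O": 12 + 12 + 26, "H": 12 + 52})

def combine(*terms):
    """Sum (coefficient, formula) pairs into one element tally."""
    total = Counter()
    for coeff, formula in terms:
        for element, count in formula.items():
            total[element] += coeff * count
    return total

lhs = combine((1, C3A), (3, gypsum), (26, water))
rhs = combine((1, ettringite))
print("balanced:", lhs == rhs)   # expected: True
```

The same pattern can be reused to check the DEF and AFt/AFm conversion reactions quoted above by swapping in the corresponding formulas.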
AFt phases
[ "Chemistry", "Engineering" ]
3,511
[ "Structural engineering", "Sulfates", "Hydrates", "Salts", "Concrete" ]
72,649,767
https://en.wikipedia.org/wiki/Water%20clarity
Water clarity is a descriptive term for how deeply visible light penetrates through water. In addition to light penetration, the term water clarity is also often used to describe underwater visibility. Water clarity is one way that humans measure water quality, along with oxygen concentration and the presence or absence of pollutants and algal blooms. Water clarity governs the health of underwater ecosystems because it impacts the amount of light reaching the plants and animals living underwater. For plants, light is needed for photosynthesis. The clarity of the underwater environment determines the depth ranges where aquatic plants can live. Water clarity also impacts how well visual animals like fish can see their prey. Clarity affects the aquatic plants and animals living in all kinds of water bodies, including rivers, ponds, lakes, reservoirs, estuaries, coastal lagoons, and the open ocean. Water clarity also affects how humans interact with water, from recreation and property values to mapping, defense, and security. Water clarity influences human perceptions of water quality, recreational safety, aesthetic appeal, and overall environmental health. Tourists visiting the Great Barrier Reef have been shown to be willing to pay for improved water clarity because it increases recreational satisfaction. Water clarity also influences waterfront property values. In the United States, a 1% improvement in water clarity increased property values by up to 10%. Water clarity is needed to visualize targets underwater, either from above or within the water. These applications include mapping and military operations. To map shallow-water features such as oyster reefs and seagrass beds, the water must be clear enough for those features to be visible to a drone, airplane, or satellite. Water clarity is also needed to detect underwater objects such as submarines using visible light. Water clarity measurements Water clarity is measured using multiple techniques. These measurements include: Secchi depth, light attenuation, turbidity, beam attenuation, absorption by colored dissolved organic matter, the concentration of chlorophyll-a pigment, and the concentration of total suspended solids. Clear water generally has a deep Secchi depth, low light attenuation (deeper light penetration), low turbidity, low beam attenuation, and low concentrations of dissolved substances, chlorophyll-a, and/or total suspended solids. More turbid water generally has a shallow Secchi depth, high light attenuation (less light penetration to depth), high turbidity, high beam attenuation, and high concentrations of dissolved substances, chlorophyll-a, and/or total suspended solids. Overall general metrics Secchi depth Secchi depth is the depth at which a disk is no longer visible to the human eye. This measurement was created in 1865 by Angelo Secchi and represents one of the oldest oceanographic methods. To measure Secchi depth, a white or black-and-white disk is mounted on a pole or line and lowered slowly into the water. The depth at which the disk is no longer visible is taken as a measure of the transparency of the water. Secchi depth is most useful as a measure of transparency or underwater visibility. Light attenuation The light attenuation coefficient – often shortened to "light attenuation" – describes the decrease in solar irradiance with depth. To calculate this coefficient, light energy is measured at a series of depths from the surface to the depth of 1% illumination. 
Then, the exponential decline in light is calculated using Beer's law with the equation Iz = I0·e^(−kz), where k is the light attenuation coefficient, Iz is the intensity of light at depth z, and I0 is the intensity of light at the ocean surface. Solving for the coefficient, this translates to k = (1/z)·ln(I0/Iz). This measurement can be done for specific colors of light or more broadly for all visible light. The light attenuation coefficient of photosynthetically active radiation (PAR) refers to the decrease in all visible light (400–700 nm) with depth. Light attenuation can be measured as the decrease in downwelling light (Kd) or the decrease in scalar light (Ko) with depth. Light attenuation is most useful as a measure of the total underwater light energy available to plants, such as phytoplankton and submerged aquatic vegetation. Turbidity Turbidity is a measure of the cloudiness of water based on light scattering by particles at a 90-degree angle to the detector. A turbidity sensor is placed in water with a light source and a detector at a 90-degree angle to one another. The light source is usually red or near-infrared light (600–900 nm). Turbidity sensors are also called turbidimeters or nephelometers. In more turbid water, more particles are present in the water, and more light scattering by particles is picked up by the detector. Turbidity is most useful for long-term monitoring because these sensors are often low cost and sturdy enough for long deployments underwater. Beam attenuation Beam attenuation is measured with a device called a transmissometer that has a light source at one end and a detector at the other end, in one plane. The amount of light transmitted to the detector through the water is the beam transmission, and the amount of light lost is the beam attenuation. Beam attenuation is essentially the opposite of light transmission. Clearer water with a low beam attenuation coefficient will have high light transmission, and more turbid water with a high beam attenuation coefficient will have low light transmission. Beam attenuation is used as a proxy for particulate organic carbon in oligotrophic waters like the open ocean. Concentration-based metrics Colored dissolved organic matter (CDOM) absorption Colored dissolved organic matter (CDOM) absorbs light, making the water appear darker or tea-colored. Absorption by CDOM is one measure of water clarity. Clarity can still be quite high in terms of visibility with high amounts of CDOM in the water, but the color of the water will be altered to yellow or brown, and the water will appear darker than water with low CDOM concentrations. CDOM absorbs blue light more strongly than other colors, shifting the color of the water toward the yellow and red part of the visible light spectrum as the water gets darker. For example, in lakes with high CDOM concentrations, the bottom of the lake may be clearly visible to the human eye, but a white surface in the same lake water may appear yellow or brown. Total suspended solids (TSS) concentration Total suspended solids (TSS) concentration is the concentration (dry weight mass per unit volume of water) of all the material in water that is caught on a filter, usually a filter with about a 0.7 micrometer pore size. This includes all the particles suspended in water, such as mineral particles (silt, clay), organic detritus, and phytoplankton cells. Clear water bodies have low TSS concentrations. Other names for TSS include total suspended matter (TSM) and suspended particulate matter (SPM). 
The term suspended sediment concentration (SSC) refers to the mineral component of TSS but is sometimes used interchangeably with TSS. If desired, the concentrations of volatile (organic) and fixed (inorganic) suspended solids can be separated out using the loss-on-ignition method by burning the filter in a muffle furnace to burn off organic matter, leaving behind ash including mineral particles and inorganic components of phytoplankton cells, with TSS = volatile suspended solids + fixed suspended solids. Chlorophyll-a concentration Chlorophyll-a concentration is sometimes used to measure water clarity, especially when suspended sediments and colored dissolved organic matter concentrations are low. Chlorophyll-a concentration is a proxy for phytoplankton biomass, which is one way to quantify how turbid the water is due to biological primary production. Chlorophyll-a concentration is most useful for research on primary production, the contribution of phytoplankton to light attenuation, and harmful algal blooms. Chlorophyll-a concentration is also useful for long-term monitoring because these sensors are often low cost and sturdy enough for long deployments underwater. Case studies High water clarity The clearest waters occur in oligotrophic ocean regions such as the South Pacific Gyre, tropical coastal waters, glacially-formed lakes with low sediment inputs, and lakes with some kind of natural filtration occurring at the inflow point. Blue Lake in New Zealand holds the record for the highest water clarity of any lake, with a Secchi depth of 70 to 80 meters (230 to 260 feet). Blue Lake is fed by an underground passage from a nearby lake, which acts as a natural filter. Some other very clear water bodies are Lake Tahoe between California and Nevada in the United States, Lake Baikal in Russia, and Crater Lake in Oregon in the United States. In tropical coastal waters, the water is clear thanks to low nutrient inputs, low primary production, and coral reefs acting as a natural buffer that keeps sediments from getting resuspended. The clearest recorded water on Earth is either Blue Lake, New Zealand or the Weddell Sea near Antarctica, both of which claim Secchi depths of 70 to 80 meters (230 to 260 feet). Low water clarity Very low water clarity can be found where high loads of suspended sediments are transported from land. Some examples are estuaries where rivers with high loads of sediments empty into the ocean. One example is the Río de la Plata, an estuary in South America between Uruguay and Argentina where the Uruguay River and the Parana River empty into the Atlantic ocean. The Río de la Plata shows long-term mean TSS concentrations between 20 and 100 grams per cubic meter, higher than most estuaries. Another example is the gulf coast of North America where the Mississippi River meets the Gulf of Mexico. Turbid water from snowmelt and rain washes high loads of sediment downstream each spring, creating a sediment plume and making the water clarity very low. Water bodies can also experience low water clarity after extreme events like volcanic eruptions. After the eruption of Mount St. Helens, the water of Spirit Lake, Washington was darkened by decaying trees in the lake and had a Secchi depth of only 1 to 2 centimeters. Water clarity vs. water quality Water clarity is more specific than water quality. The term "water clarity" more strictly describes the amount of light that passes through water or an object's visibility in water. 
The term "water quality" more broadly refers to many characteristics of water, including temperature, dissolved oxygen, the amount of nutrients, or the presence of algal blooms. How clear the water appears is only one component of water quality. An underwater ecosystem can have high water clarity yet low water quality, and vice versa. Scientists have observed that many lakes are becoming less clear while also recovering from acid rain. This phenomenon has been seen in the northeastern United States and northern Europe. In the past, some lakes were ecologically bare, yet clear, while acidity was high. In recent years, as acidity is reduced and watersheds become more forested, many lakes are less clear but also ecologically healthier with higher concentrations of dissolved organic carbon and more natural water chemistry. See also Beer-Lambert law Color of water Forel-Ule scale Ocean color Ocean optics Secchi disk Turbidity Visibility Water quality References External links Water quality indicators
Water clarity
[ "Chemistry", "Environmental_science" ]
2,335
[ "Water quality indicators", "Water pollution" ]
72,650,058
https://en.wikipedia.org/wiki/Markus%20Ralser
Markus Ralser (born 3 April 1980 in Vipiteno, Italy) is an Italian biologist. His main research interest is the metabolism of microorganisms. He is also known for his work on the origin of metabolism during the origin of life, and on proteomics. Life and career Ralser has served since 2019 as head of the Institute of Biochemistry at the Charité – Universitätsmedizin Berlin, Germany, and since 2022 as group leader at the University of Oxford, UK. He studied genetics and molecular biology in Salzburg, Austria. He completed his PhD in 2006 at the Max Planck Institute for Molecular Genetics in Berlin, Germany, studying neurodegenerative diseases. This was followed by a postdoctoral fellowship at the Vrije Universiteit Amsterdam, Netherlands, where he started to explore mass spectrometry. He returned to the MPI for Molecular Genetics in 2007 to become a junior group leader, but in 2011 relocated his group to the University of Cambridge, UK. He relocated again, becoming group leader at the newly opened Francis Crick Institute in London in 2013 (senior group leader since 2019). His group moved to Oxford in 2022. Research Ralser's two research groups use LC–MS to analyze the proteomes and metabolomes of microorganisms. The main model organism is baker's yeast (Saccharomyces cerevisiae), but other species, such as the pathogenic fungus Candida albicans and the fission yeast Schizosaccharomyces pombe, are used too. His lab not only uses LC–MS, but also develops novel LC–MS methods and protocols that improve detection accuracy, speed, and throughput. Specializing in data-independent acquisition, the group has developed scanning SWATH MS and Zeno SWATH MS in collaboration with the MS manufacturer SCIEX. Both methods greatly improve upon SWATH MS, which was developed in Switzerland in 2012. The group additionally developed DIA-NN, a data-analysis tool for data-independent acquisition that uses neural networks. Proteins and metabolites are not the only focus: in 2022 the lab developed a protocol for the accurate quantification of DNA methylation using LC–MS. Key research topics include: Metabolic networks within cells. The exchange of metabolites between cells. The group found that yeast cells prefer to take up metabolites from the outside environment (the exometabolome) rather than produce their own, and that these cells can survive non-autonomously in a community, thus mutually depending on other community members for survival. The biochemistry of competing reactions within cells. The group generated a genome-scale enzyme-inhibition network in humans and revealed that compartmentalization in eukaryotes helps alleviate the extent of the self-inhibition. The role, function, and regulation of metabolic genes. This is achieved by analyzing the proteome and metabolome in a genome-wide collection of yeast gene deletion strains. Microbial cytogenetics. The group found that aneuploidy (abnormal chromosome number) is tolerated in yeast through a mechanism of dosage compensation: the expression of genes on aneuploid chromosomes is adjusted so as to produce a normal amount of protein. Metabolism-related protective mechanisms against oxidative stress. The group found, among other things, that methionine, a known antioxidant, protects against oxidative stress through the pentose phosphate pathway. 
Earlier, Ralser found that cells dynamically switch between glycolysis and the pentose phosphate pathway to supply the antioxidant machinery with electrons—a mechanism known as the glycolysis/pentose phosphate pathway transition, which is now considered the first-line cellular anti-stress mechanism across species. The evolution of central carbon metabolism, and non-enzymatic reactions in cellular metabolism. The group found that key reactions of glycolysis, the pentose phosphate pathway, and gluconeogenesis can occur spontaneously, without enzyme catalysis, and in the ambient conditions that prevailed billions of years ago on Earth. Metabolic mechanisms of resistance to antifungal drugs. During the COVID-19 pandemic the Ralser group developed a proteomics panel assay for the assessment of disease severity and for the prediction of outcome. The assay quantifies 50 peptides derived from 30 proteins found in a patient's blood plasma. The lab found that these proteins can serve as markers: their abundance strongly correlates with COVID-19 severity and outcome. The assay can be performed at a routine clinical laboratory, and has become commercially available. As of January 2023, Ralser has published nearly 200 peer-reviewed articles that have been cited more than 13,000 times. Awards BioMed Central Research Award, Biology (2007) Wellcome Beit Prize (2011) South Tyrolean Science Award (2014) Colworth Medal of the Biochemical Society, UK (2017) Starling Medal of the Endocrinological Society, UK (2019) EMBO Gold Medal (2020) References External links The Ralser Lab's official website Living people 1980 births 21st-century Italian biologists Microorganisms
Markus Ralser
[ "Biology" ]
1,069
[ "Microorganisms" ]
72,650,562
https://en.wikipedia.org/wiki/Apomyoglobin
Apomyoglobin is a representative of a group of relatively small, α-helical and globular proteins. It has been extensively employed as a model system for protein folding and stability studies. Apomyoglobin is myoglobin lacking its haem group; because the haem iron is what binds oxygen, apomyoglobin cannot bind oxygen, although it may still bind other cofactors. It also serves as an intermediate of myoglobin in its biosynthesis process. In solutions with a neutral pH, apomyoglobin has a compact and unique spatial structure with an extended hydrophobic core, and its structure has been found to be similar to that of holomyoglobin. Apomyoglobin is produced in the sarcoplasm and is a hydrophilic protein, meaning it has a high affinity for water. Folding and unfolding at different pHs Apomyoglobin folds slowly (taking around 2 seconds) in comparison to other proteins. Apomyoglobin is an ideal protein for folding studies because it has no cysteines, no disulfides, and exhibits no proline isomerization, which makes its folding easier to study. The protein contains primary, secondary, tertiary (not stable), and quaternary structure at different pHs, and it adopts different folding states at different pH values. At a pH of 6, the F helix of the monomer is not completely folded; at this pH, the folded protein retains both secondary and tertiary structure. At a pH of 4, apomyoglobin forms a more stabilized structure known as a "molten globule", in which the tertiary structure of the protein is lost but the secondary structure remains and becomes stronger. At a pH of 2, the monomer is unfolded, yet it still retains a small amount of helical structure. An important point about the apomyoglobin monomer is that it unfolds at acidic pH but can be refolded just as easily from acidic or alkaline solutions. The kinetic folding pathway intermediate Apomyoglobin has eight helices in total, labeled A, B, C, D, E, F, G and H. These helices are directly involved in 13 hydrophobic helix-helix interactions in the native folded state. In a first approximation to apomyoglobin's folding kinetics (calculated using a diffusion-collision model), the two small helices C and D can be disregarded. The diffusion-collision model describes folding as proceeding through randomized collisions between transiently formed helices. Disregarding C and D leaves helices A, B, E, F, G and H, for which the only possible interactions left are BG, GH, BE, FH, and AE. The kinetic folding pathway of apomyoglobin showed that folding proceeds through a "burst phase" intermediate, delineated via 2D ¹H NMR spectra. This specific kinetic intermediate, formed within 6 ms (milliseconds), contained only the A, G, and H helices. 
Furthermore, the folding kinetics of apomyoglobin can be described as transitions among the 64 states that arise from breaking or forming the helix-helix interactions. Upon initiation of folding, apomyoglobin collapses rapidly into an intermediate containing the A, G, and H helices; this kinetic pathway results in the A(B)GH intermediate. It can also be shown that the folding kinetics of apomyoglobin can be tied to nascent helices through a network of diffusion-collision steps: G + H <-> GH + A <-> AGH + B <-> A(B)GH. Interactions with membranes A research study examined the way apomyoglobin interacts with membranes. The purpose of the study was to determine whether apomyoglobin interacts with membranes to extract a heme group from the lipid bilayer. This was done by examining apomyoglobin's interactions with large unilamellar vesicles (LUVs) and measuring the impact of apomyoglobin's membrane binding on the rate of heme uptake. The results indicate that apomyoglobin-membrane interactions are heavily pH dependent and may require the presence of anionic phospholipids. All conditions showing a positive interaction between apomyoglobin and a membrane pointed towards destabilization of apomyoglobin and a decrease in the rate of the protein's binding with heme. This reinforced the conclusion that interactions between membranes and the protein are not necessary for holomyoglobin formation. It was also concluded that the molten globule state of apomyoglobin is an important step in making the hydrophobic regions of the protein accessible when interacting with the membrane. In sperm whales Apomyoglobin can be found in certain animals, one of the main ones being sperm whales. Apomyoglobin from sperm whales is used for studying ligand binding, protein folding, and protein stability. Sperm whale apomyoglobin binds pigments similar to chlorophyll, which its counterpart myoglobin never normally encounters. Sperm whale apomyoglobin is also around 20 to 100 times more resistant to denaturation by GdmCl (guanidinium chloride) than apomyoglobin from other mammals. Sperm whale apomyoglobin has been used extensively in comprehensive studies of protein unfolding. There are also specific sperm whale apomyoglobin mutations that give rise to amyloid fibrils at pH 7, for example the replacement of Trp-7 and Trp-14 by two Phe residues. Apomyoglobins found in deep-diving whales are far more stable than those from land mammals or whales that swim at the surface. See also Bovine serum albumin Myoglobin Molten globule Hydrophobic collapse References Proteins
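To make the 64-state kinetic picture above concrete, here is a minimal sketch (an illustrative toy model, not code from the cited studies): treating each of the six helices A, B, E, F, G, H as either docked or undocked gives 2^6 = 64 states, with single kinetic steps that dock or undock one helix at a time.

```python
from itertools import product

HELICES = ["A", "B", "E", "F", "G", "H"]

# Each state is a tuple of 0/1 flags, one per helix: 2**6 = 64 states.
states = list(product([0, 1], repeat=len(HELICES)))
assert len(states) == 64

def label(state):
    """Human-readable name, e.g. (1, 0, 0, 0, 1, 1) -> 'AGH'."""
    return "".join(h for h, on in zip(HELICES, state) if on) or "unfolded"

def neighbors(state):
    """States reachable by docking or undocking a single helix."""
    for i in range(len(state)):
        flipped = list(state)
        flipped[i] ^= 1
        yield tuple(flipped)

# Walk a pathway consistent with the one described above:
# unfolded -> G -> GH -> AGH -> ABGH (the A(B)GH intermediate)
path = [(0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 1, 0), (0, 0, 0, 0, 1, 1),
        (1, 0, 0, 0, 1, 1), (1, 1, 0, 0, 1, 1)]
print(" -> ".join(label(s) for s in path))

# Each consecutive pair differs by exactly one helix, i.e. one step
for a, b in zip(path, path[1:]):
    assert b in set(neighbors(a))
```

A real kinetic model would attach rate constants to these transitions; the sketch only shows how the 64-state space and its single-helix moves are structured.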
Apomyoglobin
[ "Chemistry" ]
1,561
[ "Proteins", "Biomolecules by chemical classification", "Molecular biology" ]
72,651,734
https://en.wikipedia.org/wiki/Madhava%20Observatory
Madhava Observatory is an observatory set up by the University of Calicut in 2005 in association with the Indian Institute of Astrophysics (IIA), Bangalore. It is the largest observatory at the university level in India. The hemispherical dome has a slit opening, a wheel assembly and a 14-inch Meade (Cassegrain) telescope. The observatory is used by the faculty and staff of the university for study and research purposes. The observatory also has an 18-inch NGT reflector telescope gifted by the IIA, Bangalore, and a dedicated computer facility for data collection and analysis. The observatory is named after Madhava of Sangamagrama (c. 1340 – c. 1425), who is considered one of the greatest mathematician-astronomers of the Middle Ages and was the founder of the Kerala school of astronomy and mathematics. References Astronomical observatories in India
Madhava Observatory
[ "Astronomy" ]
183
[ "Astronomy stubs" ]
72,654,852
https://en.wikipedia.org/wiki/Codeberg
Codeberg e.V. is a nonprofit organization that provides online resources for software development and collaboration. About According to its bylaws, Codeberg e.V. is organized as a non-profit, collaborative organization. Its primary goals are to provide services for the development and distribution of free/libre content and Free and open-source software (FOSS). Codeberg e.V. is funded through donations. In January 2019, Codeberg e.V. launched with an initial 25 members and began publishing monthly newsletters on the status of its main project Codeberg.org. The organization selected the European Union for its headquarters and computer infrastructure, due to members' concerns that a software project repository hosted in the United States could be removed if a malicious actor made bad faith copyright claims under the Digital Millennium Copyright Act. Codeberg.org Codeberg.org is a Forgejo-based collaborative development environment maintained by Codeberg e.V. In addition to the core software forge and bug tracker functionality provided by Forgejo, Codeberg has over time introduced related services such as Codeberg Pages (a basic web hosting service for projects hosted on Codeberg), a Weblate translation server, and CI/CD features via Woodpecker CI. As of January 2024, Codeberg hosts over 110,000 open-source projects by over 89,000 users. History After Microsoft's 2018 purchase of GitHub, developers Holger Wächtler, Thomas Boerger, and David Schneiderbauer forked the Gitea software forge in a project called TeaHub. Codeberg.org launched in January 2019. After one month, the Codeberg e.V. organization had 25 members, and Codeberg.org hosted 333 repositories with 379 users. As of December 2022, the Codeberg.org website uses Forgejo, a software fork of the Gitea software forge. Reception In 2020, Ade Malsasa Akbar wrote in a review for ubuntubuzz.com that he believed anybody from the FLOSS community would be interested in Codeberg, especially those looking for a GitHub alternative. In June 2022 the Software Freedom Conservancy's "Give Up Github" campaign (in response to the GitHub Copilot licensing controversy) promoted Codeberg as an alternative to GitHub. As a result, Codeberg gained increased visibility in the open-source community, and a number of major open source projects migrated to Codeberg. See also Comparison of source-code-hosting facilities Forgejo Gitea GitHub GitLab References External links Open-source software hosting facilities Bug and issue tracking software Free software websites Git (software) Project hosting websites Project management software Version control
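As a small illustration of interacting with a Forgejo-based forge such as Codeberg programmatically, here is a minimal sketch using the Gitea-style REST API that Forgejo inherits (the endpoint path, query parameters, and response shape follow the Gitea v1 API; treat them as assumptions to verify against Codeberg's published API documentation):

```python
import json
import urllib.request

# Query Codeberg's repository search endpoint (Gitea/Forgejo API v1)
url = "https://codeberg.org/api/v1/repos/search?q=forgejo&limit=3"
with urllib.request.urlopen(url) as response:
    payload = json.load(response)

# Gitea-style search responses wrap results in {"ok": ..., "data": [...]}
for repo in payload.get("data", []):
    print(f"{repo['full_name']}: {repo.get('description', '')!r}")
```

Because Forgejo keeps API compatibility with Gitea, the same call pattern works against other Forgejo or Gitea instances by swapping the host name.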
Codeberg
[ "Technology", "Engineering" ]
562
[ "Software engineering", "Computing websites", "Version control", "Free software websites" ]
75,468,149
https://en.wikipedia.org/wiki/Xuan%20tu
Xuan tu or Hsuan thu (弦圖) is a diagram given in the ancient Chinese astronomical and mathematical text Zhoubi Suanjing indicating a proof of the Pythagorean theorem. Zhoubi Suanjing is one of the oldest Chinese texts on mathematics. The exact date of composition of the book has not been determined. Some estimates of the date range as far back as 1100 BCE, while others estimate the date as late as 200 CE. However, from astronomical evidence available in the book it would appear that much of the material in the book is from the time of Confucius, that is, the 6th century BCE. Hsuan thu represents one of the earliest known proofs of the Pythagorean theorem and also one of the simplest. The text in Zhoubi Suanjing accompanying the diagram has been translated as follows: "The art of numbering proceeds from the circle and the square. The circle is derived from the square and the square from the rectangle (literally, the T-square or the carpenter's square). The rectangle originates from the fact that 9x9 = 81 (that is, the multiplication table or properties of numbers as such). Thus, let us cut a rectangle (diagonally) and make the width 3 (units) wide and the height 4 (units) long. The diagonal between the two corners will then be 5 (units) long. Now after drawing a square on the diagonal, circumscribe it by half-rectangles like that which has been left outside, so as to form a (square) plate. Thus the (four) outer half-rectangles of width 3, length 4 and diagonal 5, together make two rectangles (of area 24); then (when this is subtracted from the square plate of area 49) the remainder is of area 25. This (process) is called 'piling up the rectangles' (chi chu)." The hsuan thu diagram makes use of the 3,4,5 right triangle to demonstrate the Pythagorean theorem. However, the Chinese seem to have generalized its conclusion to all right triangles. The hsuan thu diagram, in its generalized form, can be found in the writings of the Indian mathematician Bhaskara II (c. 1114–1185). The description of this diagram appears in verse 129 of Bijaganita of Bhaskara II. There is a legend that Bhaskara's proof of the Pythagorean theorem consisted of just one word, namely, "Behold!". However, using the notations of the diagram, the theorem follows from the equation c² = 4·(ab/2) + (b − a)² = a² + b², where a and b are the legs of the right triangle and c is its hypotenuse: the four right triangles of area ab/2 together with the central square of side (b − a) exactly tile the square of side c. References Chinese mathematics Pythagorean theorem Euclidean plane geometry History of geometry Proof without words
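A quick numerical check of the identity given at the end of the entry (a simple illustration added here, not part of the original text; the identity itself is elementary algebra):

```python
from itertools import product

# Verify c^2 = 4*(a*b/2) + (b - a)^2 = a^2 + b^2 for many leg pairs
for a, b in product(range(1, 20), repeat=2):
    assert 4 * (a * b / 2) + (b - a) ** 2 == a ** 2 + b ** 2

# The classic 3-4-5 instance used in the Zhoubi Suanjing
a, b = 3, 4
c_squared = 4 * (a * b / 2) + (b - a) ** 2
print(c_squared)  # 25.0, so the diagonal is 5
```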
Xuan tu
[ "Mathematics" ]
573
[ "History of geometry", "Euclidean plane geometry", "Mathematical objects", "Equations", "Pythagorean theorem", "Proof without words", "Geometry", "Planes (geometry)" ]
75,468,396
https://en.wikipedia.org/wiki/Gravitational%20Aharonov-Bohm%20effect
In physics, the gravitational Aharonov-Bohm effect is a phenomenon involving the behavior of particles acting according to quantum mechanics while under the influence of a classical gravitational field. It is the gravitational analog of the well-known Aharonov–Bohm effect, which concerns the quantum mechanical behavior of particles in a classical electromagnetic field. Electric effect There are many variants of the Aharonov-Bohm effect in electromagnetism. Here we review an electric version of the Aharonov-Bohm effect that is most similar to the gravitational effect which has been experimentally observed. This electric effect is caused by a charged particle (say, an electron) being in a superposition of traveling down two different paths. In both paths, the electric field that the electron sees is zero everywhere along the path, but the scalar electric potential that the electron sees is not the same for both paths. In the above figure, the beamsplitter puts the electron in a superposition of taking the upper path and taking the lower path. In both paths, when the electron gets to the mirror, it is stopped and held there. During the time when the electron is held in place at a mirror, 2 electric charges, each with charge Q, are brought near the upper mirror in a symmetric manner such that the net electric field caused by the 2 charges at the upper mirror is 0. We assume that the lower mirror is far enough away from the upper mirror that the electric potential (and electric field) caused by the 2 charges is 0 at the lower mirror. So, this creates an electric potential difference between the upper and lower mirrors equal to V = 2Q/(4πε₀d) = Q/(2πε₀d), where d is the distance of the charges from the mirror and ε₀ is the electric constant. The electron is held there for a time t, after which the charges are moved away and the electron is allowed to continue moving along its path. Assuming that the time we take to move the 2 charges to and from the mirror is much smaller than t, the time that the electron spends at the mirror causes a phase shift equal to Δφ = eVt/ħ, where e is the elementary charge and ħ is the reduced Planck constant. When the 2 paths of the interferometer are recombined, we see a different interference pattern depending on whether we brought the charges near the upper mirror to create a potential difference. This is surprising, because no matter whether we brought the charges near the upper mirror to create a potential difference, the electron always remains at a location where the electric field is zero (to be more precise, the wavefunction of the electron is only ever nonzero at locations where the electric field is 0). This electric Aharonov-Bohm effect has not been experimentally observed, unlike the magnetic effect. It is not generally feasible to trap an electron at a "mirror" in the interferometer while the potential is turned on and off, which is necessary in this setup to ensure that the electron stays in a region where the field is 0 while the potential is varied. Proposals for experimentally observing the effect instead involve shielding the electron from any electric field by having it travel through a conducting cylinder while the potential is varied. In contrast, one experiment proposal for the gravitational Aharonov-Bohm effect actually does involve trapping atoms (which play an analogous role to electrons in the experiment proposal) and holding them in a region where the gravitational field is zero using optical lattices. 
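To give a feel for the magnitudes in the phase-shift formula above, here is a minimal sketch (the charge Q, distance d, and hold time t are arbitrary illustrative values; the physical constants are standard):

```python
import math

e = 1.602176634e-19       # elementary charge [C]
hbar = 1.054571817e-34    # reduced Planck constant [J s]
eps0 = 8.8541878128e-12   # electric constant [F/m]

Q = 1e-15                 # charge of each of the two objects [C] (assumed)
d = 0.01                  # distance of the charges from the mirror [m] (assumed)
t = 1e-6                  # hold time at the mirror [s] (assumed)

V = Q / (2 * math.pi * eps0 * d)   # potential difference between the mirrors
dphi = e * V * t / hbar            # Aharonov-Bohm phase shift [rad]
print(f"V = {V:.3e} V, phase shift = {dphi:.3e} rad "
      f"({dphi / (2 * math.pi):.3e} cycles)")
```

Even this tiny, millivolt-scale potential held for a microsecond produces an enormous phase, which illustrates why such experiments demand extremely careful control and shielding of stray potentials.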
Gravitational effect Just as there are many variants of the Aharonov-Bohm effect in electromagnetism, there are many variants of the gravitational effect. The simplest version of the gravitational effect is analogous to the electric effect above, with the electron replaced by a small test mass such as an atom, and the 2 charges that create an electric potential replaced by 2 masses that create a gravitational potential. In the above figure, an atom passes through an atomic "beamsplitter" that puts the atom in a superposition of taking the upper and lower paths. The atoms are then reflected by atomic "mirrors" that cause them to recombine at the detector on the right, where an interference pattern is detected. When the atom is at a "mirror", it is paused and held there while a potential is introduced. The potential is created by moving 2 massive objects, each with mass M, to the left and right sides of the upper mirror, a distance d away from the mirror. The masses are brought towards the upper mirror in a symmetric manner such that the gravitational field caused by the masses is 0 at the upper mirror. We assume that the upper mirror is far enough away from the lower mirror such that the masses create zero potential (and zero field) at the lower mirror, which means they create a gravitational potential difference of magnitude ΔΦ = 2GM/d between the upper and lower mirrors. Despite this gravitational potential difference, the gravitational field at the upper and lower mirrors is 0, and the atom is never in any position with a nonzero gravitational field. Still, a time t spent at the mirrors with that potential difference causes a phase shift Δφ = mΔΦt/ħ = 2GMmt/(dħ), where m is the mass of the atom. This phase shift is detected by observing the interference pattern where the atom paths recombine, which will be different depending on whether the potential difference was applied. Instead of these idealized paths for the atom that involve "mirrors" that pause the atom in its place while a potential is applied, the atom could be moved in those paths by an optical lattice. This would allow precise control over the positions of the atom and the amount of time spent in the gravitational potential. The various electromagnetic versions of the Aharonov-Bohm effect can be described in a way that does not suggest any physical reality to the electromagnetic potentials and does not require any nonlocality, by treating the sources of the electromagnetic field and the electromagnetic field itself quantum mechanically, instead of treating the test charge (electron) quantum mechanically and the electromagnetic field and its sources classically. Without a theory of quantum gravity, we cannot appeal to a fully quantum treatment of the test mass (atom), the sources of the gravitational field, and the gravitational field itself in order to explain the gravitational Aharonov-Bohm effect in a fully local, gauge-independent manner. However, this effect can be explained in a local, gauge-independent manner by considering the gravitational time dilation experienced by the atom in the path with the nonzero potential, and taking into account that matter waves pick up a phase at the Compton frequency of the matter. Experimental observation In January 2022, a team led by Mark Kasevich announced that they had experimentally observed a gravitational Aharonov-Bohm effect with an experiment broadly similar to the one outlined above. The source of the gravitational potential in their experiment was a single 1.25 kg tungsten mass. The test masses were rubidium-87 atoms. 
The tungsten mass was fixed, so the gravitational field caused by the tungsten mass was not zero everywhere along the paths of the 87Rb atoms. This means that the phase shift of the rubidium atoms between the 2 paths was not caused by a gravitational potential energy difference alone, but also by a difference in the gravitational force felt by the atoms in the 2 paths. By comparing the phase shift with the tungsten mass present and absent, they observed a phase shift consistent with that predicted by the Aharonov-Bohm effect. The "beamsplitters" and "mirrors" used to make the 87Rb atoms interfere are not solid optical components, as would be the case in a standard light interferometer. Rather, they consisted of laser pulses that coherently transfer momentum between the atoms and photons. References Quantum mechanics
Gravitational Aharonov-Bohm effect
[ "Physics" ]
1,544
[ "Theoretical physics", "Quantum mechanics" ]
75,468,672
https://en.wikipedia.org/wiki/Quasi-polynomial%20growth
In theoretical computer science, a function is said to exhibit quasi-polynomial growth when it has an upper bound of the form 2^(O((log n)^c)) for some constant c, as expressed using big O notation. That is, it is bounded by an exponential function of a polylogarithmic function. This generalizes the polynomials and the functions of polynomial growth, for which one can take c = 1. A function with quasi-polynomial growth is also said to be quasi-polynomially bounded. Quasi-polynomial growth has been used in the analysis of algorithms to describe certain algorithms whose computational complexity is not polynomial, but is substantially smaller than exponential. In particular, algorithms whose worst-case running times exhibit quasi-polynomial growth are said to take quasi-polynomial time. As well as time complexity, some algorithms require quasi-polynomial space complexity, use a quasi-polynomial number of parallel processors, can be expressed as algebraic formulas of quasi-polynomial size or have a quasi-polynomial competitive ratio. In some other cases, quasi-polynomial growth is used to model restrictions on the inputs to a problem that, when present, lead to good performance from algorithms on those inputs. It can also bound the size of the output for some problems; for instance, for the shortest path problem with linearly varying edge weights, the number of distinct solutions can be quasipolynomial. 
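As a concrete illustration of the definition, here is a minimal sketch comparing a polynomial, a quasi-polynomial bound with c = 2, and a true exponential (the exponents 3, 2, and 0.1 are arbitrary illustrative choices):

```python
import math

def polynomial(n, k=3):
    return n ** k                      # n^k = 2^(k log2 n), i.e. c = 1

def quasi_polynomial(n, c=2):
    return 2 ** (math.log2(n) ** c)    # 2^((log2 n)^c)

def exponential(n, eps=0.1):
    return 2 ** (eps * n)

for n in [2 ** 4, 2 ** 6, 2 ** 8, 2 ** 10]:
    print(f"n={n:5d}  n^3={polynomial(n):.2e}  "
          f"2^(log n)^2={quasi_polynomial(n):.2e}  "
          f"2^(0.1 n)={exponential(n):.2e}")

# Asymptotically, quasi-polynomial growth dominates every polynomial
# but is itself dominated by every exponential function 2^(eps * n).
```

Running it shows the quasi-polynomial column overtaking the polynomial one while eventually falling behind the exponential, which is exactly the "between polynomial and exponential" behavior the definition captures.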
Quasi-polynomial growth
[ "Mathematics" ]
375
[ "Mathematical analysis", "Asymptotic analysis" ]
75,469,450
https://en.wikipedia.org/wiki/Aldafermin
Aldafermin is a fibroblast growth factor 19 (FGF19) analogue developed for non-alcoholic steatohepatitis. References Experimental drugs developed for non-alcoholic fatty liver disease
Aldafermin
[ "Chemistry" ]
43
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,469,818
https://en.wikipedia.org/wiki/Interspecies%20design
Interspecies design is a design practice that intentionally involves and emphasizes the contributions of multiple species, focusing on the participation and outcomes for both human and non-human lifeforms. It aims to create mutual benefit and centers on designing for and with all life. Definition Interspecies design is characterized by the participation of more than one species in design activities and the use of design outcomes by multiple species. This concept extends to all design practices that could potentially involve multiple species, making it a broad and inclusive field. Need and ethics The field arises from a need to include all those at risk of harm, domination, or oppression in the design process, highlighting the ethical dimension of design decisions. This approach challenges traditional practices by considering the impact on and inclusion of non-human species. Synonyms and related concepts Interspecies design is related to but distinct from concepts such as interspecies cultures, multispecies design, ecocentric design, ecological engineering, and more-than-human design. Application in art and design In the realm of art and design, interspecies design has been applied in creating shared spaces and experiences for multiple species, such as in the design of prosthetic habitat-structures for owls. See also Animal-computer interaction Human–animal studies Ecological design Environmental ethics References Further reading Alexis, Nekeisha Alayna (2020). Beyond Compare: Intersectionality and Interspeciesism for Co-Liberation with Other Animals. Routledge. Coulter, Kendra (2016). Animals, Work, and the Promise of Interspecies Solidarity. Palgrave Macmillan. Goodale, Eben; Beauchamp, Guy; Ruxton, Graeme D. (2017). Mixed-Species Groups of Animals: Behavior, Community Structure, and Conservation. Academic Press. Linden, Dirk van der (2021). "Interspecies Information Systems". Requirements Engineering. doi:10/gmmvps Meijer, Eva (2019). When Animals Speak: Toward an Interspecies Democracy. New York University Press. Parker, Dan; Roudavski, Stanislav (2022). "Toward Interspecies Art and Design: Prosthetic Habitat-Structures in Human-Owl Cultures". Leonardo. 55 (4): 351–356. doi:10.1162/leon_a_02224. Rice, Louis (2018). "Nonhumans in Participatory Design". CoDesign. 14 (3): 238–257. doi:10/gfvpfx. Roudavski, Stanislav (2021). "Interspecies Design". In Parham, John (ed.). Cambridge Companion to Literature and the Anthropocene. Cambridge University Press. pp. 147–162. Santos, Rodrigo dos; Kaczmarek, Michelle; Shankar, Saguna; Nathan, Lisa P. (2021). "Who Are We Listening to? The Inclusion of Other-than-Human Participants in Design". LIMITS ’21: Workshop on Computing within Limits. doi:10/gkdd7f. Veselova, Emīlija; Gaziulusoy, İdil (2019). "Implications of the Bioinclusive Ethic on Collaborative and Participatory Design". The Design Journal. 22 (sup1): 1571–1586. doi:10/f9p9. Vink, Janneke (2020). The Open Society and Its Animals. Palgrave Macmillan. Design Human–animal interaction Sustainability Interspecies communication
Interspecies design
[ "Engineering", "Biology" ]
735
[ "Humans and other species", "Animals", "Design", "Human–animal interaction" ]
75,470,023
https://en.wikipedia.org/wiki/Surface%20imperfections%20%28optics%29
Surface imperfections on optical surfaces such as lenses or mirrors can be caused during the manufacturing of the part or by handling. These imperfections are part of the surface and cannot be removed by cleaning. Surface quality is characterized either by the American military standard notation (e.g., "60-40") or by specifying RMS (root mean square) roughness (e.g., "0.3 nm RMS"). American notation focuses on how visible surface defects are, and is a "cosmetic" specification. RMS notation is an objective, measurable property of the surface. Tighter specifications increase the costs of fabricating optical elements, but looser ones degrade performance. While surface imperfections can be labeled "cosmetic defects", they are not purely cosmetic. Optics for laser applications are more sensitive to surface quality, as any imperfection can lead to laser-induced damage. In some cases, imperfections in optical elements will be directly imaged as defects in the image plane. Optical systems requiring high radiation intensity tend to be sensitive to any loss of power due to surface scattering caused by imperfections. Systems operating in the ultraviolet range require a more demanding standard, as the shorter wavelength of ultraviolet radiation is more sensitive to scattering. There are many different standards used by optical element manufacturers, designers, and users, which vary by geographic region and industry. For example, German manufacturers use ISO 10110, while the US military developed MIL-PRF-13830, and its long-standing use has made it the de facto global standard. It is not always possible to translate a scratch grade from one standard to another, and sometimes the translation ends up being statistical (sampling defects to ensure that, statistically, the percentage of rejected elements will be similar in both methods). Examining surface quality in terms of 'Scratch & Dig' is a specialized skill that takes time to develop. The practice is to compare the element to a standard master (reference). Automated systems now replace the human technician, initially for flat optics but recently also for convex and concave lenses. In contrast, 'Roughness' characterization is done with more precise and easier-to-quantify methods. Overview of types The various standards separate two main categories of surface quality: scratch & dig, and roughness. A scratch is defined as a long and narrow defect that tears the surface of the glass or coating. Some standards refer to the degree of visibility, which is the relative brightness of the scratch; in these cases, there is also a standard for the lighting conditions used for the test. Other standards classify scratches according to their dimensions. A dig is defined as a pit, a rough area, or a small crater on the surface of the glass (or any other optical material). All standards measure the physical size of the dig. Some standards include small defects within the glass that are visible through the surface, such as bubbles and inclusions. Roughness, texture, or optical finish is a defect that originates from the element's manufacturing. Texture is a periodic phenomenon with a high spatial frequency (in other words, at small dimensions), which affects the entire surface and causes scattering of incident light. A higher value of roughness means a rougher surface.
The texture is especially important in cases where the polishing is carried out using new processing methods such as diamond turning, which leaves a residual periodic signature on the surface, affecting the quality of the obtained image or the level of scattering from the surface. The amount of scattered light is proportional to the square of the RMS of the roughness. Scratch & Dig Military standard MIL-PRF-13830B This is the most common standard, stemming from a standard originally proposed by McLeod and Sherwood of Kodak in 1945, which evolved in 1954 into the military standard MIL-O-13830A. It defines the quality of the surface by a pair of numbers: the first is a measure of the visibility of the scratch and the second is the size of the dig. Scratch visibility grades are described by a series of arbitrary numbers: 10, 20, 40, 60, and 80, where the brightest scratches, the easiest to see with the naked eye, are grade 80, while the most difficult to detect are grade 10. A scratch on a tested part is compared with an industrial or military standard (master) on which there are scratches of different degrees of visibility, and the comparison is made with the naked eye under controlled lighting conditions. It is important to recognize that this is a subjective test and its results can vary between different people. The scratches' visibility largely depends on their shape, and contrary to popular belief, there is little correlation between a scratch's visibility grade and its width. One cannot measure the width of a scratch to determine its grade. On the other hand, a dig's grade is a precise and measurable value. It is the diameter of the largest dig found on the tested surface, in units of hundredths of a millimeter. It is customary to use discrete grades of 5, 10, 20, 40, or 50, where the larger numbers describe larger imperfections. There are many default definitions in the MIL standard. For example, the grade required outside the clear aperture (the part of the lens to which the standard applies, also called the "effective diameter" or CA) is, in the absence of another definition, 80-50. This is a very basic surface characterization and is easy to achieve: it describes a scratch whose brightness is less than that of a scratch at visibility grade 80, and a dig with a diameter of up to 0.5 mm (50 hundredths of a millimeter: 50/100 = 0.5). 60-40 is considered "commercial" quality, while for demanding laser applications 20-10 or even 10-5 are used. The scratches on a 10-5 or 20-10 surface can be hard to see, making the visibility standard more subjective; other standards may work better when precision surfaces are required. Optical coating can change scratch visibility, so, for example, an element that passes 40-20 before coating can be worse than 60-40 after coating. Accumulation and concentration rules regulate common situations in which there are multiple defects on the surface of an optical element, and clarify how they should be added up. For example, if one or more scratches are found with the maximum visibility allowed, then to pass the test the sum of the lengths of these scratches is limited to a quarter of the diameter of the element. The number of digs at the maximum permitted level is determined by dividing the measured clear aperture diameter (in millimeters) by 20 and rounding up. For example, for a clear aperture of 81 mm, 5 digs are allowed at the maximum level.
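As a concrete illustration, the two accumulation rules just stated can be expressed in a few lines of code. The following is a minimal sketch; the function names and interface are invented for this example and are not part of the standard:

import math

def max_digs_allowed(clear_aperture_mm: float) -> int:
    """Digs at the maximum permitted size: clear aperture diameter
    in millimeters divided by 20, rounded up."""
    return math.ceil(clear_aperture_mm / 20)

def max_grade_scratches_pass(clear_aperture_mm: float,
                             scratch_lengths_mm: list[float]) -> bool:
    """Scratches at the maximum allowed visibility grade pass only if
    their combined length is at most a quarter of the diameter."""
    return sum(scratch_lengths_mm) <= clear_aperture_mm / 4

print(max_digs_allowed(81))                      # 5, matching the example above
print(max_grade_scratches_pass(81, [10.0, 9.0])) # True: 19 mm <= 20.25 mm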
Since the comparison master is only in the possession of the US Army, several commercial masters have been developed that are intended to be compatible, but due to the complexity of the factors that make a scratch visible, these masters are not always compatible with the original, and there is no way to match one set to another. For example, a visibility grade 10 scratch on one master can appear brighter than a visibility grade 60 scratch on another master. For this reason, it is recommended to also indicate on the drawing the type of master set to which the element must be compared during the test. Examples of such commercial comparison sets, made of plastic or glass, are Davidson Optronics, Brysen Optical, and the Jenoptik Paddle, sold by ThorLabs and Edmund Optics. ISO 10110-7 This standard is used in the USA, China, Japan, Russia, and all of Europe. The notation as of 2007 is: 5/ N x A; C N' x A'; L N'' x A''; E A''', where N and A represent the number of defects and the maximum size of a defect, N' and A' represent the number of imperfections on the coating and their maximum size, N'' and A'' represent the number of scratches allowed and their maximum size, and A''' represents the maximum size of an edge chip (a defect on the rim of the optical element). A scratch in this case is defined as a defect longer than 2 mm. Only the first part of the characterization, N x A, is mandatory; the rest of the details can be omitted. A and A' are given as the square root of the area of the defect and are indicated by discrete values from the series: 4, 2.5, 1.6, 1, 0.63, 0.4, 0.25. In addition to the limits on the number of defects and their size, the total area of all imperfections must not exceed N × A². Long defects (scratches) are summed up by their width, independent of length. There is no limit on the number of edge chips, and the concentration of imperfections is limited by the rule that at most 20% of the defect allowance can be concentrated in an area of 5% of the clear aperture. A fundamental advantage of ISO is a relatively simple translation between the percentage of light scattered from a surface and the characterization of its surface, according to the formula: Scatter % = 4 × [(N × A²) + (N' × A'²) + (N'' × A'' × Φ)] / (π × Φ²), where Φ is the clear aperture diameter. Unlike MIL-PRF-13830B, which is cheap and fast to use but suffers from inaccuracies, the dimensional standard of ISO 10110-7 is more accurate, but takes longer to test and is therefore expensive. The relatively long test time derives from the fact that testing according to this standard is carried out using a microscope, comparing the size of each defect to defects on a master; because of the large magnification needed, the field of view is small, requiring several measurements to map each optical element. David Aikens, director of the Optics and Electro-Optics Standards Council, presented a recommended conversion chart that preserves the level of quality control, or percentage of rejected parts, in ISO scratch & dig testing versus the military standard. For example, 5/2x0.40; L 3 x0.010 is a statistically equivalent standard to 60-40 of the strict military standard, over a 20 mm opening.
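To illustrate this translation, here is a minimal sketch that evaluates the scatter formula above for the 60-40-equivalent specification just mentioned; the function and parameter names are invented for this example and are not defined by ISO 10110-7:

import math

def iso_scatter_percent(phi_mm: float,
                        n: int, a_mm: float,
                        n_coat: int = 0, a_coat_mm: float = 0.0,
                        n_scratch: int = 0, a_scratch_mm: float = 0.0) -> float:
    """Scattered-light percentage estimated from a
    5/ N x A; C N' x A'; L N'' x A'' specification, per the formula
    above; phi_mm is the clear aperture diameter."""
    numerator = (n * a_mm ** 2
                 + n_coat * a_coat_mm ** 2
                 + n_scratch * a_scratch_mm * phi_mm)
    return 4 * numerator / (math.pi * phi_mm ** 2)

# The 5/2x0.40; L 3x0.010 specification over a 20 mm clear aperture:
print(iso_scatter_percent(20, n=2, a_mm=0.40,
                          n_scratch=3, a_scratch_mm=0.010))  # ~0.003 (percent)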
The logical flaw of this dimensional standard is in defining a scratch according only to its width. For example, if a lens with a diameter of 100 mm has a requirement of L 1 x 0.025, a single scratch up to 25 microns wide is allowed, even if it covers the entire 100 mm diameter. However, if the manufacturer polishes the surface and removes the scratch from the central 95 millimeters of the lens, there will be two scratches, each 2.5 mm in length, and the lens will now fail the acceptance tests because the characterization allows only one scratch. The illogicality here is obvious: it is not acceptable to reject a component due to a process that improves its quality. As of 2017, to support quick measurements intended for less sensitive surfaces, ISO 10110-7 also allows the definition of scratches according to their visibility, and the definition of digs according to their diameter, just like MIL-PRF-13830B and using the same grades, for example 60-40. It is possible to extend the notation to also mark coating imperfections as well as edge chips, similarly to the dimensional standard: 5 / S - D; C S' - D'; E A''', where S and D are the definitions for scratches and digs, S' and D' for these defects on the coating, and A''' characterizes an edge chip as defined above. As explained for the military standard, it is important to explicitly specify which master set the scratches' brightness is to be compared to. MIL-C-48497A and MIL-F-48616 These standards are almost as popular as MIL-PRF-13830B, but they have become less popular with time. These standards define scratches and digs according to their physical size and mark their grade with the letters A, B, C, D, E, F, and G (and H, which is used only for digs). The letter A represents the narrowest scratch, which is 0.005 mm wide, and the smallest dig, which is 0.05 mm in diameter. At the other end, the letter G represents a scratch that is 0.12 mm wide and a dig that is 0.7 mm in diameter. A microscope or magnifying glass is used for testing, or sometimes even just the naked eye, to compare to a master. ANSI OP1.002 This American standard was first published in 2006. Just like the MIL-PRF-13830B standard, ANSI OP1.002 defines digs according to their diameter. ANSI OP1.002 also supports two separate methods for scratches: visibility and size. The visibility method defines scratches according to their visibility and is identical in design and terminology to the MIL-PRF-13830B standard. Just like the military standard, it uses two numbers, the first for scratches and the second for digs, maintaining their meaning as in the military standard. Examples: 80-50, 60-40. This method takes advantage of the speed and low cost of visual inspection and is used for elements with looser tolerances. The dimensional method for scratches is based on the MIL-C-48497A standard, which is considered easy to use and functional. The dimensional method uses two letters, the first for scratches and the second for digs, for example A-A or E-E. This standard is intended for parts with tight surface quality tolerances, such as CCD cover glasses or demanding laser applications. The OP1.002 standard allows using a microscope to compare with the master. This standard allows a relatively easy translation between the desired scattering level and the surface quality, as mentioned above. Roughness US military standard MIL-STD-10A This original standard was general-purpose in nature, not intended for the characterization of polished surfaces per se. It used parameters that are not typically used for the characterization of optical elements, such as average roughness.
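To make the distinction concrete between average roughness and the RMS roughness favored by the optical standards described next, here is a minimal sketch on an invented profile; the data and function names are illustrative only:

import math

def average_roughness(heights_nm: list[float]) -> float:
    """Ra: mean absolute deviation of the profile from its mean line."""
    mean = sum(heights_nm) / len(heights_nm)
    return sum(abs(h - mean) for h in heights_nm) / len(heights_nm)

def rms_roughness(heights_nm: list[float]) -> float:
    """Rq (RMS): root-mean-square deviation from the mean line; scattered
    light scales with the square of this value."""
    mean = sum(heights_nm) / len(heights_nm)
    return math.sqrt(sum((h - mean) ** 2 for h in heights_nm) / len(heights_nm))

# Hypothetical sampled profile heights in nanometers (illustrative only);
# a real measurement must first be band-limited to the specified
# spatial frequency range.
profile = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2]
print(average_roughness(profile), rms_roughness(profile))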
ASME B46.1-2002 This standard replaced MIL-STD-10A and defines more than forty different parameters, including RMS (root mean square), slope, skew, PSD (power spectral density, the most comprehensive characteristic), and more. This standard is a significant improvement because it allows the characterization of machined surfaces at different spatial frequencies, which is especially important in cases where the optics were produced using techniques that leave periodic marks, such as those caused by diamond turning. For most uses it is sufficient to use RMS. In all cases, it is important to specify the spatial frequency range over which the calculation is performed; without defining this range, the specification is meaningless. ISO 10110-8 (2010) This popular standard, similar to ASME B46.1, also defines the RMS of the surface over a specific length scale, the PSD, and more. It differs from the ASME specification by using symbols instead of words. See also Surface roughness Scattering from rough surfaces Quality control Visual inspection References External links Comparing various specifications for Scratch and Dig Poster explaining drawing notations by ISO 10110 (2023 update) Optics Applied and interdisciplinary physics Glass engineering and science
Surface imperfections (optics)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,150
[ "Glass engineering and science", "Applied and interdisciplinary physics", "Optics", "Materials science", " molecular", "Atomic", " and optical physics" ]
75,471,646
https://en.wikipedia.org/wiki/List%20of%20lightning%20phenomena
This is a list of lightning phenomena. Types Anvil crawler lightning, sometimes called spider lightning, is created when leaders propagate through horizontally extensive charge regions in mature thunderstorms, usually the stratiform regions of mesoscale convective systems. These discharges usually begin as IC discharges originating within the convective region; the negative leader end then propagates well into the aforementioned charge regions in the stratiform area. If the leader becomes too long, it may separate into multiple bidirectional leaders. When this happens, the positive end of the separated leader may strike the ground as a positive CG flash or crawl on the underside of the cloud, creating a spectacular display of lightning crawling across the sky. Ground flashes produced in this manner tend to transfer high amounts of charge, and this can trigger upward lightning flashes and upper-atmospheric lightning. Ball lightning may be an atmospheric electrical phenomenon, the physical nature of which is still controversial. The term refers to reports of luminous, usually spherical objects which vary from pea-sized to several metres in diameter. It is sometimes associated with thunderstorms, but unlike lightning flashes, which last only a fraction of a second, ball lightning reportedly lasts many seconds. Ball lightning has been described by eyewitnesses but rarely recorded by meteorologists. Scientific data on natural ball lightning is scarce owing to its infrequency and unpredictability. The presumption of its existence is based on reported public sightings and has therefore produced somewhat inconsistent findings. Brett Porter, a wildlife ranger, reported photographing the phenomenon in Queensland, Australia in 1987. Bead lightning, also known by the terms pearl lightning, chain lightning, perlschnurblitz, éclair en chapelet, and others, is the decaying stage of a lightning channel in which the luminosity of the channel breaks up into segments. Nearly every lightning discharge will exhibit beading as the channel cools immediately after a return stroke, sometimes referred to as the lightning's 'bead-out' stage. 'Bead lightning' is more properly a stage of a normal lightning discharge than a type of lightning in itself. Beading of a lightning channel is usually a small-scale feature, and is therefore often only apparent when the observer or camera is close to the lightning. Clear-air lightning describes lightning that occurs with no apparent cloud close enough to have produced it. In the U.S. and Canadian Rockies, a thunderstorm can be in an adjacent valley and not observable, either visually or audibly, from the valley where the lightning bolt strikes. European and Asian mountainous areas experience similar events. In areas such as sounds, large lakes, or open plains, when the storm cell is on the near horizon there may be some distant activity; a strike can occur, and as the storm is so far away, the strike is referred to as a bolt from the blue. These flashes usually begin as normal IC lightning flashes before the negative leader exits the cloud and strikes the ground a considerable distance away. Positive clear-air strikes can occur in highly sheared environments where the upper positive charge region becomes horizontally displaced from the precipitation area. Cloud-to-air lightning is a lightning flash in which one end of a bidirectional leader exits the cloud but does not result in a ground flash. Such flashes can sometimes be thought of as failed ground flashes.
Blue jets and gigantic jets are a form of cloud-to-air or cloud-to-ionosphere lightning in which a leader is launched from the top of a thunderstorm. Dry lightning is lightning that occurs with no precipitation at the surface and is the most common natural cause of wildfires. Pyrocumulus clouds produce lightning for the same reason that it is produced by cumulonimbus clouds. This term is mainly used in Australia, Canada, and the United States. Forked lightning is cloud-to-ground lightning that exhibits branching of its path. Heat lightning is a lightning flash that appears to produce no discernible thunder because it occurs too far away for the thunder to be heard; the sound waves dissipate before they reach the observer. Ribbon lightning occurs in thunderstorms with high cross winds and multiple return strokes. The wind blows each successive return stroke slightly to one side of the previous one, causing a ribbon effect. Rocket lightning is a form of cloud discharge, generally horizontal and at cloud base, with a luminous channel appearing to advance through the air at visually resolvable speed, often intermittently. Sheet lightning is cloud-to-cloud lightning that exhibits a diffuse brightening of the surface of a cloud, caused by the actual discharge path being hidden or too far away. The lightning itself cannot be seen by the spectator, so it appears as only a flash, or a sheet of light; the lightning may be too far away to discern individual flashes. Smooth channel lightning is an informal term referring to a type of cloud-to-ground lightning strike that has no visible branching and appears like a line with smooth curves, as opposed to the jagged appearance of most lightning channels. It is a form of positive lightning generally observed in or near the convective regions of severe thunderstorms in the north-central United States. It is theorized that severe thunderstorms in this region obtain an "inverted tripole" charge structure in which the main positive charge region is located below the main negative charge region instead of above it, and as a result these thunderstorms generate predominantly positive cloud-to-ground lightning. The term "smooth channel lightning" is also sometimes attributed to upward ground-to-cloud lightning flashes, which are generally negative flashes initiated by upward positive leaders from tall structures. Staccato lightning is a cloud-to-ground (CG) strike of short duration that (often but not always) appears as a single very bright flash and often has considerable branching. These are often found in the visual vault area near the mesocyclone of rotating thunderstorms and coincide with intensification of thunderstorm updrafts. A similar cloud-to-cloud strike consisting of a brief flash over a small area, appearing like a blip, also occurs in a similar area of rotating updrafts. Superbolts are rather loosely defined as strikes with a source energy of more than 100 gigajoules (most lightning strikes come in at around 1 gigajoule). Events of this magnitude occur about as frequently as one in 240 strikes. They are not categorically distinct from ordinary lightning strikes, and simply represent the uppermost edge of a continuum. Contrary to popular misconception, superbolts can be either positively or negatively charged, and the charge ratio is comparable to that of "ordinary" lightning. Sympathetic lightning is the tendency of lightning to be loosely coordinated across long distances.
Discharges can appear in clusters when viewed from space. Upward lightning or ground-to-cloud lightning is a lightning flash which originates from the top of a grounded object and propagates upward from this point. This type of lightning can be triggered by a preceding lightning flash, or it may initiate entirely on its own. The former is generally found in regions where spider lightning occurs, and may involve multiple grounded objects simultaneously. The latter usually occurs during the cold season and may be the dominant lightning type in thundersnow events. References Articles containing video clips Atmospheric electricity Electric arcs Electrical breakdown Electrical phenomena Terrestrial plasmas Space plasmas Lightning Weather hazards Weather-related lists
List of lightning phenomena
[ "Physics" ]
1,544
[ "Space plasmas", "Electric arcs", "Physical phenomena", "Weather hazards", "Weather", "Plasma phenomena", "Atmospheric electricity", "Astrophysics", "Electrical phenomena", "Weather-related lists", "Electrical breakdown", "Lightning" ]
75,471,835
https://en.wikipedia.org/wiki/KPD%200005%2B5106
KPD 0005+5106 is a helium-rich white dwarf located 1,350 light-years from Earth. As a "pre-white dwarf", it is believed to still be in the helium-burning phase, just before nuclear fusion finally stops. It is one of the hottest known white dwarfs, with a temperature of 200,000 K. Possible companion object KPD 0005+5106 has been observed to emit high-energy X-rays that regularly increase and decrease in luminosity every 4 hours and 42 minutes. This indicates that the star may have a companion orbiting it. Simulations show that a Jupiter-mass object could overflow its Roche lobe and is the more likely companion for KPD 0005+5106. The white dwarf pulls material from its companion into a disk around itself before the material slams into the white dwarf's north and south poles. The concentration of material at the poles creates two bright spots emitting high-energy X-rays. References White dwarfs Cassiopeia (constellation) Astronomical objects discovered in 2008
KPD 0005+5106
[ "Astronomy" ]
216
[ "Cassiopeia (constellation)", "Constellations" ]
75,473,427
https://en.wikipedia.org/wiki/Wind%20Engineering%20%28journal%29
Wind Engineering is a bimonthly peer-reviewed scientific journal covering research on wind power, published by Sage Publishing. The editor-in-chief is Jon McGowan (University of Massachusetts Amherst). According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.5. The journal began in 1977 as a quarterly under the editorship of E. Mowforth at the University of Surrey and was published by the Multi-Science Publishing Company. Starting with the first issue of 1983, it became the official journal of the European Wind Energy Association, with a new editorial board representing the EWEA; starting with the second issue, it was additionally identified as the official journal of the British Wind Energy Association. References External links Issues from 1977-2019 at JSTOR Academic journals established in 1977 Wind power Bimonthly journals Energy and fuel journals English-language journals SAGE Publishing academic journals
Wind Engineering (journal)
[ "Environmental_science" ]
186
[ "Environmental science journals", "Energy and fuel journals" ]
75,473,943
https://en.wikipedia.org/wiki/AXA1125
AXA1125 is an experimental drug developed by Axcella Health that "increased β-oxidation and improved bioenergetics in preclinical models". It was studied as a treatment for non-alcoholic fatty liver disease and long COVID. AXA1125 is a fixed composition comprising five amino acids (leucine, isoleucine, valine, arginine, and glutamine) and an amino acid derivative (N-acetylcysteine). References Experimental drugs developed for non-alcoholic fatty liver disease Long COVID Amino acids Hexapeptides
AXA1125
[ "Chemistry" ]
126
[ "Amino acids", "Biomolecules by chemical classification" ]