Dataset schema: id (int64, 39 to 79M) · url (string, 31–227 chars) · text (string, 6–334k chars) · source (string, 1–150 chars) · categories (list, 1–6 items) · token_count (int64, 3–71.8k) · subcategories (list, 0–30 items)
71,989,813
https://en.wikipedia.org/wiki/Nitryl%20cyanide
Nitryl cyanide is an energetic chemical compound with the formula NCNO2. It is a possible precursor to the theoretical explosive 2,4,6-trinitro-1,3,5-triazine. Synthesis Nitryl cyanide was first synthesized in 2014. The reaction of nitronium tetrafluoroborate with tert-butyldimethylsilyl cyanide at −30 °C produces nitryl cyanide, with tert-butyldimethylsilyl fluoride and boron trifluoride as byproducts. The conversion for this route is only 50%, and using an excess of tert-butyldimethylsilyl cyanide causes the yield to drop even further.
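The synthesis route described above can be summarized in one balanced equation, inferred from the reagents and byproducts named in the text (TBS = tert-butyldimethylsilyl):

\[
\mathrm{NO_2BF_4 \;+\; TBS{-}CN \;\xrightarrow{\;-30\,^\circ\mathrm{C}\;}\; NC{-}NO_2 \;+\; TBS{-}F \;+\; BF_3}
\]

The fluoride is transferred from the tetrafluoroborate anion to silicon, accounting for both byproducts.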
Nitryl cyanide
[ "Chemistry" ]
174
[ "Explosives", "Explosions" ]
71,990,227
https://en.wikipedia.org/wiki/1%2C1%27-Ferrocenedicarboxylic%20acid
1,1'-Ferrocenedicarboxylic acid is the organoiron compound with the formula Fe(C5H4CO2H)2. It is the simplest dicarboxylic acid derivative of ferrocene. It is a yellow solid that is soluble in aqueous base. The 1,1' part of its name refers to the location of the carboxylic acid groups on separate rings. It can be prepared by hydrolysis of its diesters Fe(C5H4CO2R)2 (R = Me, Et), which in turn are obtained by treatment of ferrous chloride with the sodium salt of the corresponding carboxyester of cyclopentadienide. Ferrocenedicarboxylic acid is the precursor to many derivatives such as the diacid chloride, the diisocyanate, the diamide, and the diamine, respectively Fe(C5H4COCl)2, Fe(C5H4NCO)2, Fe(C5H4CONH2)2, and Fe(C5H4NH2)2. Derivatives of ferrocenedicarboxylic acid are components of some redox switches and redox-active coatings. Related compounds Ferrocenecarboxylic acid
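A schematic equation for the preparative hydrolysis step, written here under the assumption of standard base-mediated ester saponification followed by acid workup (the text does not specify the conditions):

\[
\mathrm{Fe(C_5H_4CO_2R)_2 + 2\,OH^- \longrightarrow Fe(C_5H_4CO_2^-)_2 + 2\,ROH \qquad (R = Me,\ Et)}
\]
\[
\mathrm{Fe(C_5H_4CO_2^-)_2 + 2\,H^+ \longrightarrow Fe(C_5H_4CO_2H)_2}
\]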
1,1'-Ferrocenedicarboxylic acid
[ "Chemistry" ]
230
[ "Organometallic chemistry", "Cyclopentadienyl complexes" ]
71,990,674
https://en.wikipedia.org/wiki/Pyrophile
A pyrophile or pyrophilic/pyrophilous insect is an insect which has evolved to rely upon fire ecology for important parts of its life cycle. Pyrophiles usually occur alongside and co-evolve with pyrophytes, the plant analog of a pyrophilic insect - those plants which rely upon natural fires as part of their life cycle. These insects have evolved to be able to rapidly colonize environments after a wildfire. Highly sensitive infrared receptors are thought to have evolved independently in at least four different groups of insects. Little is known about the ecological interactions and consequences of pyrophilic insects, though they are known mostly amongst flies and beetles. Flies of the genus Microsania are some of the most numerous and well-described pyrophilic insects. Others include buprestid beetles in the genus Melanophila, ground beetles in the genus Sericoda, and some species of flat bugs in the genus Aradus. Various theories explain these fire-loving adaptations as owing to the weakening of the host plant, the sterilization of the medium into which the eggs are laid, or the elimination of competitive or predatory organisms.
Pyrophile
[ "Biology" ]
249
[ "Animals", "Insects" ]
71,992,445
https://en.wikipedia.org/wiki/Mermin%27s%20device
In physics, Mermin's device or Mermin's machine is a thought experiment intended to illustrate the non-classical features of nature without making a direct reference to quantum mechanics. The challenge is to reproduce the results of the thought experiment in terms of classical physics. The inputs of the experiment are particles, starting from a common origin, that reach detectors of a device that are independent from each other; the outputs are the lights of the device that turn on following a specific set of statistics depending on the configuration of the device. The results of the thought experiment are constructed in such a way as to reproduce the result of a Bell test using quantum entangled particles, which demonstrates how quantum mechanics cannot be explained using a local hidden variable theory. In this way Mermin's device is a pedagogical tool to introduce the unconventional features of quantum mechanics to a larger public. History The original version, with two particles and three settings per detector, was first devised in a paper called "Bringing home the atomic world: Quantum mysteries for anybody" authored by the physicist N. David Mermin in 1981. Richard Feynman told Mermin that it was "One of the most beautiful papers in physics". Mermin later described this accolade as "the finest reward of my entire career in physics". Ed Purcell shared Mermin's article with Willard Van Orman Quine, who then asked Mermin to write a version intended for philosophers, which he then produced. Mermin also published a second version of the thought experiment in 1990 based on the GHZ experiment, with three particles and detectors with only two configurations. In 1993, Lucien Hardy devised a paradox that can be made into a Mermin-device-type thought experiment with two detectors and two settings. Original two particle device Assumptions In Mermin's original thought experiment, he considers a device consisting of three parts: two detectors A and B, and a source C. The source emits two particles whenever a button is pushed; one particle reaches detector A and the other reaches detector B. The three parts A, B and C are isolated from each other (no connecting pipes, no wires, no antennas) in such a way that the detectors are not signaled when the button of the source has been pushed nor when the other detector has received a particle. Each detector (A and B) has a switch with three configurations labeled (1, 2 and 3) and a red and a green light bulb. Either the green or the red light will turn on (never both) when a particle enters the device after a given period of time. The light bulbs only emit light in the direction of the observer working on the device. Additional barriers or instruments can be put in place to check that there is no interference between the three parts (A, B, C), as the parts should remain as independent as possible, allowing only a single particle to go from C to A and a single particle from C to B, and nothing else between A and B (no vibrations, no electromagnetic radiation). The experiment runs in the following way. The button of the source C is pushed, particles take some time to travel to the detectors, and the detectors flash a light with a color determined by the switch configuration. There are nine possible configurations of the switches (three for A times three for B). The switches can be changed at any moment during the experiment, even if the particles are still traveling to reach the detectors, but not after the detectors flash a light.
The distance between the detectors can be changed so that the detectors flash a light at the same time or at different times. If detector A is set to flash a light first, the configuration of the switch of detector B can be changed after A has already flashed (similarly, if B is set to flash first, the settings of A can be changed after B has flashed). Expected results The expected results of the experiment are given in this table in percentages: Every time the detectors are set to the same setting, the bulbs in each detector always flash the same color (either A and B flash red, or A and B flash green) and never opposite colors (A red and B green, or A green and B red). Every time the detectors are at different settings, the detectors flash the same color a quarter of the time and opposite colors 3/4 of the time. The challenge consists in finding a device that can reproduce these statistics. Hidden variables and classical implementation In order to make sense of the data using classical mechanics, one can consider the existence of three variables per particle that are read by the detectors and determine the statistics above. The particle that goes into detector A carries an instruction set (a1, a2, a3) and the particle that goes into detector B carries an instruction set (b1, b2, b3). These variables determine which color will flash for each setting (1, 2 and 3). For example, if the particle that goes into A has variables (R,G,G), then if detector A is set to 1 it will flash red (labelled R), and if set to 2 or 3 it will flash green (labelled G). There are 8 possible instruction sets: RRR, RRG, RGR, RGG, GRR, GRG, GGR and GGG, and the two particles must carry identical instruction sets (ai = bi) in order to reproduce the results of table 1 when the same setting is selected for both detectors. For any given configuration, if the detector settings were chosen randomly, when the settings of the devices are different (12, 13, 21, 23, 31, 32), the colors of the lights would agree 100% of the time for the states GGG and RRR, and for the other states the results would agree 1/3 of the time, as the enumeration below verifies. Thus we reach an impossibility: there is no possible distribution of these instruction sets that would allow the system to flash the same colors 1/4 of the time when the settings are not the same. Thereby, it is not possible to reproduce the results provided in table 1. Quantum mechanical implementation Contrary to the classical implementation, table 1 can be reproduced using quantum mechanics by exploiting quantum entanglement. Mermin reveals a possible construction of his device based on David Bohm's version of the Einstein–Podolsky–Rosen paradox. One can send through the experiment two spin-1/2 particles prepared in the maximally entangled singlet Bell state |ψ⟩ = (|↑↓⟩ − |↓↑⟩)/√2, where |↑↓⟩ (|↓↑⟩) is the state in which the projection of the spin of particle 1 is aligned (anti-aligned) with a given axis and particle 2 is anti-aligned (aligned) with the same axis. The measurement devices can be replaced with Stern–Gerlach devices that measure the spin in a given direction. The three different settings determine whether the detectors are vertical or at ±120° to the vertical, in the plane perpendicular to the line of flight of the particles. Detector A flashes green when the spin of the measured particle is aligned with the detector's magnetic field and flashes red when anti-aligned. Detector B has the opposite color scheme with respect to A: detector B flashes red when the spin of the measured particle is aligned and flashes green when anti-aligned. Another possibility is to use photons, which have two possible polarizations, using polarizers as detectors, as in Aspect's experiment.
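The instruction-set argument can be checked exhaustively. The following short Python sketch (illustrative, not from Mermin's paper) enumerates all 8 shared instruction sets and shows that, whenever the two detectors use different settings, the lights agree at least 1/3 of the time, so no mixture of instruction sets can reach the 1/4 agreement demanded by table 1:

from itertools import product

# All 8 instruction sets: a color (R or G) for each of the 3 switch settings.
# Both particles carry the same set, guaranteeing agreement on equal settings.
instruction_sets = list(product("RG", repeat=3))

# The 6 ordered pairs of *different* detector settings: 12, 13, 21, 23, 31, 32.
different_settings = [(i, j) for i in range(3) for j in range(3) if i != j]

for inst in instruction_sets:
    # Fraction of different-setting pairs on which the two lights agree.
    agree = sum(inst[i] == inst[j] for i, j in different_settings)
    frac = agree / len(different_settings)
    print("".join(inst), f"agreement = {frac:.3f}")
    assert frac >= 1 / 3  # every instruction set agrees at least 1/3 of the time

Running this prints agreement 1.000 for RRR and GGG and 0.333 for the other six sets; since every deterministic strategy agrees at least 1/3 of the time, no probability distribution over them can average down to 1/4.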
Quantum mechanics predicts a probability of measuring opposite spin projections given by P(θ) = cos²(θ/2), where θ is the relative angle between the settings of the detectors. For θ = 0 and θ = ±120°, the system reproduces the results of table 1, keeping all the assumptions. Three particle device Mermin's improved three particle device demonstrates the same concepts deterministically: no statistical analysis of multiple experiments is necessary. It has three detectors, each with two settings, 1 and 2, and two lights, one red and one green. Each run of the experiment consists of setting the switches to values 1 or 2 and observing the color of the lights that flash when particles enter the detectors. The detectors again are assumed to be independent of one another and cannot interact. For the improved device, the expected results are the following: if one detector is switched to setting 1 while the others are on setting 2, an odd number of red lights flashes. If all three detectors are set to 1, an odd number of red lights never flashes. Mermin then imagines that each of the three particles emitted from the common source and entering the detectors has a hidden instruction set, dictating which light to flash for each switch setting. If only one device of the three has a switch set to 1, there will always be an odd number of red flashes. However, Mermin shows that all possible instruction sets reproducing this behaviour also predict an odd number of red lights when all three devices are set to 1. No instruction set built into the particles can explain the expected results. This contradiction implies that no local hidden variable theory can explain such a device. Quantum mechanical implementation The improved device can be built using quantum mechanics. This implementation is based on the Greenberger–Horne–Zeilinger (GHZ) experiment. The device can be constructed if the three particles are quantum entangled in a GHZ state, written as |GHZ⟩ = (|000⟩ + |111⟩)/√2, where |0⟩ and |1⟩ represent two states of a two-level quantum system. For electrons, the two states can be the up and down projections of the spin along the z-axis. The detector settings correspond to two other orthogonal measurement directions (for example, projections along the x-axis or along the y-axis). See also Quantum pseudo-telepathy
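The two-particle statistics can also be checked numerically. This is a sketch with numpy (not from Mermin's paper): it prepares the singlet state, measures both particles along axes at 0° and ±120° in the same plane, and applies detector B's inverted color scheme at the end, so "same color" means opposite spin outcomes:

import numpy as np
from itertools import product

# Spin-up eigenvector along an axis at angle t (radians) in the x-z plane.
def up(t):
    return np.array([np.cos(t / 2), np.sin(t / 2)])

angles = [0.0, 2 * np.pi / 3, -2 * np.pi / 3]  # switch settings 1, 2, 3

# Singlet state (|ud> - |du>)/sqrt(2) in the basis {uu, ud, du, dd}.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

for i, j in product(range(3), repeat=2):
    p_same_color = 0.0
    for sa, sb in product([+1, -1], repeat=2):  # spin outcomes at A and B
        va = up(angles[i]) if sa > 0 else up(angles[i] + np.pi)
        vb = up(angles[j]) if sb > 0 else up(angles[j] + np.pi)
        amp = np.kron(va, vb) @ singlet  # projection amplitude
        # B's colors are inverted, so "same color" means opposite spins.
        if sa != sb:
            p_same_color += amp ** 2
    print(f"settings ({i+1},{j+1}): same color with probability {p_same_color:.3f}")

The printout shows probability 1.000 whenever the settings agree and 0.250 whenever they differ, which is exactly P(θ) = cos²(θ/2) evaluated at θ = 0 and θ = 120°, reproducing table 1.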
Mermin's device
[ "Physics" ]
1,883
[ "Quantum measurement", "Quantum mechanics", "Thought experiments in quantum mechanics" ]
71,993,594
https://en.wikipedia.org/wiki/Roberto%20Markarian
Roberto Markarian Abrahamian (born 12 December 1946) is a Uruguayan mathematician of Armenian descent, an expert in dynamical systems and chaos theory. Biography He started studying at the University of the Republic in the 1960s. During the civic-military dictatorship he was arrested for political reasons. He later went to Brazil, where he graduated from the Federal University of Rio Grande do Sul; his degree was subsequently validated in Uruguay. Markarian served as rector of the University of the Republic (2014–2018). He is the brother of the football coach Sergio Markarián.
Roberto Markarian
[ "Mathematics" ]
161
[ "Dynamical systems theorists", "Dynamical systems" ]
71,994,054
https://en.wikipedia.org/wiki/Algal%20virus
Algal viruses are the viruses infecting algae, which are photosynthetic single-celled eukaryotes. As of 2020, there were 61 viruses known to infect algae. Algae are integral components of aquatic food webs and drive nutrient cycling, so the viruses infecting algal populations also impact the organisms and nutrient-cycling systems that depend on them. Thus, these viruses can have significant, worldwide economic and ecological effects. Their genomes vary between 4.4 and 560 kilobase pairs (kbp) in length and use double-stranded deoxyribonucleic acid (dsDNA), double-stranded ribonucleic acid (dsRNA), single-stranded DNA (ssDNA), or single-stranded RNA (ssRNA). The viruses range between 20 and 210 nm in diameter. Since the discovery of the first algae-infecting virus in 1979, several different techniques have been used to find new viruses infecting algae, and it seems that there are many algae-infecting viruses left to be discovered. DNA viruses The viruses that store their genomic information using DNA, the DNA viruses, are the best-studied subgrouping of algae-infecting viruses. This is especially true for the dsDNA virus family Phycodnaviridae. However, other groups of dsDNA viruses, including giant viruses belonging to the family Mimiviridae, also infect algae. A recent survey of 65 algal genomes revealed that some viruses belonging to the Nucleocytoplasmic Large DNA Viruses (NCLDV), the larger viral group containing both the Phycodnaviridae and Mimiviridae families, had integrated themselves into 24 of the hosts' genomes. Recently, ssDNA viruses, like the group of diatom-infecting viruses Bacilladnaviridae, have been discovered. RNA viruses RNA viruses also attack algal hosts. There are dsRNA viruses, like those belonging to the family Reoviridae, that infect Micromonas pusilla, and ssRNA viruses, like those belonging to the genus Bacillarnavirus, that infect the diatom Chaetoceros tenuissimus. Incorporation of RNA virus genes into algal genomes has also been reported: genes from single-stranded dinoflagellate-infecting viruses have been detected in the genomes of the coral endosymbiotic algae Symbiodinium. Diatom viruses Diatoms are among the most common phytoplankton groups found within marine and freshwater systems. They are estimated to comprise roughly 12,000–13,000 species, which account for 35–70% of marine primary production and nearly 20% of the world's primary productivity. Diatoms, among other important primary producers, can be susceptible to viral infection. Diatom viruses are a group of viruses that infect diatoms and have been shown to significantly impact population dynamics and mortality rates within diatom populations. Diatom viruses are highly diverse and can have a variety of complex interactions with their specific host cells; they are also able to cause cell lysis and nutrient release, demonstrating their ecological importance through their potential to release nutrients into the ecosystem and regulate diatom populations. Diatom virus discovery The discovery of diatom viruses dates back to the early 1970s, when scientists first observed virus-like particles in cultures of diatoms. However, it was not until the 1990s that marine viruses were found to be potentially pathogenic towards marine organisms.
The first diatom virus was isolated and characterized from the Ariake Sea of Japan in 2004 and was classified as Rhizosolenia setigera RNA virus (RsRNAV). RsRNAV was found to replicate in the cytoplasm of the host organism; its viral particle has an icosahedral shape, lacks a tail, and carries a single-stranded RNA genome. Thereafter, another research study found a diatom virus isolated from the marine diatom Skeletonema costatum. The virus, later named the Skeletonema costatum DNA virus (ScDCV), was found to have an icosahedral capsid and a double-stranded DNA genome. Subsequently, another diatom virus was isolated and characterized as the Chaetoceros setoensis DNA virus (CsetDNAV), which infects the diatom Chaetoceros setoensis. This strain was identified in Chaetoceros setoensis collected off the coast of the Seto Inland Sea of Japan. The viral particle of CsetDNAV is found within the cytoplasm of the host organism, and its genome is a closed circular ssDNA with segments of unidentified linear single-stranded nucleotides; this genome structure is unique in that it has not been found in any other diatom virus. Because of the discovery of CsetDNAV and its unique genome structure, it is thought that other unidentified diatom viruses may possess diverse and unique genomic structures. Since the discovery of CsetDNAV, various other diatom viruses have been isolated and identified, including members of the Bacilladnaviridae, Phycodnaviridae, and Picornaviridae. Diatom virus transmission Diatom viruses are believed to be primarily transported through horizontal transmission, which occurs when a diatom virus infects a diatom and then spreads to other diatoms via direct contact or through interactions within the water column. Diatom viruses are highly infectious and can spread rapidly within diatom populations, leading to significant changes in the growth and physiology of the organisms. This transmission can be partly explained by the fact that diatoms are known to form large colonies or chains, which provides diatom viruses with ample opportunity to infect and spread among new host organisms. Additionally, some diatom viruses have been shown to be capable of vertical transmission, in which they are passed from a parent diatom to its offspring. This form of vertical transmission is thought to allow diatom viruses to persist over long periods of time and may influence the evolution and genetic diversity of diatom populations. The mechanisms of this type of transmission are not fully understood, but it is thought to be facilitated via asexual reproduction and cell division within diatoms. Taxonomy The following families are recognized. Bacilladnaviridae Phycodnaviridae Picornaviridae Applications Algae-sourced biodiesel Algal viruses have the potential to be used for the production of alternative energy sources to fossil fuels. More specifically, algal viruses can be used as a tool during the process of biodiesel production sourced from green microalgae and cyanobacteria.
This is because green algae and cyanobacteria contain components necessary for the creation of biofuels. In particular, both green algal cells and cyanobacteria produce lipids containing triglycerides, which are directly used to make biodiesel. However, green microalgae and cyanobacteria have not been used for standard commercial production of biodiesel due to the difficulty of extracting adequate yields of lipids from these sources. Use in biodiesel production Using algal viruses as a pretreatment on green microalgae has been found to be an effective procedural change for the lipid extraction process. One such algal virus was used given its characterized ability to infect and replicate inside unicellular Chlorella algal cells. After lytic algal viruses infect their host cell, they replicate and assemble mature virions that lyse the host algal cell upon release. The degradation of the cell wall of the algal host cell by the mature algal viruses inside the cell is the key factor that makes for a more efficient lipid extraction from the algal cells. Limits Researchers note that an algal virus may develop a lysogenic cycle after the recurrent use of one particular algal virus. In the case of a lysogenic viral conversion, the algal virus would not perform the desired task of lysing the host cell. Instead, when a lysogenic algal virus releases its genetic material into the algal host cell, the genetic material hides inside the algal host genome rather than beginning genetic replication and protein synthesis for the development of new mature virions. Controlling algal blooms In nature, algal viruses have been observed to play an ecological role in algal bloom demise. Given this, scientists have proposed that algal viruses could be used as biological treatments for algal bloom control. For example, a particular kind of algal virus, known as a cyanophage, can be used to control harmful algal blooms of cyanobacteria. Lytic cyanophages are often found in the presence of Microcystis cyanobacteria. Specifically, the impact that cyanophages have on the population control of Microcystis aeruginosa has been a topic of interest, given that this species of cyanobacteria is commonly responsible for harmful algal blooms. In one lab study, M. aeruginosa was collected and then treated with a cyanophage that had been found in the presence of M. aeruginosa in a lake. After six days, the M. aeruginosa algal biomass had decreased by 95 percent. The results of another lab study showed that cyanophages maintain their observed function of algal bloom demise in a controlled eutrophic setting; in this case, the biomass of M. aeruginosa also greatly decreased when treated with cyanophages. A negative correlation between M. aeruginosa and cyanophage abundance has also been recorded in a natural setting: in the freshwater Lake Mikata, researchers analyzed samples of M. aeruginosa growth and found that its biomass decreased as cyanophage population density increased.
Algal virus
[ "Biology" ]
2,184
[ "Viruses", "Tree of life (biology)", "Microorganisms", "Algae" ]
71,994,467
https://en.wikipedia.org/wiki/Heinrich%20Leutwyler
Heinrich Leutwyler (born 12 October 1938) is a Swiss theoretical physicist, with interests in elementary particle physics, the theory of strong interactions, and quantum field theory. Early life and education Leutwyler went to the Gymnasium in Bern and studied physics, mathematics, and astronomy at the University of Bern. After his diploma in 1960 he spent time in the US, including at Princeton. In 1962 he received his PhD under the supervision of John R. Klauder (at Bell Laboratories at the time), for his thesis entitled "Generally covariant Dirac equation and associated Boson Fields". Career In 1965 he received his habilitation in Bern, where he became assistant professor in the same year and full professor in 1969, remaining until his retirement in 2000. In 1983/84 he was dean of the Faculty of Sciences. Leutwyler made research visits to Bell Labs in Murray Hill (1963, 1965), Caltech in Pasadena (1973/74), and CERN (1969/70, 1983/84, and 1996). Together with Murray Gell-Mann and Harald Fritzsch, Leutwyler was crucially involved in establishing quantum chromodynamics (QCD) as the fundamental theory of strong interactions. Together with Jürg Gasser he performed influential work on chiral perturbation theory, an effective field theory describing QCD at low energies, including the Gasser–Leutwyler coefficients of the effective Lagrangian and the determination of current quark masses. Leutwyler received an honorary doctorate from the Johannes Gutenberg University Mainz (1995), the Humboldt Award (2000), the Pomeranchuk Prize (2011), and the Sakurai Prize (2023). Personal life He is married and has two children. Publications Fritzsch, Gell-Mann, and Leutwyler: Advantages of the color octet gluon picture. In: Physics Letters B, volume 47, 1973, p. 365. Gasser and Leutwyler: Quark masses. In: Physics Reports, volume 87, 1982, p. 77. Gasser and Leutwyler: Chiral Perturbation Theory to One Loop. In: Annals of Physics, volume 158, 1984, p. 142. Gasser and Leutwyler: Chiral Perturbation Theory: Expansions in the Mass of the Strange Quark. In: Nuclear Physics B, volume 250, 1985, p. 465. Leutwyler: On the history of the strong interaction, Erice 2012. External links Webpage at the University of Bern
Heinrich Leutwyler
[ "Physics" ]
566
[ "Particle physicists", "Particle physics" ]
71,994,871
https://en.wikipedia.org/wiki/Angus%20J.%20Wilkinson
Angus J. Wilkinson is a professor of materials science based at the Department of Materials, University of Oxford. He is a specialist in micromechanics, electron microscopy and crystal plasticity. He helps oversee the MicroMechanics group while focusing on the fundamentals of material deformation. He developed the HR-EBSD method for mapping stress and dislocation density at high spatial resolution, used in micron-scale mechanical testing and micro-cantilever experiments to extract data on mechanical properties relevant to materials engineering. Selected publications Wilkinson AJ, Meaden G, Dingley DJ. High-resolution elastic strain measurement from electron backscatter diffraction patterns: New levels of sensitivity. Ultramicroscopy 2006;106:307–13. https://doi.org/10.1016/j.ultramic.2005.10.001. Wilkinson AJ, Hirsch PB. Electron diffraction based techniques in scanning electron microscopy of bulk materials. Micron 1997;28:279–308. https://doi.org/10.1016/S0968-4328(97)00032-2. Wilkinson AJ, Britton T. Strains, planes, and EBSD in materials science. Materials Today 2012;15:366–76. https://doi.org/10.1016/S1369-7021(12)70163-3. Wilkinson AJ, Randman D. Determination of elastic strain fields and geometrically necessary dislocation distributions near nanoindents using electron back scatter diffraction. Philosophical Magazine 2010;90:1159–77. https://doi.org/10.1080/14786430903304145. Guo Y, Britton TB, Wilkinson AJ. Slip band–grain boundary interactions in commercial-purity titanium. Acta Mater 2014;76:1–12. https://doi.org/10.1016/J.ACTAMAT.2014.05.015. Britton TB, Wilkinson AJ. High resolution electron backscatter diffraction measurements of elastic strain variations in the presence of larger lattice rotations. Ultramicroscopy 2012;114:82–95. https://doi.org/10.1016/J.ULTRAMIC.2012.01.004. Zhai T, Wilkinson AJ, Martin JW. A crystallographic mechanism for fatigue crack propagation through grain boundaries. Acta Mater 2000;48:4917–27. https://doi.org/10.1016/S1359-6454(00)00214-7. See also Department of Materials, University of Oxford Electron backscatter diffraction Ultramicroscopy External links Angus Wilkinson at Department of Materials, University of Oxford Angus Wilkinson at Research.com.
Angus J. Wilkinson
[ "Chemistry", "Materials_science" ]
619
[ "Metallurgists", "Metallurgy", "Microscopists", "Microscopy" ]
71,995,177
https://en.wikipedia.org/wiki/Manganese%20cycle
The manganese cycle is the biogeochemical cycle of manganese through the atmosphere, hydrosphere, biosphere and lithosphere. There are bacteria that oxidise manganese to insoluble oxides, and others that reduce it to Mn2+ in order to use it. Manganese is a heavy metal that comprises about 0.1% of the Earth's crust and is a necessary element for biological processes. It is cycled through the Earth in ways similar to iron, but with distinct redox pathways. Human activities have impacted the fluxes of manganese among the different spheres of the Earth. Global manganese cycle Manganese is a necessary element for biological functions such as photosynthesis, and some manganese-oxidizing bacteria utilize this element in anoxic environments. Movement of manganese (Mn) among the global "spheres" (described below) is mediated by both physical and biological processes. Manganese in the lithosphere enters the hydrosphere through erosion and dissolution of bedrock in rivers; in solution it then makes its way into the ocean. Once in the ocean, Mn can form minerals and sink to the ocean floor, where the solid phase is buried. The global manganese cycle is being altered by anthropogenic influences, such as mining and mineral processing for industrial use, as well as through the burning of fossil fuels. Lithosphere Manganese is the tenth most abundant metal in the Earth's crust, making up approximately 0.1% of the total composition, or about 0.019 mol kg−1, found mostly in the oceanic crust. Crust Manganese (Mn) commonly precipitates in igneous rocks in the form of early-stage crystalline minerals, which, once exposed to water and/or oxygen, are highly soluble and easily oxidized to form Mn oxides on the surfaces of rocks. Dendritic crystals rich in Mn form when microbes, after utilizing Mn for their metabolism, reprecipitate it onto the surface of the rocks on which they develop. For certain cyanobacteria found on desert varnish samples, for example, it has been found that manganese is used as a catalytic antioxidant to facilitate survival in the harsh sunlight and water conditions they face on desert rock surfaces. Soil Manganese is an important soil micronutrient for plant growth, playing an essential role as a catalyst in the oxygen-evolving complex of photosystem II, a photosynthetic pathway. Soil fungi in particular have been found to oxidize the reduced, soluble form of manganese (Mn2+) under anaerobic conditions, and may reprecipitate it as manganese oxides (Mn3+ to Mn7+) under aerobic conditions, where the preferred metabolic pathway typically involves the utilization of oxygen. Although not all iron-reducing bacteria have the capability of reducing manganese, there is overlap in the taxa that can perform both metabolisms; these organisms are very common in a range of environmental conditions. Challenges persist, however, in isolating these microbes in culture. Depending on the pH, organic substrate availability, and oxygen concentration, Mn can behave as either an oxidation catalyst or an electron acceptor. Though much of the total Mn cycled in soil is biologically mediated, some inorganic reactions also contribute to Mn oxidation or the precipitation of Mn oxides. The reduction potential (pe) and pH are two known constraints on the solubility of Mn in soils. As pH increases, Mn speciation becomes less sensitive to variations in pe.
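The joint pe–pH control on Mn solubility can be made concrete with the reductive dissolution of a Mn(IV) oxide. This is a sketch, assuming the commonly tabulated equilibrium constant for MnO2 (log K ≈ 41.6), not a value taken from this article:

\[
\mathrm{MnO_2(s) + 4H^+ + 2e^- \rightleftharpoons Mn^{2+} + 2H_2O}, \qquad \log K \approx 41.6
\]
\[
pe \approx 20.8 \;-\; 2\,\mathrm{pH} \;-\; \tfrac{1}{2}\log[\mathrm{Mn^{2+}}]
\]

At a fixed Mn2+ activity, the boundary between soluble Mn(II) and oxide-bound Mn thus drops by two pe units per pH unit, consistent with the coupled sensitivity described above.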
In acidic (pH = 5) soils with high reduction potentials (pe > 8), the forms of Mn are mostly reducible, with exchangeable and soluble Mn decreasing dramatically in concentration as pe increases. Mn is also found in inorganic chelation complexes, where Mn forms coordinate bonds with SO42-, HCO3−, and Cl− ions. These complexes are important for organic matter stabilization in soils, as they have high surface areas and interact with organic matter through adsorption. Hydrosphere Iron (Fe) and manganese (Mn) share similarities in their respective cycles and are often studied together. Both have similar sources in the hydrosphere: hydrothermal vent fluxes, dust inputs, and weathering of rocks. The major removal of Mn from the ocean also involves processes similar to those for Fe, with the most significant removal from the hydrosphere occurring via biological uptake, oxidative precipitation, and scavenging. Microorganisms oxidize the bioavailable Mn(II) to form Mn(IV), an insoluble manganese oxide that aggregates to form particulate matter that can then sink to the ocean floor. Manganese is important in aquatic ecosystems for photosynthesis and other biological functions. Freshwater and estuary Advection from tidal flows re-suspends estuary beds and can unearth manganese. The particulate manganese is dissolved via reduction that forms Mn(II), adding it to the internal cycle of manganese in organisms in the ecosystem. Estuary biogeochemistry is heavily influenced by tidal oscillations, temperature, and pH changes, and thus the manganese input into the internal cycling is variable. Mn in rivers and streams typically has a lower residence time than in estuaries, and a large majority of the Mn is soluble Mn(II). In these freshwater ecosystems, manganese cycling depends on sediment fluxes that provide an influx of Mn into the system. Oxidation of Mn(II) from sediment drives the redox reactions that fuel the biogeochemical processes involving Mn, as well as Mn-reducing microbes. Marine In the ocean, different patterns of manganese cycling are seen. In the photic zone, there is a decrease in Mn particulate formation during the daytime, as rates of microbially catalyzed oxidation decrease and photo-dissolution of Mn oxides increases. The GEOTRACES program has led the production of the first global manganese model, with which predictions of global manganese distribution can be made. This global model found strong removal rates of Mn as water moves from the Atlantic Ocean surface to the North Atlantic deep water, resulting in Mn depletion in water moving southward along the thermohaline conveyor. Overall, when looking at organism interactions with manganese, it is known that redox reactions play a key role and that Mn has important biological functions; however, far less is known about uptake and remineralization processes than for iron. Early Earth Terrestrial manganese has existed since the formation of Earth around 4.6 Ga. The Sun and the Solar System formed during the collapse of a molecular cloud populated with many trace metals, including manganese. The chemical composition of the molecular cloud determined the composition of the many celestial bodies that formed within it. Nearby supernova explosions populated the cloud with manganese; the most common manganese-forming supernovae are Type Ia supernovae. The early Earth contained very little free oxygen (O2) until the Great Oxygenation Event around 2.35 Ga. Without O2, redox cycling of Mn was limited.
Instead, soluble Mn(II) was only released into the oceans via silicate weathering of igneous rocks and supplied through hydrothermal vents. The increase in Mn oxidation occurred during the Archean Eon (> 2.5 Ga), and the first evidence of manganese redox cycling appears ~2.4 Ga, before the Great Oxygenation Event and during the Paleoproterozoic Era. Although the Great Oxygenation Event raised the abundance of oxygen on Earth, oxygen levels were still relatively low compared to modern levels. It is believed that many primary producers were anoxygenic phototrophs and took advantage of abundant hydrogen sulfide (H2S) to catalyze photosynthesis. Anoxygenic phototrophy and oxygenic photosynthesis both require electron donors, with all known forms of anoxygenic phototrophy relying on reaction center electron acceptors with reduction potentials around 250–500 mV, whereas oxygenic photosynthesis requires reduction potentials around 1250 mV. It has been hypothesized that this wide difference in reduction potential indicates an evolutionary missing link in the origin of oxygenic photosynthesis, and Mn(II) is the leading candidate for bridging this gap. The water-oxidizing complex, a key component of photosystem II (PSII), begins with the oxidation of Mn(II), which, along with additional evidence, strongly supports the hypothesis that manganese was a necessary step in the evolution of oxygenic photosynthesis. Anthropogenic influences While manganese naturally occurs in the environment, the global Mn cycle is influenced by anthropogenic activities. Mn is utilized in many commercial products, such as fireworks, leather, paint, glass, fertilizer, animal feed, and dry cell batteries. However, the effect of Mn pollution from these sources is minor compared to that of mining and mineral processing. The burning of fossil fuels, such as coal and natural gas, further contributes to the anthropogenic cycling of Mn. Mining and mineral processing Anthropogenic influences on the manganese cycle stem mainly from industrial mining and mineral processing, specifically within the iron and steel industries. Mn is used in iron and steel production to improve hardness, strength, and stiffness, and is the primary component used in low-cost stainless steel and aluminum alloy production. Mining and mineral processing spread Mn through three pathways: wastewater discharge, industrial emissions, and releases into soils. Wastewater discharge Waste from mining and mineral processing facilities is typically separated into liquid and solid forms. Due to insufficient management and poor mining processes, especially in developing countries, liquid waste containing Mn can be discharged into bodies of water through anthropogenic effluents. Domestic wastewater and sewage sludge disposal are the main anthropogenic sources of Mn within aquatic ecosystems. In marine systems, the disposal of mine tailings contributes to aquatic anthropogenic Mn concentrations, where high levels can be toxic to marine life. Industrial emissions The main anthropogenic input of Mn to the atmosphere is through industrial emissions, and roughly 80% of industrial emissions of Mn are due to steel and iron processing facilities. In the Northern Hemisphere, some of the Mn pollutants released through industrial emissions are transferred to Arctic regions through atmospheric circulation, where particulates settle and accumulate in natural bodies of water.
Such atmospheric pollution by Mn can be hazardous for humans working or living near industrial facilities. Dust and smoke containing manganese dioxide and manganese tetroxide released into the air during mining are a primary cause of manganism in humans. Releases in soils The solid waste disposal of substances containing Mn by industrial sources typically ends up in landfills. Additional Mn deposition in soils can result from the settling of particulate Mn released through industrial emissions. An analysis of datasets on the soil chemistry of North America and Europe revealed that greater than 50% of the Mn in ridge soils near iron or steel processing facilities could be attributed to anthropogenic industrial inputs, whether through solid waste disposal or previously airborne particulates depositing in soils. Burning of fossil fuels Anthropogenically sourced Mn from the burning of fossil fuels has been found in the atmosphere, hydrosphere, and lithosphere. Mn is a trace element in fly ash, a residue from the use of coal for power production, which often ends up in the atmosphere, soils, and bodies of water. Methylcyclopentadienyl Mn tricarbonyl (MMT), a gasoline additive containing Mn, also contributes to anthropogenic Mn cycling. Due to the use of MMT as a fuel additive, motor vehicles are a significant source of Mn in the atmosphere, especially in regions of high traffic activity; in some regions, roughly 40% of Mn in the atmosphere is due to exhaust from traffic. Particulate manganese phosphate, manganese sulfate, and manganese oxide are the primary emissions from MMT combustion through its usage in gasoline. A portion of these particulates eventually leaves the atmosphere to settle in soils and bodies of water.
Manganese cycle
[ "Chemistry" ]
2,495
[ "Biogeochemical cycle", "Biogeochemistry" ]
71,995,920
https://en.wikipedia.org/wiki/Vertically%20Generalized%20Production%20Model
The Vertically Generalized Production Model (VGPM) is a model commonly used to estimate primary production within the ocean. The VGPM was designed by Behrenfeld and Falkowski and was originally published in a 1997 article in Limnology and Oceanography. It is one of the most frequently used models for primary production estimation due to its ability to be applied to chlorophyll a data from satellites and its relatively simple design. Chlorophyll a is a common measure of primary production, as it is a main component of photosynthesis. Primary production is often estimated using three variables: the biomass (or amount in weight) of the phytoplankton, the availability of light, and the rate of carbon fixation. The VGPM is now one of the most popular models to use with satellite chlorophyll data because it is surface-light dependent and uses an estimated maximum rate of primary production per unit of chlorophyll within the water column, known as PBopt. It also considers environmental factors that often influence primary production, and it allows primary production to be derived from variables collected by remote satellites without having to physically sample the water. PBopt was found to be dependent on sea surface temperature, and data for this can be collected using satellites. Satellites can only collect the parameters used to estimate primary production; they cannot calculate it themselves, which is why a model is needed. Because it is a generalized model, it is intended to reflect the open ocean most accurately; more localized areas, especially coastal regions, may need to incorporate additional factors to get the most accurate representation of primary production. The values produced using the VGPM are estimates, and there will be some level of uncertainty in using this model.
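A minimal Python sketch of the depth-integrated VGPM equation as published by Behrenfeld and Falkowski (1997). Variable names are illustrative, and the paper's own parameterizations of euphotic depth (from surface chlorophyll) and PBopt (from sea surface temperature) are omitted here and passed in directly:

def vgpm_npp(chl_surface, pb_opt, e0, z_eu, day_length):
    """Depth-integrated net primary production (mg C m^-2 day^-1).

    chl_surface: satellite surface chlorophyll (mg Chl m^-3)
    pb_opt:      maximum carbon fixation rate in the water column
                 (mg C (mg Chl)^-1 h^-1)
    e0:          daily surface PAR (mol quanta m^-2 day^-1)
    z_eu:        euphotic zone depth (m)
    day_length:  photoperiod (hours)
    """
    # Light-dependence term of the 1997 formulation: the fraction of the
    # euphotic column in which photosynthesis is light-saturated.
    light_term = e0 / (e0 + 4.1)
    return 0.66125 * pb_opt * light_term * chl_surface * z_eu * day_length

# Example with illustrative open-ocean values:
print(vgpm_npp(chl_surface=0.3, pb_opt=4.0, e0=40.0, z_eu=60.0, day_length=12.0))

The units multiply out to mg C m^-2 day^-1, which is why the photoperiod appears explicitly: pb_opt is an hourly rate.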
Vertically Generalized Production Model
[ "Physics", "Biology", "Environmental_science" ]
379
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Ecology" ]
71,998,469
https://en.wikipedia.org/wiki/475%20%C2%B0C%20embrittlement
Duplex stainless steels are a family of alloys with a two-phase microstructure consisting of both austenitic (face-centred cubic) and ferritic (body-centred cubic) phases. They offer excellent mechanical properties, corrosion resistance, and toughness compared to other types of stainless steel. However, duplex stainless steel can be susceptible to a phenomenon known as 475 °C embrittlement, or duplex stainless steel age hardening, a type of aging process that causes loss of plasticity when the material is held at temperatures around 475 °C. In this temperature range, spontaneous phase separation of the ferrite phase into iron-rich and chromium-rich nanophases occurs, with no change in the mechanical properties of the austenite phase. This type of embrittlement is due to precipitation hardening, which makes the material brittle and prone to cracking. Duplex stainless steel Duplex stainless steel is a type of stainless steel that has a two-phase microstructure consisting of both austenitic (face-centred cubic) and ferritic (body-centred cubic) phases. This dual-phase structure gives duplex stainless steel a combination of mechanical and corrosion-resistant properties that are superior to those of either austenitic or ferritic stainless steel alone. The austenitic phase provides the steel with good ductility, high toughness, and high corrosion resistance, especially in acidic and chloride-containing environments. The ferritic phase, on the other hand, provides the steel with good strength, high resistance to stress corrosion cracking, and high resistance to pitting and crevice corrosion. They are therefore used extensively in the offshore oil and gas industry for pipework systems, manifolds, risers, etc., and in the petrochemical industry in the form of pipelines and pressure vessels. A duplex stainless steel's mixture of austenite and ferrite is not necessarily in equal proportions; the alloy solidifies as ferrite, which is partially transformed to austenite as the temperature falls. Duplex steels have a higher chromium content compared to austenitic stainless steel, 20–28%; higher molybdenum, up to 5%; lower nickel, up to 9%; and 0.05–0.50% nitrogen. Thus, duplex stainless steel alloys have good corrosion resistance and higher strength than standard austenitic stainless steels such as type 304 or 316. The alpha (α) phase is the ferritic phase, with a body-centred cubic (BCC) structure, space group Im-3m (No. 229), a 2.866 Å lattice parameter, one twinning system {112}<111>, and three slip systems {110}<111>, {112}<111> and {123}<111>; however, the last system rarely activates. The gamma (γ) phase is austenitic, with a face-centred cubic (FCC) structure, space group Fm-3m (No. 225), and a 3.66 Å lattice parameter. It normally contains more nickel, copper, and interstitial carbon and nitrogen. Plastic deformation occurs more readily in austenite than in ferrite. During deformation, straight slip bands form in the austenite grains and propagate to the ferrite-austenite grain boundaries, assisting the slipping of the ferrite phase. Curved slip bands also form due to the bulk-ferrite-grain deformation. The formation of slip bands indicates a concentrated unidirectional slip on certain planes, causing a stress concentration. Age hardening by spinodal decomposition Duplex stainless steel can have limited toughness due to its large ferritic grain size and its tendency toward hardening and embrittlement, i.e., loss of plasticity, at elevated temperatures, most severely around 475 °C.
In this temperature range, spinodal decomposition of the supersaturated solid ferrite solution into an iron-rich nanophase (α) and a chromium-rich nanophase (α′), accompanied by G-phase precipitation, occurs. This makes the ferrite phase a preferential initiation site for micro-cracks, because aging encourages Σ3 {112}<111> ferrite deformation twinning at slow strain rates and room temperature in tensile or compressive deformation, nucleating from local stress concentration sites; parent-twin boundaries, with 60° (in or out) misorientation, are suitable sites for cleavage crack nucleation. Spinodal decomposition refers to the spontaneous separation of a phase into two coherent phases via uphill diffusion, i.e., from a region of lower concentration to a region of higher concentration, corresponding to a negative effective diffusion coefficient, without a barrier to nucleation, because the phase is thermodynamically unstable inside the miscibility gap (the α + α′ region of the phase diagram, where the curvature of the Gibbs free energy per mole of solution with respect to composition is negative). It increases hardness and decreases magnetism. A miscibility gap is the region in a phase diagram, below the melting point of each compound, in which a homogeneous solution splits into two separate stable phases. For 475 °C embrittlement to occur, the chromium content needs to exceed 12%. The addition of nickel accelerates the spinodal decomposition by promoting the formation of the iron-rich nanophase. Nitrogen changes the distribution of chromium, nickel, and molybdenum in the ferrite phase but does not prevent the phase decomposition. Other elements like molybdenum, manganese, and silicon do not affect the formation of the iron-rich nanophase. However, manganese and molybdenum partition to the iron-rich nanophase, while nickel partitions to the chromium-rich nanophase. Microscopy characterisation Using a field emission gun transmission electron microscope (FEG-TEM), the nanometre-scale modulated structure of the decomposed ferrite was revealed: the chromium-rich nanophase gives a bright image, the iron-rich a darker one. It also revealed that these modulated nanophases grow coarser with aging time. Decomposed phases start as irregular rounded shapes with no particular arrangement, but with time the chromium-rich nanophase takes a plate shape aligned along the <110> directions. Consequences Spinodal decomposition increases the hardening of the material due to the misfit between the chromium-rich and iron-rich nanophases, internal stress, and variation of elastic modulus. The formation of coherent precipitates induces an equal but opposite strain, raising the system's free energy depending on the precipitate shape and on the matrix and precipitate elastic properties. Around a spherical inclusion, the distortion is purely hydrostatic. G-phase precipitates appear prominently at grain boundaries. The G-phase is rich in nickel, titanium, and silicon, but chromium and manganese may substitute at titanium sites. G-phase precipitates occur during long-term aging, are encouraged by increasing nickel content in the ferrite phase, and reduce corrosion resistance significantly. The G-phase has an ellipsoid morphology, an FCC structure (Fm-3m), and an 11.4 Å lattice parameter, with a diameter of less than 50 nm that increases with aging. Thus, the embrittlement is caused by dislocation impediment/locking by the spinodally decomposed matrix and by the strain around G-phase precipitates, i.e., internal stress relaxation by the formation of Cottrell atmospheres.
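The thermodynamic criterion behind the spinodal decomposition described above can be stated compactly. With G the molar Gibbs free energy of the solution and c the composition, the effective interdiffusion coefficient is proportional to the curvature of G, so diffusion runs uphill wherever that curvature is negative (a standard Cahn-Hilliard-type relation, not specific to this alloy system):

\[
\tilde{D} \;\propto\; \frac{\partial^2 G}{\partial c^2}, \qquad
\text{spinodal region: } \frac{\partial^2 G}{\partial c^2} < 0 \;\Rightarrow\; \tilde{D} < 0,
\qquad
\text{spinodal boundary: } \frac{\partial^2 G}{\partial c^2} = 0 .
\]

Inside this region no nucleation barrier exists, which is why the α/α′ separation proceeds spontaneously throughout the ferrite.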
Furthermore, while the ferrite hardness increases with aging time, the hardness of the ductile austenite phase remains nearly unchanged, due to the faster diffusivity in ferrite compared to austenite. However, austenite undergoes a substitutional redistribution of elements, enhancing galvanic corrosion between the two phases. Treatment Heat treatment at 550 °C can reverse the spinodal decomposition but does not affect the G-phase precipitates. The ferrite matrix spinodal decomposition can also be substantially reversed by introducing an external pulsed electric current, which changes the system's free energy due to the difference in electrical conductivity between the nanophases and the dissolution of G-phase precipitates. Cyclic loading suppresses spinodal decomposition, and radiation accelerates it but changes the decomposition nature from an interconnected network of modulated nanophases to isolated islands.
475 °C embrittlement
[ "Materials_science", "Engineering" ]
1,748
[ "Materials degradation", "Materials science" ]
72,003,087
https://en.wikipedia.org/wiki/NRICM101
NRICM101 (Taiwan: Chingguan Yihau; Mainland: Qīng-guān Yī-hào) is a treatment for COVID-19 developed in Taiwan using Traditional Chinese Medicine (TCM), created by the National Research Institute of Chinese Medicine (NRICM), a governmental body of Taiwan. The prescription has gained legal approval in several countries, the first TCM recipe to do so for COVID-19 treatment. In its native Taiwan, the treatment is used in conjunction with vaccinations. Amongst the formulations available are RespireAid and COVRelief. The formula is provided in granule form, which is common for Chinese patent medicine. Formulation NRICM101 contains 10 herbal medicinal components: 黃芩 (Scutellaria root), 魚腥草 (heartleaf houttuynia), 栝蔞實 (Mongolian snakegourd fruit), 北板藍根 (indigowoad root), 厚朴 (magnolia bark), 薄荷 (peppermint herb), 荊芥 (fineleaf nepeta), 桑葉 (mulberry leaf), 防風 (Saposhnikovia root), and 甘草 (baked liquorice root). Method of action According to the NRICM, the herbal medicine is supposed to work by preventing the SARS-CoV-2 virus from binding with ACE2, which would reduce the chance of severe illness. Legal status On 18 May 2021, Taiwan licensed NRICM101 for distribution under an emergency access policy. It is licensed as a herbal drug in Singapore, Thailand, and Australia. It is registered as a herbal granule in the Philippines. It is registered as a dietary supplement or a "health food" in the EU and in Cambodia. It has also obtained licenses for importing into the U.S., the UK, Canada, and South Africa. Bringing NRICM101 into Argentina is illegal. NRICM102 In October 2021, a new formula called NRICM102 was released, in which five herbs were changed. Whereas NRICM101 is intended for mild-to-moderate cases, NRICM102 is intended for severe-to-critical cases, corresponding to score ranges of 1-4 and 5-7 on the WHO Clinical Progression Scale for COVID-19. The NRICM warns that the new formula is only suitable for severe cases and must not be produced and used without the approval of a TCM practitioner. Studies and efficacy The NRICM has held press conferences on the results of preclinical and clinical studies involving NRICM101 and NRICM102. Not all results have subsequently been published in peer-reviewed journals, and no high-quality meta-analysis has been done on what little has been published. As a result, it is difficult to draw many conclusions about their efficacy or safety. See also Lianhua Qingwen External links Project AIM: NRICM101: A Traditional Chinese Medicine formulation ClinicalTrials.gov: The Outcomes of NRICM101 on SARS-COV-2 (COVID-19) Infection National Research Institute of Chinese Medicine (Taiwan): Clarifications About NRICM101 National Research Institute of Chinese Medicine (Taiwan): NRICM101 Frequently Asked Questions
NRICM101
[ "Chemistry" ]
699
[ "Pharmacology", "Drug discovery", "Medicinal chemistry stubs", "COVID-19 drug development", "Pharmacology stubs" ]
72,003,155
https://en.wikipedia.org/wiki/Beta-model
In model theory, a mathematical discipline, a β-model (from the French "bon ordre", well-ordering) is a model which is correct about statements of the form "X is a well-ordering". The term was introduced by Mostowski (1959) as a strengthening of the notion of ω-model. In contrast to notation for set-theoretic properties named by ordinals (such as ordinal-indexed indescribability), the letter β here is only denotational. In analysis β-models appear in the study of the reverse mathematics of subsystems of second-order arithmetic. In this context, a β-model of a subsystem of second-order arithmetic is a model M where, for any Σ11 formula φ with parameters from M, M ⊨ φ iff φ is true (p. 243). Every β-model of second-order arithmetic is also an ω-model, since working within the model we can prove that < is a well-ordering, so < really is a well-ordering of the natural numbers of the model. There is an incompleteness theorem for β-models: if T is a recursively axiomatizable theory in the language of second-order arithmetic, then analogously to how there is a model of T + "there is no model of T" if there is a model of T, there is a β-model of T + "there are no countable coded β-models of T" if there is a β-model of T. A similar theorem holds for βn-models for any natural number n. Axioms based on β-models provide a natural finer division of the strengths of subsystems of second-order arithmetic, and also provide a way to formulate reflection principles. For example, over ACA0, Π11-CA0 is equivalent to the statement "for all X [of second-order sort], there exists a countable β-model M such that X ∈ M" (p. 253). (Countable ω-models are represented by their sets of integers, and their satisfaction is formalizable in the language of analysis by an inductive definition.) Also, the theory extending KP with a canonical axiom schema for a recursively Mahlo universe (often called KPM) is logically equivalent to the theory Δ12-CA + BI + (every true Π13-formula is satisfied by a β-model of Δ12-CA). Additionally, ACA0 proves a connection between β-models and the hyperjump: for every set X of integers, the hyperjump of X exists iff there exists a countable β-model M such that X ∈ M (p. 251). Every β-model of comprehension is elementarily equivalent to an ω-model which is not a β-model. In set theory A notion of β-model can be defined for models of second-order set theories (such as Morse-Kelley set theory) as a model such that its membership relation is well-founded and, for any relation R in the model, "R is well-founded" holds in the model iff R is in fact well-founded. While there is no least transitive model of MK, there is a least β-model of MK (pp. 17, 154–156).
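The defining correctness property can be displayed as a reflection scheme; a standard formulation consistent with the definition above (the Π11 case follows from the Σ11 case by taking negations):

\[
M \models \varphi \;\Longleftrightarrow\; \varphi \text{ is true}, \qquad \text{for all } \Sigma^1_1 \text{ (equivalently } \Pi^1_1\text{) sentences } \varphi \text{ with parameters in } M .
\]

Since "< is a well-ordering" is a Π11 statement, this scheme immediately yields the ω-model property mentioned in the text.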
Beta-model
[ "Mathematics" ]
646
[ "Mathematical logic" ]
72,003,335
https://en.wikipedia.org/wiki/Whole-cell%20vaccine
Whole-cell vaccines are a type of vaccine that has been prepared in the laboratory from entire cells. Such vaccines simultaneously contain multiple antigens to activate the immune system. They induce antigen-specific T-cell responses. Whole-cell vaccines have been researched in the fields of bacterial infectious disease (as inactivated vaccines) and cancer (as tumor cells modified to stimulate the immune system by secreting stimulatory molecules). One whole-cell vaccine that sees global use is the whole-cell pertussis vaccine. Against infectious disease Pertussis The causative organism of pertussis is Bordetella pertussis. The whole-cell pertussis vaccine is effective and safe in preventing this disease but is also associated with short-term side effects. The immune response produced by the whole-cell vaccine varies with the B. pertussis antigens it contains. The whole-cell pertussis vaccine contains inactivated bacterial cells bearing antigens such as pertussis toxin, adenylate cyclase toxin, lipooligosaccharides and agglutinogens. The whole-cell pertussis vaccine is prepared by growing Bordetella pertussis in a liquid medium. After the inactivation of the bacteria, a specific cellular concentration is aliquoted. The vaccine efficacy ranges between 36 and 98%. Advantages over acellular pertussis vaccine The whole-cell pertussis vaccine mimics natural infection more closely than the acellular pertussis vaccine. Even though cell-mediated immunity persists in patients who received the acellular pertussis vaccine, stronger lymphocytic proliferation (specifically of memory T helper 1 and T helper 17 cells) and stronger cytokine responses are observed in patients who received the whole-cell pertussis vaccine. Vaccination with the whole-cell pertussis vaccine lowers the risks of pulmonary infection and nasal colonization, due to the increased production of tissue-resident memory cells. Pneumococcus The whole-cell pneumococcal vaccine consisted of inactive Streptococcus pneumoniae RM200 cells and was the first whole-cell vaccine used against S. pneumoniae. In 2012, Phase-I studies were conducted by combining the whole-cell vaccine with alum. 1 out of 42 participants experienced adverse reactions, which were not related to vaccination. The mild reactions experienced were similar to those in the control groups. Immunoglobulin G responses to the whole-cell vaccine were determined by pan-proteome microassay; the whole-cell pneumococcal vaccine induced an increase in IgG response to a naturally immunogenic protein expressed by RM200 and also caused a reaction to PclA, PspC and ZmpB protein variants. Against cancer The whole-cell tumour vaccine is based on the logic that tumour cells will contain proteins produced by the cancer lesion and will provide multiple antigens for immune recognition. Whole-cell tumour vaccines represent one form of immunotherapy undergoing clinical development. To make a whole-cell tumor vaccine, tumor cells from the patient are transduced so that they produce costimulatory molecules such as cytokines, chemokines, and others. The cells are irradiated so they cannot grow like the parent tumor, but can still express the tumor antigens and the additional molecules. Phase I & II clinical trials of various whole-cell tumour vaccines indicate this method is safe for cancer patients. The advantage of a whole-cell vaccine is that the cells provide a source of all potential antigens, eliminating the need to identify the most optimal antigen to target in a particular type of cancer.
Multiple tumour antigens can be targeted simultaneously, generating an immune response to various tumour antigens. Advantages Whole tumour cell vaccines contain characterised and uncharacterised tumour-associated antigens that can be processed by antigen-presenting cells to stimulate the immune system; this makes the whole tumour cell vaccine different from other antigen-specific vaccines. Antigen-presenting cells can present tumour-associated antigens to CD8+ and CD4+ T cells via MHC I & II, respectively. The simultaneous presentation on MHC I & II leads to a robust immune response against tumours. It also induces an immune response to multiple epitopes within an antigenic protein. Disadvantages The use of whole tumour cells for vaccine preparation is not very specific because only a portion of the antigens expressed by tumour cells are specific to tumours, and the rest of the antigens are present in normal cells. A tumour biopsy is needed to prepare autologous tumour cell vaccines. In some cases, the cells obtained through tumour biopsy may not be sufficient, or the tumour cells might have undergone necrosis. The tumour cells in whole tumour cell vaccines can release immunosuppressive cytokines like TGF-β, inhibiting the development of a proper immune response. Antigen presented to CD8+ T cells via MHC-I may fail to elicit a response against tumour antigens due to a lack of expression of costimulatory molecules like CD80 & CD86 in these cancer cells. Mode of action The whole tumour cell vaccine consists of identified and unidentified tumour antigens. Antigen-presenting cells present these tumour antigens via Major Histocompatibility Complex Class I & II to CD8+ T lymphocytes and CD4+ T lymphocytes, respectively. By engaging the Fas ligand or secreting lytic enzymes, cytotoxic T lymphocytes can induce apoptosis of tumour cells. Activated CD4+ T cells stimulate natural killer cells, activate the humoral immune response, and promote the activity of CD8+ T cells. Vaccine-induced immune responses are measured by delayed-type hypersensitivity responses to autologous tumour cells. Granulocyte-macrophage colony-stimulating factor (GM-CSF) is superior to other cytokines in this setting, and the addition of GM-CSF to a whole-cell vaccine results in a better response against tumour cells. GM-CSF recruits dendritic cells to the site of the irradiated cells and stimulates antigen uptake, processing and presentation. These dendritic cells then facilitate the T-cell response by interacting with CD8+ T cells. See also Pertussis vaccine Pneumococcal vaccine References Vaccines
Whole-cell vaccine
[ "Biology" ]
1,336
[ "Vaccination", "Vaccines" ]
72,003,352
https://en.wikipedia.org/wiki/Fuzzy%20differential%20inclusion
Fuzzy differential inclusion is the extension of differential inclusion to fuzzy sets, introduced by Lotfi A. Zadeh. In its standard α-cut formulation it takes the form x′(t) ∈ [f(t, x(t))]^α with x(t₀) ∈ [x₀]^α. Here f is a fuzzy-valued continuous function on Euclidean space, taking values in the collection of all normal, upper semi-continuous, convex, compactly supported fuzzy subsets of ℝⁿ. Second order differential A second-order fuzzy differential inclusion has the analogous form x″(t) ∈ [f(t, x(t), x′(t))]^α, in which the fuzzy data may be given, for example, by a trapezoidal fuzzy number or by a triangular fuzzy number such as (−1, 0, 1). Applications Fuzzy differential inclusion (FDI) has applications in cybernetics, artificial intelligence, neural networks, medical imaging, robotics, atmospheric dispersion modeling, weather forecasting (including cyclones), pattern recognition, and population biology. References Dynamical systems Variational analysis Fuzzy logic
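The α-cut formulation reduces, for each fixed α, to an interval-valued differential inclusion whose endpoint trajectories can be integrated separately. The following minimal Python sketch is illustrative only: the linear right-hand side x′(t) ∈ [k]^α·x(t) and the triangular fuzzy number k = (−1, 0, 1) are assumptions chosen for the example, not part of the definition above.

    def alpha_cut_triangular(a, b, c, alpha):
        """Return the α-cut [lo, hi] of the triangular fuzzy number (a, b, c)."""
        return a + alpha * (b - a), c - alpha * (c - b)

    def euler_inclusion(x0, k_lo, k_hi, t_end=1.0, steps=1000):
        """Integrate the extremal trajectories of x'(t) in [k_lo, k_hi]*x(t),
        x(0) = x0 > 0, with the explicit Euler method; for x > 0 the lower
        endpoint evolves with k_lo and the upper endpoint with k_hi."""
        dt = t_end / steps
        lo = hi = x0
        for _ in range(steps):
            lo += dt * k_lo * lo
            hi += dt * k_hi * hi
        return lo, hi

    for alpha in (0.0, 0.5, 1.0):
        k_lo, k_hi = alpha_cut_triangular(-1.0, 0.0, 1.0, alpha)
        print(alpha, euler_inclusion(1.0, k_lo, k_hi))

At α = 1 the cut collapses to {0} and the solution set is the single constant trajectory; at α = 0 the solution funnel at t = 1 spans roughly [e⁻¹, e].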
Fuzzy differential inclusion
[ "Physics", "Mathematics" ]
140
[ "Mechanics", "Dynamical systems" ]
72,004,229
https://en.wikipedia.org/wiki/GRB%20221009A
GRB 221009A was an extraordinarily bright and very energetic gamma-ray burst (GRB) jointly discovered by the Neil Gehrels Swift Observatory and the Fermi Gamma-ray Space Telescope on October 9, 2022. The gamma-ray burst was ten minutes long, but was detectable for more than ten hours following initial detection. Despite being around 2.4 billion light-years away, it was powerful enough to affect Earth's atmosphere, having the strongest effect ever recorded by a gamma-ray burst on the planet. The peak luminosity of GRB 221009A was measured by Konus-Wind to be ~2.1 × 10⁴⁷ W and by the Fermi Gamma-ray Burst Monitor to be ~1.0 × 10⁴⁷ W over its 1.024 s interval. A burst as energetic and as close to Earth as 221009A is thought to be a once-in-10,000-year event. It was the brightest and most energetic gamma-ray burst ever recorded, with some dubbing it the BOAT, or Brightest Of All Time. Characterization GRB 221009A came from the constellation of Sagitta and occurred an estimated 1.9 billion years ago; however, its source is now 2.4 billion light-years away from Earth due to the expansion of the universe during the time-of-flight to Earth. The burst's high-intensity emissions spanned 15 orders of magnitude on the electromagnetic spectrum, from radio emissions to gamma rays. Radio signals broadcast by the winding down of whatever process created the initial burst will likely linger for years to come. This broadband emission offers the rare opportunity to study normally fleeting GRBs in great detail. Observation The burst saturated the Fermi Gamma-ray Space Telescope's detector, which captured gamma-ray photons with energies exceeding 100 GeV. GRB 221009A is by far the most productive event for very high-energy (VHE) photons ever witnessed by scientific instrumentation. Before GRB 221009A, the number of very high-energy photons detected over the entire history of GRB astronomy numbered only a few hundred. The burst also marked the first time that very high-energy (VHE) photon emissions from a GRB were detected during the early epoch. When the burst's radiation arrived at Earth, the Large High Altitude Air Shower Observatory (LHAASO) alone saw more than 5,000 such VHE photons. Some of these photons arrived at Earth carrying a record 18 TeV of energy, which is more than can be produced at the Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN). Russia's Carpet-2 facility at the Baksan Neutrino Observatory may have also recorded a single 251-TeV photon from this burst. These detected energies far exceed those of GRB 190114C, which had up to 1 TeV of energy, and GRB 190829A, which had up to 3.3 TeV of energy, with 221009A being the first and only GRB so far to have photons above 10 TeV. The burst possibly had the signature of accelerating ultra-high-energy cosmic rays for the first time, with one study estimating that if cosmic rays were accelerated by the burst, they probably would have reached energies of 1 ZeV (10²¹ electronvolts) or greater, almost an order of magnitude or more above the Oh-My-God particle, which is the highest-energy cosmic ray ever observed. GRB 221009A was subsequently observed by the Neutron Star Interior Composition Explorer (NICER), the Monitor of All-sky X-ray Image (MAXI), the Imaging X-ray Polarimetry Explorer (IXPE), the International Gamma-ray Astrophysics Laboratory (INTEGRAL), the XMM-Newton space telescope, the Large High Altitude Air Shower Observatory (LHAASO) and many others.
Origin Observations with the James Webb Space Telescope (JWST) have confirmed that GRB 221009A was caused by a massive star undergoing a supernova. The supernova was a Type Ic supernova, similar to SN 1998bw, the first supernova linked to a GRB. Lightning detectors in India and Germany picked up signs that the Earth's ionosphere was perturbed for several hours by the burst, though only mildly, as well as an enormous influx of electrically charged particles, showing just how powerful it was. Further, one study described the relativistic jet of this gamma-ray burst as having an unusual structure. Record magnitude Some astronomers referred to the burst as the "brightest of all time", or by the acronym "BOAT". Dan Perley, an astrophysicist at the Astrophysics Research Institute at Liverpool John Moores University, stated that "There is nothing in human experience that comes anywhere remotely close to such an outpouring of energy. Nothing." Eric Burns, an astronomer at Louisiana State University, also stated about the energy of the burst that "The energy of this thing is so extreme that if you took the entire Sun and you converted all of it into pure energy, it still wouldn't match this event. There's just nothing comparable." The power of gamma-ray bursts may be gauged by the degree of interaction between the gamma rays they emit and the ubiquitous lanes of interstellar dust in deep space. Such interactions generate an afterglow in X-ray frequencies, usually seen as concentric rings of scattered X-rays with the gamma-ray burst at the center. GRB 221009A is only the seventh gamma-ray burst known to have generated these rings, and as of March 2023, a record twenty X-ray afterglow rings had been identified around the burst, triple the previous record. The afterglow of GRB 221009A was the brightest ever recorded, beating the record of GRB 030329. The burst also had the brightest X-ray afterglow ever recorded, around a thousand times brighter than that of the typical GRB. It also had the brightest UVOT afterglow ever recorded once corrected for extinction. It had the largest amount of energy ever recorded in the TeV range, and had the most energetic photons ever recorded for a GRB, peaking at 18 TeV. The burst was ten times brighter than any previous GRB detected by the Swift mission. It was the brightest and most intense GRB detected by Konus-Wind. The prompt emission of the burst far surpassed anything before it, far exceeding four previous GRB record-holders: no GRB had been recorded delivering more than 500,000 gamma-ray photons per second, yet this GRB peaked at over 6 million photons per second. The burst was so bright that it blinded most gamma-ray instruments in space, preventing a true recording of its intensity. It was even detected by satellites not designed to detect gamma-ray bursts, such as Voyager 1 and a pair of Mars orbiters. GRB 221009A could have produced multi-TeV gamma rays for more than a week after the prompt phase, a duration unique to GRB 221009A and far longer than other bursts such as GRB 180720B, which produced multi-TeV gamma rays for ten hours after the prompt phase, and GRB 190829A, which produced multi-TeV gamma rays for nearly three days after the prompt phase. Despite being around two billion light-years away, the burst was powerful enough to affect Earth's atmosphere, having the strongest effect ever recorded by a gamma-ray burst on the planet.
It triggered instruments generally reserved for studying solar flares, with solar physicist Laura Hayes at the ESA stating that it left an "imprint comparable to that of a major solar flare" from the nearby Sun, meaning the burst had the same effect as a solar flare over 100 trillion times closer; based on the VLF amplitude increase, the burst was equivalent to a C3 to M1 class solar flare. Also significant is that the burst was detectable in daytime observations, where solar radiation dominates, as compared to the night-time ionosphere, which is much more sensitive to external disturbances when solar radiation is not dominating, showing just how large the burst was. It also disrupted longwave radio communications. With a radiated isotropic-equivalent energy of around 1.2×10⁵⁵ erg or even 3×10⁵⁵ erg, to as high as 1.4×10⁵⁷ erg, GRB 221009A, together with events such as the 1.5×10⁵³ erg transient AT 2021lwx, the 10⁶¹ erg MS 0735.6+7421 event, and the 5×10⁶¹ erg Ophiuchus Supercluster eruption, is among the most energetic events ever recorded. However, AT 2021lwx occurred over a span of three years, while the eruptions were high-energy, low-power events occurring over millions of years, as compared to GRB 221009A, which occurred over a minuscule time frame in comparison. In the binary-driven hypernova model, the isotropic total of the burst is its true energy total. Physicist and author Don Lincoln described it as being the "greatest cosmic explosion humanity has ever seen". Possible pair annihilation Reanalysis of the data from the Fermi Gamma-ray Space Telescope revealed, with 6.2σ significance, an emission line 5 minutes after the burst was detected and after it had dimmed enough to end saturation effects of the instruments. The signal persisted for at least 40 seconds, and reached a peak with an energy of about 12 MeV and a luminosity of about 0.43×10⁵⁰ erg/s; it has been interpreted as electron–positron annihilation radiation blueshifted from the usual 0.511 MeV by the strong energy of the relativistic jet, which itself can appear as a superluminal jet. Relevance to new physics Through comparison of data collected by different observatories, scientists concluded that the 221009A event was 50 to 70 times brighter than the previous record holder, as well as far more energetic. The extremely bright peak and long afterglow may help physicists study the manner in which matter interacts at relativistic speeds, the only known regime capable of generating gamma-ray photons with more than 100 GeV of energy. GRB 221009A has enabled scientists to impose stringent limits on any violations of Lorentz invariance proposed in certain theories of quantum gravity. Studies of 221009A and similarly extreme events are at present humanity's only access to particles with energies larger than any that can be generated artificially. The close examination of 221009A may eventually yield physical explanations that are neither predicted by nor accounted for in the Standard Model. It has become the most studied gamma-ray burst in history. Gallery See also List of gamma-ray bursts Gamma-ray astronomy AT 2021lwx GRB 160625B GRB 130427A GRB 230307A, second-brightest gamma-ray burst by gamma-ray fluence with 3×10⁻⁴ erg cm⁻² GRB 080916C, second most powerful gamma-ray burst with 8.8×10⁵⁴ erg References External links Gamma-ray bursts Astronomical objects discovered in 2022 Sagitta
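The isotropic-equivalent energies quoted above follow from the observed fluence and the luminosity distance via E_iso = 4π d_L² S / (1 + z). The Python sketch below illustrates the relation; the redshift, distance, and fluence are rounded illustrative values broadly consistent with published estimates for this burst, not authoritative measurements.

    import math

    # Illustrative, rounded inputs (assumptions, not authoritative values):
    z = 0.151                 # approximate redshift of GRB 221009A
    d_L_mpc = 745.0           # approximate luminosity distance in megaparsecs
    fluence = 0.2             # approximate bolometric fluence in erg/cm^2

    CM_PER_MPC = 3.086e24     # centimeters per megaparsec

    d_L = d_L_mpc * CM_PER_MPC
    # Isotropic-equivalent energy; (1 + z) converts observer-frame fluence
    # to source-frame energy (k-corrections neglected).
    E_iso = 4.0 * math.pi * d_L**2 * fluence / (1.0 + z)
    print(f"E_iso = {E_iso:.1e} erg")   # on the order of 10^55 erg

With these inputs the result is about 1.2×10⁵⁵ erg, matching the lower of the figures quoted above.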
GRB 221009A
[ "Physics", "Astronomy" ]
2,379
[ "Physical phenomena", "Sagitta", "Astronomical events", "Constellations", "Gamma-ray bursts", "Stellar phenomena" ]
72,005,840
https://en.wikipedia.org/wiki/DW%20Ursae%20Majoris
DW Ursae Majoris is an eclipsing binary star system in the northern circumpolar constellation of Ursa Major, abbreviated DW UMa. It is a cataclysmic variable of the SW Sextantis type, consisting of a compact white dwarf that is accreting matter from an orbiting companion star. The brightness of this source ranges from an apparent visual magnitude of 13.6 down to magnitude 18, which is too faint to be viewed with the naked eye. The distance to this system is approximately 1,920 light years based on parallax measurements. In 1982, R. F. Green and associates identified this star as a cataclysmic variable candidate with the Palomar–Green survey designation PG 1030+590. A. W. Shafter and F. V. Hessman in 1984 found this to be a close eclipsing binary system with a period of 3.27 hours. This is a nova-like binary where mass is being transferred from a late-type star to a white dwarf companion. This material first accumulates in an accretion disk orbiting the white dwarf. Typically, the light curve for an eclipsing binary of this type should display a hump-like feature from where the stream of material interacts with the disk. However, during early observations, no such feature was observed before the eclipse. The behavior of the emission lines in the spectrum of this star was found to resemble that of other SW Sextantis variables. In 2000, the system was observed with the Hubble Space Telescope and was found to be in a low state about three magnitudes fainter, unlike previous observations where it had been in a high state. Comparison of the ultraviolet spectrum in the two states suggested that the accretion disk is self-eclipsing and can obscure the view of the white dwarf. The light output of the system undergoes a 13.6-year cycle of variation, probably because of precession of the accretion disk. Both positive and negative superhumps are observed, varying over time in a complex fashion. Mass is being transferred from the donor star onto the white dwarf. References Further reading M-type main-sequence stars White dwarfs Cataclysmic variable stars Ursa Major Ursae Majoris, DW
DW Ursae Majoris
[ "Astronomy" ]
478
[ "Ursa Major", "Constellations" ]
72,006,492
https://en.wikipedia.org/wiki/Sfold
Sfold is a software program developed to predict probable RNA secondary structures through structure ensemble sampling and centroid predictions, with a focus on assessment of RNA target accessibility, for major applications in the rational design of siRNAs for the suppression of gene expression and in the identification of targets for regulatory RNAs, particularly microRNAs. Development The core RNA secondary structure prediction algorithm is based on rigorous statistical (stochastic) sampling of the Boltzmann ensemble of RNA secondary structures, enabling statistical characterization of any local structural features of potential interest to experimental investigators. In a review on nucleic acid structure and prediction, the potential of structure sampling described in a prototype algorithm was highlighted. With the publication of the mature algorithms for Sfold, the sampling approach became the focus of a review. Both the sampling approach and the centroid predictions were discussed in a comprehensive review. As an application module of the Sfold package, the STarMir program has been widely used for its capability in modeling target accessibility. STarMir was described in an independent study on microRNA target prediction. STarMir predictions have been used in an attempt to derive improved predictions. Predictions by Sfold have led to new biological insights. The novel ideas of ensemble sampling and centroids have been adopted by others not only for RNA problems, but also for other fundamental problems in computational biology and genomics. An implementation of stochastic sampling has been included in two widely used RNA software packages, RNAstructure and the ViennaRNA Package, which are also based on the Turner RNA thermodynamic parameters. Sfold was featured on a Nucleic Acids Research cover, and was highlighted in Science NetWatch. The underlying novel model for STarMir was featured in the Cell Biology section of Nature Research Highlights. Distribution Sfold runs on Linux, is freely available to the scientific community for non-commercial applications, and is available under license for commercial applications. Both the source code and the executables are available at GitHub. External links Sfold GitHub repository Sfold commercial licensing Sfold GitHub page References Computational biology
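The sampling-and-centroid idea can be shown in miniature. The toy Python sketch below draws structures from a Boltzmann distribution over a handful of candidate structures with made-up free energies and then picks a centroid-like representative; it illustrates the general idea only, not Sfold's actual algorithm, which samples the full secondary-structure ensemble with a dynamic-programming recursion.

    import math, random

    # Toy ensemble: dot-bracket structures with hypothetical free energies (kcal/mol).
    structures = {
        "((((....))))": -8.0,
        "(((......)))": -6.5,
        "((........))": -4.0,
        "............":  0.0,
    }

    RT = 0.616  # kcal/mol at ~37 degrees C

    # Boltzmann weights and normalized probabilities.
    weights = {s: math.exp(-e / RT) for s, e in structures.items()}
    Z = sum(weights.values())
    probs = {s: w / Z for s, w in weights.items()}

    # Statistically sample 1,000 structures from the ensemble.
    sample = random.choices(list(probs), weights=list(probs.values()), k=1000)

    def distance(a, b):
        """Crude structural distance: number of positions that differ."""
        return sum(x != y for x, y in zip(a, b))

    # Centroid-like representative: least total distance to the sample.
    centroid = min(set(sample), key=lambda s: sum(distance(s, t) for t in sample))
    print(centroid)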
Sfold
[ "Biology" ]
414
[ "Computational biology" ]
72,006,534
https://en.wikipedia.org/wiki/Transmission%20congestion
In power engineering, transmission congestion occurs when overloaded transmission lines in an electrical grid are unable to carry additional electricity flow due to the risk of overheating. During grid congestion, the transmission system operator (TSO) has to direct the providers to adjust their dispatch levels to accommodate the constraint. In an electricity market, a power plant may be able to produce electricity at a competitive price but be unable to transmit the power to a willing buyer. Congestion increases the electricity prices for some customers. Definitions There is no universally accepted definition of transmission congestion. Congestion is not an event, so it is frequently not possible to pinpoint its place and time (in this respect it is similar to traffic congestion). Regulators define congestion as a condition that prevents market transactions from being completed, while a transmission system operator sees it as an inability to maintain the security of the power system operation with the power flow scheduled for the grid. Congestion is a symptom of a constraint or a combination of constraints in a transmission system; usually the limits on physical electricity flow are used to prevent overheating, unacceptable voltage levels, and loss of system stability. Congestion can be permanent, an effect of the system configuration, or temporary, due to a fault in the transmission equipment. Congestion management Avoiding congestion is essential for a competitive electricity market and is "one of the toughest problems" of its design. The goal is to ensure that a power flow as defined by the wholesale market result does not violate the constraints during the normal operation of the grid and in the case of failure of any one particular component (the so-called n-1 criterion). The existing markets use a range of approaches to solve the problem. On one end of this range is "uniform pricing", which ignores the transmission constraints altogether and lets the market find a single price for all the locations ("nodes"). On the other end, "locational marginal pricing" accommodates all the constraints by defining a separate price for each node (thus another name, "nodal pricing"). Uniform pricing has the advantage of a transparent market design and quick clearing, so auctions can happen frequently; typically they start a day ahead of the delivery (the so-called "day-ahead" auction) and continue until the delivery (the "intra-day" auctions). However, the market result might violate the congestion constraints and thus cannot be implemented at the time of delivery (in "real-time"). If this is the case, the TSO intervenes and uses so-called system redispatch, changing the schedules of the generators in a way that the load can be served. Redispatch payments are usually negotiated in advance and providers are paid as they bid in a "command and control" fashion, without creating a market. With nodal pricing, all grid constraints are accounted for during the clearing and different prices are set for different nodes; this typically requires an independent system operator (ISO) to manage the market clearing. The drawback of nodal pricing is that the local markets might not have enough participants to function efficiently. In particular, in the load pockets (areas of the grid with concentrated load and a lack of tie lines to the rest of the system) a large generator might exhibit significant market power, forcing the price for this node to be directly regulated on a cost basis.
Zonal pricing represents a compromise where the grid is split into relatively large zones; the electricity price within each zone is uniform (and thus intra-zone congestion needs to be resolved with a redispatch), but the inter-zone constraints are accounted for during the market clearing via different prices for different zones. Under "discriminatory pricing", providers whose bids are accepted by the system operator are paid the amount of their bid ("pay-as-offered", "pay-as-bid"). Discriminatory pricing is also used in a market-based redispatch scenario (counter-trading). Transmission rights To avoid congestion, it might be necessary to deny some transmission transactions. One way to do it is through transmission rights. The owner of a transmission right is entitled to transport a predefined amount of electric power from a source location on the network to the destination. There are two types of transmission rights: physical transmission right (PTR) provides a property right to a portion of the capacity of a transmission line, which is reserved for the holder's exclusive use (the holder can deny non-holders access to the transmission capacity). The right can be acquired by building a transmission line or by purchasing the right from some other holder, so costs are typically known in advance. The owner can "sublet" the capacity to supplement the return on investment (for example, at a time when the capacity is not being used). The PTRs are essentially self-scheduling and in practice not only can interfere with the ability of a system operator to perform the economic dispatch, but are incompatible with locational marginal pricing, as the holder of a right from, say, A to B can artificially increase prices in B (and lower prices in A) by simply withholding access; financial transmission right (FTR) is similar to PTR in appearance (it specifies the source, destination, and power in MW), but does not reserve the line; instead it provides its holder with a payout that is equal to the difference in the price of electricity between the source and the destination (a form of congestion rent). The funds for the payment are collected whenever the electricity is purchased in the lower-cost location and resold in a higher-cost one, so the FTRs cannot be used in a uniform pricing market arrangement. Example of an FTR operation In a simple example of FTR operation, locations A and B are connected with a 1000 MW line. Location A has a load of 200 MW and two generation companies: GA1 with 1000 MW capacity and marginal cost of $10/MW; GA2 with 1000 MW capacity and marginal cost of $15/MW. Location B has a load of 2500 MW and a single generator GB with 2000 MW capacity and marginal cost of $30/MW. The electricity market with locational pricing will fully engage the 1000 MW line and settle on: $15 at location A, as GA1 cannot satisfy all demand (transmission line plus local load) and the price will thus be determined by GA2; $30 at B: the transmission line cannot satisfy all local load and the price is thus determined by GB. GA1, standing to gain most if the links between A and B are improved, decides to build another 1000-MW transmission line. Now there is no congestion, and the market will settle at the same price in both A and B ($30, since GA1 and GA2 cannot satisfy all demand, and the price will be determined by the cost of GB).
GA1 will hold the FTR for 1000 MW, but will not collect anything from this right, instead pocketing the difference between its $10 cost and the $30 price. A new plant, GA3, is constructed in A with a capacity of 1000 MW and a marginal cost of $9/MW. Now the price in A is $15 again (determined by GA2), while the price at B is still $30. Although the line built by GA1 might now be effectively used by GA3, GA1 as the holder of the FTR receives the congestion rent for electricity transmitted over the line that GA1 had invested into. The arrangement works as if GA1 had leased the line to GA3 for the full value of the line, so FTRs are similar to tradable securities, but with automated trading. Transmission access charge Some transmission system operators offer to collect the transmission fees on behalf of the transmission rights owners. For example, in California the California Independent System Operator (CAISO) offers the PTR owners a scheme where the owners hand over the operational control of their infrastructure to CAISO in exchange for the "Transmission Revenue Requirement" (TRR) that recoups the owner's costs. CAISO in turn collects the Transmission Access Charge (TAC) from the utilities based on the gross load, and utilities bill the TAC to their customers. References Sources Electric power transmission Electrical engineering
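The arithmetic of the example above can be checked mechanically. The following Python sketch is an illustrative simplification (a lossless single line, pure merit-order dispatch, no security constraints; the generator names and numbers are taken from the example), not a real market-clearing engine.

    def marginal_price(gens, demand):
        """Price set by the marginal generator when `demand` MW is served from
        `gens`, a list of (name, capacity_mw, marginal_cost) tuples."""
        served = 0.0
        for _name, cap, cost in sorted(gens, key=lambda g: g[2]):
            served += cap
            if served >= demand:
                return cost
        raise ValueError("demand exceeds available capacity")

    def clear(gens_a, gens_b, load_a, load_b, line_mw):
        """Return (price_a, price_b, flow_a_to_b) for a two-node, one-line system."""
        # Unconstrained merit-order dispatch over the combined system.
        combined = ([(cost, cap, "A") for _n, cap, cost in gens_a] +
                    [(cost, cap, "B") for _n, cap, cost in gens_b])
        need, gen_at_a, sys_price = load_a + load_b, 0.0, None
        for cost, cap, node in sorted(combined):
            take = min(cap, need)
            if take > 0:
                sys_price = cost
                if node == "A":
                    gen_at_a += take
                need -= take
        flow = gen_at_a - load_a               # implied A -> B flow
        if flow <= line_mw:                    # line not binding: uniform price
            return sys_price, sys_price, flow
        # Congested: pin the flow at the limit and price each node separately.
        return (marginal_price(gens_a, load_a + line_mw),
                marginal_price(gens_b, load_b - line_mw),
                line_mw)

    ga = [("GA1", 1000, 10), ("GA2", 1000, 15)]
    gb = [("GB", 2000, 30)]
    print(clear(ga, gb, 200, 2500, 1000))            # congested: $15 at A, $30 at B
    print(clear(ga, gb, 200, 2500, 2000))            # second line built: $30 at both
    ga.append(("GA3", 1000, 9))
    p_a, p_b, _ = clear(ga, gb, 200, 2500, 2000)     # GA3 online: $15 / $30 again
    print("hourly FTR payout:", (p_b - p_a) * 1000)  # 1000 MW FTR from A to B

The three calls reproduce the three stages of the example, and the final line shows the congestion rent that the 1000 MW FTR pays its holder.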
Transmission congestion
[ "Engineering" ]
1,679
[ "Electrical engineering" ]
72,007,163
https://en.wikipedia.org/wiki/1%2C1%27-Diaminoferrocene
1,1'-Diaminoferrocene is the organoiron compound with the formula Fe(C5H4NH2)2. It is the simplest diamine derivative of ferrocene. It is a yellow, air-sensitive solid that is soluble in aqueous acid. The 1,1' part of its name refers to the location of the amine groups on separate rings. Compared to the parent ferrocene, the diamine is about 600 mV more reducing. It can be prepared from the diisocyanate, which in turn is derived from 1,1'-ferrocenedicarboxylic acid. 1,1'-Diaminoferrocene was originally prepared by hydrogenation of 1,1'-diazidoferrocene. 1,1'-Diaminoferrocene has been incorporated into various diamide and diimine ligands, which form catalysts that exhibit redox switching. References Ferrocenes Cyclopentadienyl complexes Diamines
1,1'-Diaminoferrocene
[ "Chemistry" ]
207
[ "Organometallic chemistry", "Cyclopentadienyl complexes" ]
74,777,469
https://en.wikipedia.org/wiki/M.Phil%20Clinical%20Psychology
The M.Phil Clinical Psychology is a professional course and qualification in clinical psychology in some countries, including India. Elsewhere, other qualifications may be required or expected. By country India In India, the course is offered by more than 50 organizations and is one of the popular routes to obtaining a license as a clinical psychologist. M.Phil Clinical Psychology is a two-year hospital-based program. The old name of the M.Phil in Clinical Psychology was Diploma in Medical and Social Psychology. An MA/MSc in Psychology with 55% marks is the eligibility criterion for admission to this course. The New Education Policy discontinued M.Phil programs in India, but the M.Phil in Clinical Psychology is an exception; the course will continue to be offered in institutes and hospitals. References Academic degrees in healthcare Clinical psychology Psychology education
M.Phil Clinical Psychology
[ "Biology" ]
163
[ "Behavioural sciences", "Behavior", "Clinical psychology" ]
74,777,997
https://en.wikipedia.org/wiki/Difluorodioxirane
Difluorodioxirane (CF2O2) is a rare, stable member of the dioxirane family, whose three-membered ring contains a single oxygen-oxygen (O-O) bond. Unlike most dioxiranes, which decompose quickly, difluorodioxirane is surprisingly stable at room temperature, making it potentially useful for further research and applications. Synthesis Difluorodioxirane was first synthesised by Russo and DesMarteau in 1993 by treating fluorocarbonyl hypofluorite (FCOOF) with X2 (= F2, Cl2 or ClF) over pelletized CsF in a flow system. It also likely exists as a possible intermediate in reactions involving other fluorine-containing compounds. Properties Unlike most dioxiranes, which decompose quickly, difluorodioxirane is surprisingly stable at room temperature due to the stabilising interaction of the two fluorine atoms with the ring. This effect makes the O-O bond less reactive and more stable compared to other dioxiranes. The central F–C–F angle is 109°, approximately a tetrahedral angle. Difluorodioxirane is known for its ability to perform regiospecific and stereoselective oxidations. This makes it a valuable tool in organic synthesis for precise manipulation of molecules. Despite its increased stability, difluorodioxirane can still act as an oxidizing agent, transferring oxygen to other molecules. It often leads to cleaner and more predictable reaction outcomes due to its controlled reactivity. Uses Difluorodioxirane itself has not yet found widespread applications. However, its unique stability and reactivity, similar to other dioxiranes, suggest potential uses in several areas: Organic synthesis: Due to its oxidizing properties, difluorodioxirane could be a valuable reagent in organic reactions, particularly for controlled oxidation processes. Researchers are exploring its potential applications in epoxidation (adding oxygen atoms to create epoxide rings), hydroxylation (adding hydroxyl groups -OH), and other oxidation reactions. Development of new catalysts: The stability and reactivity profile of difluorodioxirane make it a promising candidate for the development of new and more efficient catalysts for various organic transformations. See also Dimesityldioxirane Dimethyldioxirane Shi epoxidation References Dioxiranes Organic peroxides Oxidizing agents Heterocyclic compounds with 1 ring Oxygen heterocycles
Difluorodioxirane
[ "Chemistry" ]
535
[ "Organic compounds", "Redox", "Oxidizing agents", "Organic peroxides" ]
74,780,868
https://en.wikipedia.org/wiki/Chlorosipentramine
Chlorosipentramine is an analogue of the anorectic drug sibutramine that has been sold as an ingredient in weight-loss products marketed as dietary supplements; it was first detected in South Korea in 2017. It is one of a number of sibutramine derivatives which have been sold in grey-market weight-loss products since sibutramine itself was taken off the market due to safety concerns. Others include desmethylsibutramine, didesmethylsibutramine, homosibutramine, chlorosibutramine, and benzylsibutramine. Chlorosipentramine is illegal in South Korea, along with other related compounds. See also 3F-PiHP 4-Cl-PHP O-2390 1-Methyl-3-propyl-4-(p-chlorophenyl)piperidine JZ-IV-10 References Anorectics Chlorobenzene derivatives Cyclobutanes Phenethylamines Serotonin–norepinephrine–dopamine reuptake inhibitors Stimulants Substituted amphetamines Dimethylamino compounds
Chlorosipentramine
[ "Chemistry" ]
239
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
74,782,157
https://en.wikipedia.org/wiki/Carbazole%20alkaloids
The carbazole alkaloids are natural products of the indole alkaloid type, derived from carbazole. Occurrence Carbazole alkaloids with unsubstituted benzene rings occur rarely. Olivacine has been found in the bark of Aspidosperma olivaceum and ellipticine in Ochrosia elliptica. Some carbazole alkaloids, especially glybomine B, have been isolated from Glycosmis pentaphylla. Representatives Representatives include glycozoline, olivacine, ellipticine, and glybomine B. Properties Carbazole alkaloids have cytotoxic and antifungal properties. Furthermore, they are active against HIV and tumor cells. Olivacine is fluorescent. References External links Alkaloids
Carbazole alkaloids
[ "Chemistry" ]
180
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
74,782,216
https://en.wikipedia.org/wiki/Conium%20alkaloids
Conium alkaloids are natural products of the piperidine alkaloid type. Occurrence Conium alkaloids are found in spotted hemlock. The mature fruits may contain up to 3.5% alkaloids. Representatives The main alkaloid is coniine. Other representatives are γ-coniceine, conhydrine, pseudoconhydrine, and N-methylconiine. Most Conium alkaloids are liquid at room temperature. Properties 500 mg of coniine is fatal to a human. Coniine is the poison of spotted hemlock. Poisoning results in nausea, vomiting, salivation, and diarrhea. Within half an hour to an hour, paralysis of the chest muscles occurs, which is fatal. History In ancient times, aqueous extracts of this plant (the hemlock cup) were administered as a means of execution. In 399 BC, Socrates was sentenced to death by the cup of hemlock as a "free thinker and seducer of youth." References Alkaloids
Conium alkaloids
[ "Chemistry" ]
211
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
74,782,302
https://en.wikipedia.org/wiki/Chlorosibutramine
Chlorosibutramine is an analogue of the anorectic drug sibutramine that has been sold as an ingredient in weight-loss products marketed as dietary supplements; it was first detected in South Korea in 2013. It is illegal in South Korea. See also Desmethylsibutramine Didesmethylsibutramine Chlorosipentramine O-2390 LR-5182 3,4-CTMP References Anorectics Chlorobenzene derivatives Cyclobutanes Phenethylamines Serotonin–norepinephrine–dopamine reuptake inhibitors Stimulants Substituted amphetamines Dimethylamino compounds
Chlorosibutramine
[ "Chemistry" ]
144
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
74,782,341
https://en.wikipedia.org/wiki/2020%20MK53
is a trans-Neptunian object in the scattered disc. It was discovered on 22 June 2020 by the New Horizons KBO Search-Subaru survey using the 8.2-meter Subaru Telescope of the Mauna Kea Observatories in Hawaii, and announced on 7 April 2023 (MPS 1836391, MPO 735634). It was 160 astronomical units from the Sun when it was discovered, making it the farthest known Solar System object from the Sun, well ahead of 2018 VG18 (124 AU) and 2018 AG37 (~132 AU). Given its very short data arc, the only reliable information is its range (about 160 AU) and range-rate (−0.04 km/s), which suggests that it is close to its aphelion. References External links Minor planet object articles (unnumbered) 20200622
2020 MK53
[ "Physics", "Astronomy" ]
180
[ "Concepts in astronomy", "Unsolved problems in astronomy", "Possible dwarf planets" ]
74,782,762
https://en.wikipedia.org/wiki/Stubenrauchbr%C3%BCcke%20%28Treptow-K%C3%B6penick%20district%29
The Stubenrauchbrücke (lit. English: Stubenrauch Bridge), built in 1908, is a three-arch iron truss bridge. In recent years, it has effectively become two separate bridges running parallel to each other, facilitating both vehicular and pedestrian traffic. This bridge links the Berlin neighborhoods of Oberschöneweide and Niederschöneweide, situated on either bank of the Spree River, within the Treptow-Köpenick district. History of the Spree Crossing During the late 19th century, the growth of Oberschöneweide and Niederschöneweide was closely linked to the rapid expansion of Berlin's large-scale industry. The presence of railways and waterways provided favorable conditions for the establishment of industrial settlements. At the same time, however, constructing paths and roads and connecting them to the road network (Chausseenetz) of the district of Teltow became necessary. To create the first permanent link between the two banks of the Spree River, a district-funded chain ferry was established in 1885. The ferry connection operated until it was replaced by a wooden bridge in 1890/1891. This bridge also accommodated the tracks of the Oberschöneweide industrial railroad (Bullenbahn), which connected seventeen newly established factories in Oberschöneweide to the Berlin-Görlitz railroad line. The 121-meter-long bridge over the Spree was a wooden truss structure with nine openings, with three in the middle serving as ship passages. In addition to this bridge, other Spree crossings were built around the same time, such as the Kaisersteg (1898, pedestrian bridge) and the Treskowbrücke (1904), which together helped to relieve traffic in Schöneweide. However, the wooden bridge quickly deteriorated and, after only ten years, a new fixed crossing was urgently needed. The district administration opted for a steel bridge constructed of three arches of unequal length, which also had to accommodate the tracks of the industrial railroad leading to the Oberschöneweide factories. The new construction led to streetcar lines operated from the Berlin Ostbahnhof being re-routed exclusively over the neighboring Treskow Bridge. The bridge plans were developed by Berlin civil engineer Karl Bernhard, who was also responsible for the superstructure. Preparatory work began in 1905, and the entire construction cost was covered by the municipality. Before the new Stubenrauch Bridge was built, there was only the Kaisersteg for pedestrians to cross the Spree and the later Treskow Bridge for vehicles, but the latter was a rather dilapidated wooden structure at the time. When the new bridge was inaugurated, it was named Stubenrauchbrücke in honor of the former Teltow district administrator, Ernst von Stubenrauch. Arched bridge as a steel structure The bridge as built, with span widths of 21.5 m, 60 m, and 21.5 m, met the Spree navigation requirement of a passage width of at least 50 m. The central large arch was built as an iron truss arch with a tension band in the central opening. The two arch sections were connected at their apex, which stood at a height of 7.90 meters. The side arches, also designed to accommodate navigation, were made of reinforced concrete to ensure stability while maintaining a filigree appearance that matched the main arch. All bridge abutments and piers were faced with granite.
During the 1920s, the Stubenrauchbrücke faced significant structural demands, with up to 14 trains a day from the private connecting railroad operated by the Berlin tramway, sometimes with up to 130 axles, crossing the bridge. Stretching of the steel tension band in the central span led to the closure of traffic on 6 January 1925. After the demolition of the deck and exposure of the truss arches, new tension bands and a new deck were installed from a temporary substructure. Destruction and reconstruction after World War II Allied bombing at the end of World War II severely damaged the bridge by hitting the central and northern openings. Despite the damage, Soviet military engineers built a makeshift bridge over the northern span in 1945, and straightening work was carried out on the steel structure, allowing the bridge to remain in use during this period. After the war, the destruction of the Treskow Bridge resulted in the absence of a streetcar connection between Ober- and Niederschöneweide. To address this, streetcar tracks were laid across the Stubenrauchbrücke in 1947, although they were later removed in 1951 following the reconstruction of the Treskow Bridge. With the Treskow Bridge repaired, comprehensive restoration work was undertaken on the Stubenrauchbrücke, largely restoring its original appearance by 1959. Additionally, the operation of the industrial railroad across the bridge was discontinued at this time, leaving only car traffic to use the bridge. In 1969, the operating track was abandoned, and since 1971, north-south car traffic has been permanently diverted over the Stubenrauchbrücke. However, over the subsequent decades, traffic congestion in this area continued to increase. In addition to the historic Stubenrauchbrücke, which has been a listed building since the 1980s, the Berlin Senate, which has been responsible for all of Berlin since the fall of the Wall, had a parallel reinforced concrete beam bridge built on the Spree, 20 meters downstream, in the early 1990s, which served as a makeshift structure for traffic between Ober- and Niederschöneweide. However, in 1994, both bridges had to be closed to truck traffic. Between 1998 and 1999, extensive repairs were carried out on the Stubenrauch Bridge. These renovations were funded by the Joint Task for Improvement of the Regional Economic Structure and the European Regional Development Fund, with a total cost of 12.5 million marks (equivalent to about 9.5 million euros in today's currency). Engineers from the firm Gregull + Spang were responsible for the project, which included reinforcing the metal structure by installing an orthotropic deck and replacing corroded metal components of the main opening. On the south side of the bridge, a new concrete arch was added to match the design of the northern bridge arch, based on historical plans. Additionally, several historic-style streetlights were installed in the middle section of the bridge. As of 2022, vehicular traffic on the Stubenrauchbrücke flows in one lane in each direction, and the plan calls for the temporary bridge to be deconstructed. References Further reading Heinze, Thiemann und Demps: Berlin und seine Brücken. VEB Verlag für Verkehrswesen, Berlin 1987, p. 212. External links Commons: Stubenrauchbrücke (Treptow-Köpenick district) - album with pictures.
Longitudinal section of the Stubenrauch Bridge on the homepage of Lucke-Umpfenbach, a firm that was also involved in the renovation of the bridge in 1999. Stubenrauchbrücke on www.worldpress.com Road bridges in Germany Spree basin Architectural design
Stubenrauchbrücke (Treptow-Köpenick district)
[ "Engineering" ]
1,466
[ "Design", "Architectural design", "Architecture" ]
74,784,001
https://en.wikipedia.org/wiki/Bisbenzylisoquinoline%20alkaloids
Bisbenzylisoquinoline alkaloids are natural products found primarily in the barberry family, the Menispermaceae, the Monimiaceae, and the buttercup family. Occurrence More than 225 different bisbenzylisoquinoline alkaloids are known and have been isolated. Representatives The bisbenzylisoquinoline alkaloids are considered the largest group within the isoquinoline alkaloids. The best-known representative of this group is tubocurarine chloride. Other representatives include dauricine, oxyacanthine, tetrandrine, and tiliacorine. Structure Bisbenzylisoquinoline alkaloids are characterized by their structure. Typically, they consist of two benzyltetrahydroisoquinoline units linked by ether groups, and occasionally by C-C bonds. Multiple ether bridges are often present. These alkaloids can be categorized into three groups, using the nomenclature head for the 1,2,3,4-tetrahydroisoquinoline unit and tail for the 1-benzyl residue: head-head-, tail-tail-, and head-tail-linked bisbenzylisoquinoline alkaloids. Dauricine is the simplest representative, with a tail-tail linkage. Oxyacanthine and tetrandrine contain both head-head and tail-tail linkages, while tiliacorine features one tail-tail linkage and two head-head linkages linking to a dibenzodioxin moiety. Tubocurarine chloride is characterized by two head-tail linkages of the tetrahydroisoquinoline units. Uses The alkaloids belonging to the bisbenzylisoquinoline group are generally toxic and exhibit curarizing effects. Oxyacanthine serves as a sympatholytic agent, an antagonist to epinephrine, and a vasodilator. Tubocurarine, a potent curarizing poison, stands as the oldest known muscle relaxant. South American indigenous populations have traditionally employed it as an arrow poison (see curare). Tetrandrine, found as an ingredient in the Chinese medicine "han-fang-shi," possesses analgesic and antipyretic properties. References Alkaloids
Bisbenzylisoquinoline alkaloids
[ "Chemistry" ]
511
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
74,785,573
https://en.wikipedia.org/wiki/Neutristor
A neutristor is a compact neutron generator made using solid-state electronics, invented at Sandia National Laboratories. Its primary purpose is to act as a lightweight, cheaper, and safer alternative to standard neutron generation devices, with the reduced costs benefiting industries and processes such as oilfield operations, heavy mechanical production, neutron activation analysis, and medicine. It operates on the same principles as standard neutron generators. Additionally, Sandia National Laboratories is creating a new generation of neutristors that do not require a vacuum environment to operate. Advantages A neutristor is cheaper and smaller than standard accelerator-based neutron generators. Normal neutron generators use a three-inch (7.5 cm) cylinder, which is too large for implanted neutron capture therapy and for neutron inspection of weld flaws. References Electrical components Neutron sources
Neutristor
[ "Technology", "Engineering" ]
169
[ "Electrical engineering", "Electrical components", "Components" ]
74,786,662
https://en.wikipedia.org/wiki/Liquidus%20and%20solidus
While chemically pure materials have a single melting point, chemical mixtures often partially melt at the temperature known as the solidus (TS or Tsol), and fully melt at the higher liquidus temperature (TL or Tliq). The solidus is always less than or equal to the liquidus, but they need not coincide. If a gap exists between the solidus and liquidus, it is called the freezing range, and within that gap the substance consists of a mixture of solid and liquid phases (like a slurry). Such is the case, for example, with the olivine (forsterite-fayalite) system, which is common in Earth's mantle. Definitions In chemistry, materials science, and physics, the liquidus temperature specifies the temperature above which a material is completely liquid, and the maximum temperature at which crystals can co-exist with the melt in thermodynamic equilibrium. The solidus is the locus of temperatures (a curve on a phase diagram) below which a given substance is completely solid (crystallized). The solidus temperature specifies the temperature below which a material is completely solid, and the minimum temperature at which a melt can co-exist with crystals in thermodynamic equilibrium. Liquidus and solidus are mostly used for impure substances (mixtures) such as glasses, metal alloys, ceramics, rocks, and minerals. Lines of liquidus and solidus appear in the phase diagrams of binary solid solutions, as well as in eutectic systems away from the invariant point. When distinction is irrelevant For pure elements or compounds, e.g. pure copper, pure water, etc., the liquidus and solidus are at the same temperature, and the term melting point may be used. There are also some mixtures which melt at a single temperature, a behavior known as congruent melting. One example is a eutectic mixture. In a eutectic system, there is a particular mixing ratio at which the solidus and liquidus temperatures coincide at a point known as the invariant point. At the invariant point, the mixture undergoes a eutectic reaction where both solids melt at the same temperature. Modeling and measurement There are several models used to predict liquidus and solidus curves for various systems. Detailed measurements of solidus and liquidus can be made using techniques such as differential scanning calorimetry and differential thermal analysis. Effects For impure substances, e.g. alloys, honey, soft drinks, ice cream, etc., the melting point broadens into a melting interval. If the temperature is within the melting interval, one may see "slurries" at equilibrium, i.e. the slurry will neither fully solidify nor melt. This is why new snow of high purity on mountain peaks either melts or stays solid, while dirty snow on the ground in cities tends to become slushy at certain temperatures. Weld melt pools containing high levels of sulfur, either from melted impurities of the base metal or from the welding electrode, typically have very broad melting intervals, which leads to an increased risk of hot cracking. Behavior when cooling Above the liquidus temperature, the material is homogeneous and liquid at equilibrium. As the system is cooled below the liquidus temperature, more and more crystals will form in the melt if one waits a sufficiently long time, depending on the material. Alternately, homogeneous glasses can be obtained through sufficiently fast cooling, i.e., through kinetic inhibition of the crystallization process.
The crystal phase that crystallizes first on cooling a substance to its liquidus temperature is termed the primary crystalline phase or primary phase. The composition range within which the primary phase remains constant is known as the primary crystalline phase field. The liquidus temperature is important in the glass industry because crystallization can cause severe problems during the glass melting and forming processes, and it may also lead to product failure. See also Melting/freezing point Phase diagram Solvus References Glass chemistry Glass engineering and science Glass physics Materials science Metallurgy Phase transitions Threshold temperatures Physical chemistry
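Within the freezing range, the relative amounts of solid and liquid at a given temperature follow from the solidus and liquidus compositions via the lever rule. The Python sketch below is illustrative only; the compositions used are made-up round numbers, not data for any real system.

    def lever_rule(c_overall, c_solidus, c_liquidus):
        """Return (solid_fraction, liquid_fraction) for an overall composition
        lying between the solidus and liquidus compositions at one temperature."""
        lo, hi = sorted((c_solidus, c_liquidus))
        if not lo <= c_overall <= hi:
            raise ValueError("overall composition is outside the two-phase region")
        liquid = (c_overall - c_solidus) / (c_liquidus - c_solidus)
        return 1.0 - liquid, liquid

    # Example: overall composition 30 wt% B, with the solidus at 20 wt% B and
    # the liquidus at 45 wt% B at the temperature of interest.
    solid, liquid = lever_rule(0.30, 0.20, 0.45)
    print(f"solid fraction = {solid:.2f}, liquid fraction = {liquid:.2f}")  # 0.60 / 0.40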
Liquidus and solidus
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
818
[ "Glass engineering and science", "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Glass chemistry", "Metallurgy", "Phases of matter", "Threshold temperatures", "Critical phenomena", "Materials science", "Glass physics", "Condensed matter physics", "nan", ...
74,787,278
https://en.wikipedia.org/wiki/Wilfley%20table
The Wilfley Table is commonly used for the concentration of heavy minerals from the laboratory up to the industrial scale. It has a traditional shaking (oscillating) table design with a riffled deck. It is one of several brands of wet tables used for the separation and concentration of heavy ore minerals, which include the Deister Table and Holman Table, all built to handle either coarse or fine feeds for mineral processing. The Wilfley Table became a design used world-wide because it significantly increased the recovery of silver, gold and other precious metals. Such was the table's widespread use that it was included in Webster's Dictionary, and it has been in constant use by miners and metallurgists since its invention. Origin The Wilfley Table was conceived by Arthur Wilfley, a mining engineer based in Kokomo, Colorado in the United States. As a silver mine operator, Wilfley spent many years refining his separation table design in order to make the extraction of silver more economic. Rather than using heating processes (smelting) to concentrate the ore, Wilfley had been experimenting on mineral separation by use of mineral density contrasts. Wilfley was able to perfect a mechanical solution for the recovery of gold and silver from low-grade ores by means of the Wilfley table. The first Wilfley table was built on a preliminary scale in May 1895. The first full-sized table was used in Wilfley's own mill in Kokomo, in May 1896, while the first table sold for installation was placed in the Puzzle Mill, Breckinridge, Colorado, in August 1896. Patented in 1897, the Wilfley table made mining lower-grade ores profitable. Pulverised ore, suspended in a water solution, was washed across a sloping riffled vibrating table so that metals separated as they drained off. The Wilfley Table was said to have revolutionised ore dressing worldwide and more than 25,000 were in service by the 1930s. Mineral separation The Wilfley Table was built to solve a problem common in the recovery of heavy ore minerals: approximately 90% of the gold grains, platinum-group minerals, sulphides, arsenides/antimonides and tellurides in source rocks are silt-sized. Concentration of these minerals requires preconcentration techniques that include recovery of this fraction. Preconcentration may involve any number of methods including jigs, spirals, shaking tables, Knelson concentration, dense media separation, panning and hydroseparation. The Wilfley Table exploits preconcentration on the basis of density to separate minerals. It can recover silt- to coarse sand-sized heavy minerals for a broad spectrum of commodities including diamonds, precious and base metals, and uranium. Design and operation The table, like most shaking tables, consists of a riffled deck with a gentle tilt on a stable support to counteract the table's oscillation. A motor, usually mounted to the side, drives a small arm that shakes the table along its length. The riffles are typically low in profile and cover more than half the table's surface. Varied riffle designs are available for specific applications. The riffles run longitudinally, parallel to the long dimension of the table. The table's shaking motion is parallel to the riffle pattern. Deck construction varies from wood to hard-wearing fiberglass, where the riffles are formed as part of the mold. The decks are lined with high coefficient-of-friction materials (linoleum, rubber or plastic), which assists in the mineral recovery process.
During operation, a slurry of sample material consisting of about 25% solids by weight is fed with wash water along the top of the table, perpendicular to the direction of table motion. The table is shaken longitudinally, using a slow forward stroke and a rapid return stroke that causes particles to migrate or crawl along the deck parallel to the direction of motion. Particles move diagonally across the deck from the feed end and separate on the table according to size and density. Water flow rate, table tilt angle and intensity of the shaking motion must be properly adjusted for effective mineral recovery. The riffles cause mineral particles to stratify in the protected inter-riffle regions. The finest and heaviest particles are forced to the bottom and the coarsest and lightest particles remain at the top. Particle layers migrate across the riffles with addition of new slurry feed and continued water wash. The riffles are tapered and flatten (disappear) towards the concentrate end of the table. The taper of the riffles causes migrating particles of progressively finer size and higher density to be brought into contact with the flowing film of water that tops the riffles; lighter material is washed away as tailings and middlings. Final concentration takes place in the unriffled region at the end of the deck, where the layer of material at this stage is usually only a few particles deep. Efficiency Mineral separation is hampered by several factors, with particle size being particularly important. As the slurry feed grainsize increases, the efficiency of separation tends to decrease. Separation efficiency is also affected by the stroke of the table (frequency and length); fine feed requires a higher speed and shorter stroke than a coarse feed. A frequency of 200 to 325 strokes per minute is typical. When Wilfley tables were originally employed to rework tailing dumps, the tables were found to enhance mineral recovery by some 35–40% compared to existing processes, though this is not always the case. Optimisation of table setup can have a significant impact on the recovery of ore. Using magnetite as a synthetic ore to test recovery on a Wilfley Table, Mackay et al. (2015) found that an optimised table setup (i.e. table inclination, wash-water flow rate, material feed rate, table speed, stroke amplitude, feed grade and feed density) increased magnetite recovery by a factor of 3.7. The Wilfley table, like any wet table, is one of the most metallurgically efficient forms of gravity concentration, being used to treat the smaller, more difficult flow-streams, and to produce finished concentrates from the products of other forms of gravity system. Additional efficiencies are gained in the treatment of low grade feeds where two or even three decks are stacked one above the other allowing for continuous feeding. Modern usage Modern applications of the Wilfley table (and other wet shaking tables) are predominantly observed in the following roles: laboratories, where small shaking tables are an excellent tool for assessing whether a material will be responsive to gravity separation techniques; gold rooms, where Wilfley tables typically act as rougher tables on gravity gold concentrate ahead of final concentration methods (e.g. panning); and high-value heavy mineral concentrate cleaners. In mining, the vast majority of Wilfley-type tables are installed globally to concentrate, for example, tin, tungsten, tantalum, niobium, zircon, rutile, leucoxene, xenotime and monazite; the zircon finishing table is a specialist application. Tables are now also being used in the recycling of electronic scrap to recover precious metals. External links Wilfley Tables - 911 Metallurgist References Mining equipment Metallurgical processes Mineral processing
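The density-contrast principle the table exploits can be illustrated with the standard gravity-separation rule of thumb known as the concentration criterion. The sketch below is not from the article and uses illustrative mineral densities; values above roughly 2.5 conventionally suggest separation is feasible even at fine particle sizes.

```python
# Concentration criterion for gravity separation:
#   CC = (rho_heavy - rho_fluid) / (rho_light - rho_fluid)
# A common rule of thumb: CC > ~2.5 means separation is easy even at
# fine sizes; lower values need coarser feed or other methods.

def concentration_criterion(rho_heavy, rho_light, rho_fluid=1.0):
    """Return the gravity-separation concentration criterion."""
    return (rho_heavy - rho_fluid) / (rho_light - rho_fluid)

# Illustrative densities in g/cm^3, with quartz as the light gangue
# mineral and water as the fluid.
pairs = {
    "gold vs quartz": (19.3, 2.65),
    "cassiterite (tin) vs quartz": (7.0, 2.65),
    "magnetite vs quartz": (5.2, 2.65),
}
for name, (heavy, light) in pairs.items():
    print(f"{name}: CC = {concentration_criterion(heavy, light):.1f}")
```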
Wilfley table
[ "Chemistry", "Materials_science", "Engineering" ]
1,505
[ "Metallurgical processes", "Mining equipment", "Metallurgy" ]
74,787,345
https://en.wikipedia.org/wiki/Cannabidiol%20diacetate
Cannabidiol diacetate (CBD-di-O-Acetate, CBD-DO) is a semi-synthetic derivative of cannabidiol derived by acetylation of the OH groups, which presumably acts as a prodrug for CBD. It has been found as a component of grey-market cannabis products such as e-cigarette liquids and edible gummy lollies. See also 4'-Fluorocannabidiol 7-Hydroxycannabidiol 8,9-Dihydrocannabidiol Cannabidiol dimethyl ether Delta-6-Cannabidiol H4-CBD KLS-13019 THC-O-acetate References Cannabinoids Acetate esters Isopropenyl compounds
Cannabidiol diacetate
[ "Chemistry" ]
162
[ "Isopropenyl compounds", "Functional groups" ]
74,787,988
https://en.wikipedia.org/wiki/KELT-24
KELT-24 (HD 93148, MASCARA-3) is a single star in the constellation Ursa Major at a distance of approximately 316 light-years (about 96.8 parsecs) from the Sun. The apparent magnitude of the star is +8.33. The star's age is estimated to be about 1.6 billion years. As an F-type main-sequence star, it is similar to the Sun, but slightly hotter and more luminous. Nomenclature This star was first catalogued in the Henry Draper Catalogue as HD 93148, a catalogue that assigned designations to relatively bright stars. In 2019, the Multi-site All-Sky CAmeRA (MASCARA) and the Kilodegree Extremely Little Telescope (KELT) announced the discovery of the exoplanet KELT-24b/MASCARA-3b around this star. Thus, it is most commonly known as KELT-24, although the star is sometimes catalogued as MASCARA-3. Characteristics KELT-24 is a yellow-white star with a spectral class of F5 or F7. Its mass is about 1.4 solar masses, its radius is about 1.555 solar radii, and its luminosity is about 3.466 solar luminosities. Its effective temperature is about 6437 K. Planetary system In 2019, the discovery of the Hot Jupiter type planet KELT-24b/MASCARA-3b was announced by the Multi-site All-Sky CAmeRA and the Kilodegree Extremely Little Telescope. TESS data confirmed that no additional companions are orbiting this star. Since this discovery, the system is now called KELT-24 or MASCARA-3. References F-type main-sequence stars Ursa Major Planetary systems with one confirmed planet
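As a quick sanity check (not part of the article), the quoted radius and temperature imply a luminosity via the Stefan–Boltzmann relation. Assuming the IAU nominal solar effective temperature of 5772 K, the implied value lands within about 8% of the quoted 3.466 solar luminosities, a gap plausibly due to rounding and to how the published value was fitted.

```python
# Consistency check of KELT-24's quoted parameters using the
# Stefan-Boltzmann relation: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.

T_SUN = 5772.0   # IAU nominal solar effective temperature (K), an assumption

radius = 1.555   # stellar radius in solar radii (quoted above)
t_eff = 6437.0   # effective temperature in kelvin (quoted above)

luminosity = radius**2 * (t_eff / T_SUN)**4
print(f"Implied luminosity: {luminosity:.2f} Lsun")  # ~3.74 Lsun
print("Quoted luminosity:   3.466 Lsun")
```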
KELT-24
[ "Astronomy" ]
366
[ "Ursa Major", "Constellations" ]
74,788,131
https://en.wikipedia.org/wiki/Sylvicola%20dubius
Sylvicola dubius is a species of wood gnat in the genus Sylvicola. The species is predominantly found in southeastern Australia, but can also be found in New Zealand, southwestern Australia and East Timor. Taxonomy The species was first described by French entomologist Pierre-Justin-Marie Macquart in 1850, who named the species Chrysopyla dubius. Behaviour The species is known to thrive on fallen apples. Distribution The species is found in south-eastern Australia, south-western Australia, Tasmania, Lord Howe Island, New Zealand and in East Timor. Gallery References Anisopodidae Biota of Timor-Leste Diptera of Australasia Diptera of New Zealand Insects described in 1850 Insects of Australia Taxa named by Pierre-Justin-Marie Macquart
Sylvicola dubius
[ "Biology" ]
162
[ "Biota by country", "Biota of Timor-Leste" ]
74,788,333
https://en.wikipedia.org/wiki/Test%20driver%20%28software%29
In software testing a test driver is a software component or application that initiates and controls the execution of a program under test, especially when such components are part of a larger system and cannot run in isolation. Drivers control applications across various stages of software testing, from unit and integration testing right through to system integration testing and acceptance testing, especially when the target module is a component of a larger system that is not yet fully implemented or otherwise unavailable. Definition A test driver is a software component or tool developed to initiate and oversee the execution of a component under test, particularly when the component is part of a larger system and the system is yet to be fully implemented. Essentially, the test driver mimics the components of a system that interact with the component under test, feeding it the necessary input and controlling its execution. The primary goal of using a test driver is to verify the functionality of the isolated component in the absence of its intended complete environment. Purpose Test drivers are tailored to meet the unique requirements of different testing environments. With manual test drivers, testers can directly initiate actions, offering them direct control throughout the testing phase. In comparison, automated test drivers—typically in the form of tools or scripts—can carry out tests on their own. These are especially useful in situations that demand repetitive or extensive testing. Comparison with test stubs Test drivers and test stubs are both instrumental in software testing, but they serve distinct roles within a test harness. Test drivers are typically active components that control or call the system under test without further inputs after they are initialised; stubs, on the other hand, are usually passive components that only receive data and respond to calls from the tested system when needed. References Software testing
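A minimal sketch may make the driver's role concrete. Everything in it (the function under test, its expected values) is a hypothetical stand-in, since no particular interface is prescribed above: the driver initialises the component, feeds it inputs the surrounding system would normally provide, and checks the outputs.

```python
# A minimal test driver: it initiates the component under test, supplies
# the inputs the (unavailable) larger system would provide, and checks
# the outputs. The function `apply_discount` is a hypothetical stand-in
# for a component that would normally be imported from the real system.

def apply_discount(total, percent):
    """Component under test."""
    return round(total * (1 - percent / 100.0), 2)

def run_driver():
    # Input/expected-output pairs the driver feeds to the component.
    cases = [
        ((100.0, 10), 90.0),
        ((59.99, 0), 59.99),
        ((20.0, 50), 10.0),
    ]
    for args, expected in cases:
        result = apply_discount(*args)
        status = "PASS" if result == expected else "FAIL"
        print(f"{status}: apply_discount{args} -> {result} (expected {expected})")

if __name__ == "__main__":
    run_driver()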
Test driver (software)
[ "Engineering" ]
342
[ "Software engineering", "Software testing" ]
74,788,788
https://en.wikipedia.org/wiki/Ghost%20characters
Ghost characters are erroneous kanji included in the Japanese Industrial Standard JIS X 0208. 12 of the 6,355 kanji characters are ghost characters. Overview In 1978, the Ministry of Trade and Industry established the standard JIS C 6226 (later JIS X 0208). This standard defined 6349 characters as JIS Level 1 and 2 Kanji characters. This set of Kanji characters is called "JIS Basic Kanji". At this time, the following four lists of Kanji characters were used as sources. Kanji Table for Standard Codes (Draft): IPSJ Kanji Code Committee (1971) National Land Administrative Districts Directory: Geographical Society of Japan (1972) Nippon Seimei's family name table: Nippon Life (1973, no longer extant) Basic Kanji for Administrative Information Processing: Administrative Management Agency (1975) At the time of the establishment of the standard, the authority for each character was not clearly stated, and it was pointed out that some characters had unknown meanings and usage examples. The term "ghost character" was coined from "ghost word", meaning a word that is included in dictionaries but has no practical use. The most common examples are "妛" and "彁". These characters were never mentioned in the Kangxi Dictionary or the Dai Kan-Wa Jiten, a comprehensive collection of ancient Chinese character books. In 1997, the drafting committee for the revised standard, led by its chairman, Koji Shibano, and Hiroyuki Sasahara of the National Institute for Japanese Language and Linguistics, investigated the literature referred to in the drafting of the 1978 standard. It was revealed that many of the characters that had been considered ghost characters were actually kanji used in place names. According to the survey, prior to the drafting of the 1978 standard, the Administrative Management Agency had compiled eight lists of Kanji characters, including the above 1–3, in 1974, entitled "Frequency of Use and Correspondence Analysis Results of Kanji Characters for Selection of Standard Kanji Characters for Administrative Information Processing." This is accompanied by a list of kanji characters and their original sources. The results of this correspondence analysis, rather than the original sources, were referred to when selecting the JIS basic kanji at that time. Of these, many ghost characters were found to be included in those based on the Comprehensive list of administrative divisions of national land and the List of Kanji characters for personal names by Nippon Life Insurance Company. In particular, the List of Kanji characters for personal names had no original source at the time of drafting the first standard, and its contents have been pointed out to be inadequate. In response to these results, the Standard Revision Committee restored the 1972 edition of the Comprehensive list of administrative divisions of national land from its proofreading history, and checked all the kanji appearing in the book against all the pages to confirm the examples. In addition, as a replacement for the List of Kanji characters for personal names, which no longer exists, they conducted an exhaustive literature search, including a comparative study of the NTT and Nippon Telegraph and Telephone Public Corporation telephone directory databases and a survey of more than 30 ancient and modern character books. Twelve kanji characters remain unidentified; three appear to be typos, and, perhaps by coincidence, eight characters turn out to be listed in Japanese or Chinese ancient dictionaries.
As for "彁" is Ka, no concrete source has been found. Ghost characters have already been adopted into international standards such as Unicode, and changes to these standards are likely to cause compatibility problems, making it virtually impossible to modify or remove ghost characters. List of ghost characters The results of the aforementioned survey by Hiroyuki Sasahara et al. are summarized in Annex 7, "Detailed Description of Ward Locations", of JIS X 0208:1997. This section excerpts some of them. JIS X 0208:1997 compiles the details of the sources of 72 characters whose sources have been identified, mainly those not listed in both Morohashi's Dai Kan-Wa Jiten and Kadokawa's Shin Jigen. However, this also includes characters that have been found to be misspelled by the original sources. The list of delimiters appended as "source authority" in Annex 7 of JIS X 0208:1997 lists 72 characters, but the detailed text does not list "鰛(82-60)", which is only 71 characters. Unknown sources JIS X 0208:1997 treats the 12 characters in the table below as "Authority unknown", "Unknown", or "Unidentifiable" because it is not certain which of the four aforementioned lists of kanji is the source of the characters. Since ghost characters are "kanji that do not exist", the readings are given "for convenience". Possible typos Some of the characters of unknown authority are believed to have been miswritten by the standard's creator. It is possible that "壥" was miswritten because "㕓", which is similar to "壥", is not included in the JIS Basic Kanji. "㕓" is also not included in JIS X 0213. It is possible that "妛" was miswritten because "𡚴", which is similar to "妛", is not included in the JIS Basic Kanji. In the National Land Administrative Districts Index, the source for this document, there is a shadow-like print mark on the overlay that appears to have been created by cutting and pasting together parts of different characters when creating the block, and it is assumed that this was mistakenly transcribed as a horizontal stroke. It is possible that "椦" was miswritten because "橳", which is similar to "椦", is not included in the JIS Basic Kanji. Treatment in dictionaries Since the establishment of the standard, the policy for compiling dictionaries has been to publish character books that are based on the assumption that all JIS basic Kanji characters are listed. For ghost characters, it is not possible to refer to past sources. Therefore, their treatment differed depending on the dictionaries and individual characters as follows. Makeshift readings assigned In equipment that implements JIS basic Kanji characters, they are often assigned a phonetic reading. Some dictionaries also list these makeshift readings. Hiroyuki Sasahara points out that these readings may have been given based on a research report by the Japan Electronics and Information Technology Industries Association (1982) and published materials by NEC (1982) and IBM Japan (1983). Regarded as variations of similar characters Some have assigned "駲" as a variant of "馴" and "軅" as a variant of "軈". None of these sources provide a source. The character "妛" may be a typo of the very similar character "" (the upper "山" becomes "屮"), and it is found in the Dai Kan-Wa Jiten and Kangxi Dictionary. This is also introduced in the JIS X 0208:1997 survey with an example of implicit merging with an authoritative source. These two characters are also merged into the same code point in Unicode. 
Individual interpretations Since "竜" is a variant of "龍", there is an interpretation that "槞" is a variant of "櫳". Some dictionaries consider "鵈" = "耳" (ear) + "鳥" (bird) to be the character for black kites. Explained as unknown After the results of the research above were published, these contents were generally adopted in dictionaries. The Dai Kan-Wa Jiten published a supplemental volume in 2000; the characters 垈, 垉, 岾, 橸, 汢, 粭, 糘, 膤, 軅 and 鵈 were recorded there. Kadokawa's Shin Jigen (New Character Source) was revised in 2017 to include all JIS standards first through fourth, including ghost characters. Character's remains Since Chinese characters (including Japanese kanji) have been used in East Asian countries since ancient times and have been handed down mainly by handwriting, characters with slightly different writing styles have arisen from country to country or within a single country, the so-called variant Chinese characters. Unicode did not adopt all variations, and characters with only slight differences were registered inclusively. On the other hand, combining simple parts of Chinese characters to create another character has also been done in different countries and regions. As a result, the same Chinese characters may be invented in different countries by coincidence, with different (sometimes identical) meanings. As mentioned above, it is presumed that the Japanese ghost character "妛" was originally just "𡚴", which is a combination of "山" and "女", but with an accidental "一" in between. On the other hand, there is a Chinese character "" which is a combination of "屮", "一", and "女", and which is also a variant of "媸". However, in Unicode, "妛", which did not originally exist in Japan, was encompassed because it happened to be similar to "". Moreover, the Japanese character "妛", which is a mistake, was registered as a Unicode character. Also, the Japanese ghost character "閠" (lower part is "玉") is thought to be a misspelling of "閏" (lower part is "王"). (A 16th-century manuscript of the Japanese 15th-century Wagokuhen also has the character "閠", but it is a solitary example.) On the other hand, the Chinese character "閠" is a kind of variant of "閏", which is not a misspelling. This was also unified in Unicode. Some believe that the Japanese ghost character "岾" is a kokuji (a uniquely Japanese kanji) meaning bald mountains, and was originally a misspelling of "岵". In Korea, however, this character was created as a Chinese character meaning mountain pass. This was also unified in Unicode. Contemporary use Since the publishing of the standard, examples of ghost characters have appeared along with their widespread use. The "祢宜", the title of the deputy manager of a Japanese shrine, is sometimes misused as "袮宜" (using the 衣 radical instead of the correct 示). In some cases, the Japanese surname "栩谷" is mistaken for "挧谷" (using the 手 radical instead of the correct 木). Japanese folklorist Motoji Niwa introduced the surname "妛芸凡" (Akiōshi) in his book. The Asahi Shimbun database contained the name "埼玉自彊会" printed in the February 23, 1923 edition of The Asahi Shimbun, but when it was digitized, it was incorrectly labeled "埼玉自彁会." It has now been corrected. Examples of use in fiction Japanese tokusatsu television series Gosei Sentai Dairanger features a character named "嘉挧" (Kaku). The name is taken from the ancient Chinese statesman Jia Xu (賈詡), but the characters have been replaced by ghost characters because the character "詡" is not registered in JIS X 0208.
The book 5A73, by Japanese mystery writer Yuji Yomisaka, begins with a series of murders in which the ghost character "暃" is written on the bodies of the victims. The music game Beatmania IIDX includes a song titled "閠槞彁の願い" that uses ghost characters. According to the comments on the song, the pronunciation is "unpronounceable to humans" and is tentatively called "Gyokurōka no Negai" (ぎょくろうかのねがい), which is the ateji reading of the ghost characters. Similar cases in Unicode Unicode's CJK Unified Ideographs also have characters whose inclusion history is unknown and are sometimes called "ghost characters" as well. For example, it has been pointed out that the character "螀" (U+8780), which was also registered in Unicode because it was included in the CCITT Chinese Primary Set, may be a typographical error that was adopted without sufficiently checking the source. The character "⍼" (U+237C, right angle with downwards zigzag arrow) was added by Monotype Imaging to its mathematical sets in 1972 for unknown reasons. It has since been included in Unicode. In the CJK Compatibility block of Unicode 1.0, there is a square version of the Japanese word for "baht", written in katakana script. The Japanese for "baht" is "バーツ" (bātsu). However, the reference glyph and the character name correspond to "パーツ" (pātsu, from English "parts"). The CJK codepoint is documented in subsequent versions of the standard as "a mistaken, unused representation", and users are directed to the ordinary Thai baht sign instead. Consequently, only a few computer fonts have any content for this codepoint and its use is deprecated. References Character encoding Chinese character lists Error Unicode
Ghost characters
[ "Technology" ]
2,656
[ "Natural language and computing", "Character encoding" ]
74,789,709
https://en.wikipedia.org/wiki/Iodate%20sulfate
Iodate sulfates are mixed anion compounds that contain both iodate and sulfate anions. Iodate sulfates have been investigated as optical second harmonic generators, and for separation of rare earth elements. Related compounds include the iodate selenates and chromate iodates. Iodate sulfates can be produced from water solutions of iodic acid and sulfate salts. References Iodates Sulfates Mixed anion compounds
Iodate sulfate
[ "Physics", "Chemistry" ]
85
[ "Matter", "Mixed anion compounds", "Sulfates", "Oxidizing agents", "Salts", "Iodates", "Ions" ]
76,357,093
https://en.wikipedia.org/wiki/Inert%20salt
In chemistry, an inert salt is a salt used to adjust the ionic strength of a solution. This is usually done in equilibrium or kinetic studies in order to reduce relative changes in the ionic strength of a solution. The real goal is to reduce changes in the activity coefficients of ionic species, which allows the definition of conditional equilibrium or rate constants. Any salt will affect the ionic strength; inert salts have the additional property that neither their cations nor their anions do, or should, interfere in any way with the molecules that are investigated. They are supposed to only influence the ionic strength. Typical inert salts that are used include: NaClO4, NaCl, KNO3, NaNO3, triflates (e.g. NaOSO2CF3). Inert salts are never perfectly inert and their use will always interfere with the process under investigation, although the influence may be negligible. References Salts Equilibrium chemistry Chemical compounds Laboratory techniques
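The quantity being held constant is the ionic strength, conventionally defined as I = ½ Σ cᵢzᵢ² over all ionic species (a standard definition, not spelled out above). A minimal sketch:

```python
# Ionic strength I = 1/2 * sum(c_i * z_i^2) over all ions in solution,
# the quantity an inert salt is used to fix.

def ionic_strength(ions):
    """ions: iterable of (concentration in mol/L, charge) tuples."""
    return 0.5 * sum(c * z**2 for c, z in ions)

# 0.1 M NaCl fully dissociates into 0.1 M Na+ and 0.1 M Cl-.
print(ionic_strength([(0.1, +1), (0.1, -1)]))   # 0.1

# A 2:1 salt contributes more per mole:
# 0.1 M Na2SO4 gives 0.2 M Na+ and 0.1 M SO4^2-.
print(ionic_strength([(0.2, +1), (0.1, -2)]))   # 0.3
```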
Inert salt
[ "Physics", "Chemistry" ]
201
[ "Chemical compounds", "Molecules", "Salts", "Equilibrium chemistry", "nan", "Matter" ]
76,357,529
https://en.wikipedia.org/wiki/Rebecsinib
Rebecsinib (17S-FD-895) is an experimental anticancer medication derived by modification of the natural product Pladienolide B, which acts as an inhibitor of splicing-mediated activation of the enzyme ADAR1, and is in development as a potential treatment for leukemia. See also Enbezotinib Pralsetinib Resigratinib Selpercatinib Zeteletinib References Enzyme inhibitors Lactones Epoxides Acetate esters Methoxy compounds Triols Twelve-membered rings
Rebecsinib
[ "Chemistry" ]
118
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
76,358,134
https://en.wikipedia.org/wiki/Valery%20I.%20Levitas
Valery I. Levitas is a Ukrainian mechanics and materials scientist, academic and author. He is an Anson Marston Distinguished Professor and Murray Harpole Chair in Engineering at Iowa State University and was a faculty scientist at the Ames National Laboratory. Levitas is most known for his works on the mechanics of materials, stress- and strain-induced phase transformations and chemical reactions. Among his authored works are his publications in academic journals, including Science, Nature Communications and Nano Letters, as well as monographs such as Large Deformation of Materials with Complex Rheological Properties at Normal and High Pressure. He is the recipient of the 2018 Khan International Award for outstanding contributions to the field of plasticity. Education Levitas earned his M.S. in Mechanical Engineering from Kiev Polytechnic Institute in 1978, followed by a Ph.D. in Materials Science and Engineering from the Institute for Superhard Materials in 1981. In 1988, he completed a Doctor of Science degree in Continuum Mechanics from the Institute of Electronic Machine Building. Furthermore, in 1995, he obtained his Doctor-Engineer habilitation in Continuum Mechanics from the University of Hannover. Career Levitas commenced his academic journey in 1978 at the Institute for Superhard Materials of the Ukrainian Academy of Sciences in Kiev. From 1978 to 1981, he served as an engineer and then as a junior researcher from 1981 to 1984. During his tenure at the institute, he led a research group consisting of researchers and students from 1982 to 1994. Simultaneously, he held the positions of senior researcher from 1984 to 1988 and leading researcher from 1989 to 1994. Additionally, he founded the private research firm, Strength, in 1988. In 1993 he became a Humboldt Research Fellow at the Institute of Structural and Computational Mechanics at the University of Hannover, serving until 1995. From 1995 to 1999, he continued at the same institution as a research and visiting professor. In 1999, he transitioned to Texas Tech University, where he was an associate professor in the Department of Mechanical Engineering until 2002, and then a professor until 2008. He was also the Founding Director of the Center for Mechanochemistry and Synthesis of New Materials from 2002 till 2007. From 2008 to 2017, he served as the Schafer 2050 Challenge Professor in both the Department of Aerospace Engineering and the Department of Mechanical Engineering at Iowa State University. Between 2017 and 2023, he was the Vance Coffman Faculty Chair Professor in Aerospace Engineering, and since 2023 the Murray Harpole Chair in Engineering. Moreover, he has been the Anson Marston Distinguished Professor in Engineering since 2018, all at the same departments. In addition, he served as a faculty scientist at the Ames National Laboratory within the US Department of Energy from 2008 to 2023. Since 2002 he has also run the research and consulting firm Material Modeling. Research Levitas' research has focused on the interplay between plasticity and phase transformations across various scales through the creation of various methodologies. He pioneered the field of theoretical high-pressure mechanochemistry through the development of a comprehensive four-scale theory and simulations, spanning from first-principles and molecular dynamics methods to nano- and microscale phase-field approaches and macroscale treatment.
His work includes coupling theoretical frameworks with quantitative in-situ experiments using synchrotron radiation facilities to investigate phase transformations and plastic flow in various materials under high pressure and large deformations. These efforts resulted in the identification of novel phenomena and phases, methods for controlling phase transformations, and the search for new high-pressure materials. Additionally, his research has contributed to the determination of material properties such as transformational, structural, deformational, and frictional characteristics from high-throughput heterogeneous sample fields. His research team discovered and harnessed the phenomenon of "rotational plastic instability" to lower the required pressure for producing superhard cubic BN, reducing it from 55 to 5.6 GPa. In addition, their theoretical insights enabled a reduction in the transformation pressure from graphite to diamond, dropping it from 70 to 0.7 GPa through shear-induced plasticity. Moreover, his team unveiled a new amorphous phase of SiC; the self-blow-up phase transformation-induced plasticity-heating process explaining deep-focus earthquakes; the pressure self-focusing effect; and virtual melting at temperatures up to 5000 K below the melting point as a novel mechanism of solid phase transformation, stress relaxation, and plastic flow. Furthermore, his group introduced a mechanochemical melt dispersion mechanism to explain unusual phenomena in the combustion of Al particles at nano and micro scales, proposing significant advancements in particle synthesis, including the creation of prestressed particles, to enhance their energetic performance. He also advanced the phase-field approach to various phase transformations, dislocation evolution, fracture, surface-induced phenomena, and their interactions by introducing advanced mechanics, a large-strain formulation, and strict requirements, and by extending it to larger sample scales. Patents Levitas holds patents on 11 different inventions. They are mostly related to the development of high-pressure apparatuses for diamond synthesis and physical studies. They include a rotational diamond anvil cell. Awards and honors 1995 – Distinguished Paper Award, International Journal of Engineering Sciences 1998 – Richard von Mises Award, GAMM 2007 – ASME Fellow, American Society of Mechanical Engineers 2010 – Lifetime Achievement Award, International Biographical Centre 2011 – Honorary Doctor in Materials, Institute for Superhard Materials 2018 – Khan International Award 2023 – Member, EU Academy of Sciences 2023 – Member, European Academy of Sciences and Arts 2023 – IAAM Fellow, International Association of Advanced Materials Bibliography Books Large Deformation of Materials with Complex Rheological Properties at Normal and High Pressure (1996) ISBN 1560720859 Selected articles Levitas, V. I. (1998). Thermomechanical theory of martensitic phase transformations in inelastic materials. International Journal of Solids and Structures, 35(9–10), 889–940. Mielke, A., Theil, F., & Levitas, V. I. (2002). A Variational Formulation of Rate-Independent Phase Transformations Using an Extremum Principle. Archive for Rational Mechanics and Analysis, 162, 137–177. Levitas, V. I., & Preston, D. L. (2002). Three-dimensional Landau theory for multivariant stress-induced martensitic phase transformations. I. Austenite↔ martensite. Physical Review B, 66(13), 134206. Levitas, V. I., Asay, B. W., Son, S. F., & Pantoya, M. (2006).
Melt dispersion mechanism for fast reaction of nanothermites. Applied Physics Letters, 89(7) 071909. Hsieh S., Bhattacharyya P., Zu C., Mittiga T., Smart T. J., Machado F., Kobrin B., Höhn T. O., Rui N. Z., Kamrani M., Chatterjee S., Choi S., Zaletel M., Struzhkin V. V., Moore J. E., Levitas V. I., Jeanloz R., Yao N. Y. (2019) Imaging stress and magnetism at high pressures using a nanoscale quantum sensor. Science, 366, 1349–1354. Levitas V.I. and Samani K. (2011) Size and mechanics effects in surface-induced melting of nanoparticles. Nature Communications, 2, 284. References Materials scientists and engineers 21st-century American academics Kyiv Polytechnic Institute alumni University of Hanover alumni Iowa State University faculty 21st-century Ukrainian scientists Living people Year of birth missing (living people)
Valery I. Levitas
[ "Materials_science", "Engineering" ]
1,574
[ "Materials scientists and engineers", "Materials science" ]
76,358,228
https://en.wikipedia.org/wiki/Satellite-derived%20bathymetry
Satellite-derived bathymetry (SDB) is the calculation of shallow water depth from active or passive satellite imaging sensors. The technology requires a sensor (hardware) and relevant algorithms (software) to derive bathymetric measurements from the data recorded by the sensor. Methods SDB methods can provide bathymetric data with varying spatial resolution, depending on the analysis method and its underlying physics. The most common methods for coastal and very high resolution (1 to 30 meters) bathymetric data are based on multispectral satellite sensors and the analytical inversion of the radiative transfer equation (often referred to as physical SDB methods) or on empirical approaches. Other approaches rely on photogrammetry or visual image interpretation. Moderate resolution bathymetric data (50 to several hundred meters of spatial resolution) use multispectral or synthetic aperture radar satellite data and generate bathymetric information by the inversion of the wave dispersion equation. The International Hydrographic Organization (IHO) operates a project team on satellite-derived bathymetry supporting the Hydrographic Surveys Working Group. It has published the IHO publication B-13, Guidelines for Satellite-Derived Bathymetry, which provides more detailed information on each method. Applications In contrast to other bathymetric survey methods, such as ship-based echo sounding or airborne lidar bathymetry surveys, advanced satellite-derived bathymetry methods can be used to map the seabed morphology without physically being on-site. The frequent revisit times of satellites and historical data archives also allow continuous environmental monitoring of the shallow water zone. Satellite-derived bathymetry therefore finds uptake in applications that require mapping and monitoring of shallow waters that may be inaccessible or that cover significant geographical areas, and it supports charting in those areas. SDB data are part of the European harmonized bathymetry grid EMODnet Bathymetry, and as such are integrated into the General Bathymetric Chart of the Oceans. Software tools for the creation of satellite-derived bathymetry range from desktop-based cookbook solutions up to web-based software solutions. References Oceanography
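One widely used empirical approach, not detailed above, is the band log-ratio method of Stumpf et al. (2003), which exploits the differential attenuation of blue and green light with depth. The sketch below assumes illustrative calibration coefficients m0 and m1 (and the fixed constant n); in practice these are regressed against in-situ depth soundings.

```python
import numpy as np

# Empirical SDB via the band log-ratio of Stumpf et al. (2003):
#   z = m1 * ln(n * R_blue) / ln(n * R_green) - m0
# m0, m1 are site-specific coefficients calibrated against in-situ
# depths; the values below are illustrative placeholders.

def stumpf_depth(r_blue, r_green, m1=50.0, m0=40.0, n=1000.0):
    """Estimate water depth (m) from blue/green surface reflectances."""
    ratio = np.log(n * r_blue) / np.log(n * r_green)
    return m1 * ratio - m0

# Synthetic reflectances for three shallow-water pixels.
blue = np.array([0.012, 0.010, 0.008])
green = np.array([0.015, 0.011, 0.008])
print(stumpf_depth(blue, green))   # deeper water -> ratio closer to 1
```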
Satellite-derived bathymetry
[ "Physics", "Environmental_science" ]
421
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
76,358,418
https://en.wikipedia.org/wiki/Antoing%20cement%20kiln
The Antoing cement kiln is in the Belgian province of Hainaut. The facility is next to the Scheldt River in the Tournai region. It was built in 1929 under CBR, which was later taken over by Heidelberg Materials. History and ownership In 1910 the société anonyme Cimenteries et Briqueteries Réunies Bonne-Espérance et Loën (abbreviated CBR) was formed by the merger of two companies: La Bonne Esperance and Fabrique de Ciment Portland Artificiel de Loën. In 1929 CBR took over the Société Carrières et Fours à Chaux et à Ciment du Coucou in Antoing. Around 1975, crushing plants were installed on the Antoing site. In 1981, the Société Cimescaut (Société Générale des Ciments Portland de l'Escaut) became part of CBR together with its blue limestone quarry. In the same year the Antoing plant switched from the wet process to the dry process. In 1983, CBR started building a new clinker factory in Antoing. In 1986, a cement kiln was installed at the Antoing plant. In 1993 CBR was taken over by HeidelbergCement. Heidelberg Materials Heidelberg Materials is a listed building materials company based in Heidelberg, Germany. The company has been operating since 1873 and originated from a brewing family. Heidelberg is the current world leader in producing construction aggregates (293.7 million tonnes/year), the second largest in cement production (126.5 million tonnes/year) and the third largest producer of ready-mixed concrete (45 million m³/year). The Heidelberg Group operates in more than 50 countries and has 57,000 employees worldwide working at 3,000 production sites. Since 1993, it has also been active in Belgium through the acquisition of the company CBR. Because CBR was already a large multinational company, Heidelberg was able to grow strongly internationally. It owns several sites in Belgium including Antoing and Lixhe. Kiln Process At the Antoing site of Heidelberg Materials, there is one cement kiln that operates according to the dry process with a capacity of 3250 tons of clinker per day. This kiln line comprises three stages: the extraction, dosing and mixing of raw materials (steps 1 to 3); the kiln itself (steps 4 to 7); and the crushing of the resulting product (steps 8 to 10). The mining of raw materials is done in the quarry next to the cement kiln site. After mining, the raw materials are broken down, mixed and crushed into "raw powder". The crushing of the raw material is done in two ball mills with a capacity of 130 tons/h. The crushing process also produces a lot of noise, which can reach Lp = 140 dB, louder than a jet engine. The powder produced consists of three essential cement components: limestone (CaCO3), silica (SiO2) and alumina (Al2O3). Before the powder goes to the kiln, the composition of the three components is carefully monitored to ensure the right ratio. The powder can also sometimes be mixed with iron oxide, blast furnace slag from the steel industry or fly ash from coal combustion. After composing the raw powder, it is taken to the furnace. The furnace consists of two processes: the tower, where the calcination process (step 5) takes place, and the kiln, with the sintering process (step 6). The powder is first brought into the top of the 88 m high tower, where it is heated up to 900 °C and where the calcination of limestone into lime begins (CaCO3 → CaO + CO2). During this process, a lot of CO2 (68% of total emissions) is released. After the calcination process, the materials go to the sintering process in the kiln, which is located at the bottom of the tower.
The kiln consists of a 67 m long tube, 3.9 m in diameter, with metal on the outside and a fireproof brick lining on the inside. It is supported at three points and is placed at a slight incline of 2.5%. The kiln slides back and forth and rotates at a speed of 4.2 rotations per minute. In the sintering process, the powder is further heated to 1450 °C to activate the silica (SiO2) and allow it to combine with the lime and alumina (and iron oxide/blast furnace slag/fly ash) to form the clinker minerals. The result is clinker granules with a size of 3–25 mm, which are rapidly cooled to 150 °C before going into storage (step 7). After the kiln and the manufacturing of the clinker granules, they are stored in one large silo of 55,000 tons. The granules are then transported by ship to other Heidelberg Materials sites in Ghent, Rotterdam and IJmuiden, where they are crushed (steps 8 to 10). CO2 emissions During the processes in the kiln, CO2 is emitted into the atmosphere. 68% of these total emissions are released by the chemical process of calcination of limestone into lime (CaCO3 → CaO + CO2). The other 32% is released during the fossil fuel burning process needed to reach the temperature of 1800 °C. The kiln must remain constantly active to prevent uneven expansion in different zones and kiln deformation. A backup burner is provided for this purpose, as well as a backup for the backup. Shutting down the kiln is possible but takes 5 days; therefore this is only done once a year during the maintenance period. With a production of 3250 tons of clinker per day, or 1.15 million tons of clinker per year, and an emission of 0.717 tons of CO2 per tonne of clinker, this corresponds to an annual emission of 842.55 kilotons of CO2 in Antoing. Absorbing these emissions would require planting 33 million trees, or 65.9 thousand hectares of forest (comparable to 85% of the land surface of New York City). To put this further into perspective, a comparison can be made between the CO2 emissions of Antoing and the total CO2 emissions in Belgium. Belgian industry emitted 16,863 kilotons of CO2 in 2022, of which Antoing's cement kiln has a 4.9% share. Of the 2,443 kilotons of CO2 emitted by Belgian cement production in 2022, Antoing contributed 34.5%. To reduce environmental impact, fossil fuel is often replaced by industrial wastes as alternative fuels; however, not all fuels are suitable, such as those containing sulfates, alkanes and sulfites: the less these chemicals are used as fuel, the lower the emissions of the corresponding pollutants. In Antoing, coal is used as the main fuel component. The rest (65%) comes from alternative fuels such as fluff, animal meal and dried sludges, which are fed to the main burner together with pure O2 injection. The preheater is fed with fluff, saw dust, 3D plastic, dried sludges and animal meal. While these alternative fuels reduce CO2 emissions by reducing the proportion of coal, a critical look should also be taken at the alternative fuels themselves. Burning dried sludge, for example, releases a lot of methane (CH4), which is itself a harmful greenhouse gas and also deserves attention. Another technique Heidelberg Materials uses to reduce impact is cRCP, which is old concrete that has been broken down back into its three basic components: aggregates, sand and powder.
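Returning to the emission figures quoted above, they can be re-derived with a few lines of arithmetic. The sketch below uses only numbers from the text; the small mismatches it reveals (the quoted 842.55 kt implies slightly more clinker than the quoted 1.15 Mt) plausibly reflect rounding and the annual maintenance shutdown.

```python
# Re-deriving the Antoing emission figures quoted in the text.
daily_clinker_t = 3250          # tons of clinker per day (quoted)
annual_clinker_t = 1.15e6       # tons of clinker per year (quoted)
co2_per_t_clinker = 0.717       # tons of CO2 per ton of clinker (quoted)
quoted_annual_co2_kt = 842.55   # kilotons of CO2 per year (quoted)

# Running non-stop would give slightly more clinker than quoted,
# consistent with the once-a-year maintenance shutdown.
print(daily_clinker_t * 365 / 1e6)                  # ~1.19 Mt/year

# CO2 implied by the quoted annual clinker output.
print(annual_clinker_t * co2_per_t_clinker / 1e3)   # ~824.6 kt/year

# Share of the quoted total attributable to calcination (68%).
print(quoted_annual_co2_kt * 0.68)                  # ~573 kt/year
```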
The extracted powder can then be reactivated by reacting with CO2. This can serve as a replacement powder for fly ash or blast furnace slag, but cannot completely replace cement. This technology ensures that the CO2 that was released to make the cement for the old concrete is not lost, in effect recycling CO2 that was once emitted. Future plans To further reduce environmental impact in the future, Heidelberg Materials says it is working on the Anthemis project (Antoing Heidelberg Emissions Integrated Solutions), which is a CO2 capturing project and should be in place by 2029. For Antoing, the company aims for a 97% reduction of emissions, which amounts to 800,000 tons of CO2 per year. This will require an investment of 450 million euros. The process would work in three main steps: first, the flue gases are cooled down to 50 °C; then the CO2 is separated from the rest of the emissions; finally, this separated CO2 is compressed and prepared for transport. The captured CO2 would then be transported to harbors, where it will be taken by ship to Øygarden. Once it arrives, the CO2 would be further compressed and pumped into pipelines that take it 110 km offshore, where it can be stored beneath the seabed at a depth of 2.6 km. Concrete made with cement produced using this CO2 capture will then be given an evoZero logo. Adjoining quarry The Carrière Cimescaut limestone quarry is one of the three quarries present in Antoing and is an opencast mine where mainly blue limestone is extracted. Carrière Cimescaut has been owned by CBR since 1982 and is operated by Sagrex. Together with the adjacent quarries Carrière Lemay (Sagrex) and Carrière du Milieu (Holcim), the quarry covers an area of 240 hectares, comparable to 240 football fields. Carrière Lemay will in the future be united with Carrière Cimescaut to form one large limestone quarry; demolition of the separation wall between these two quarries has already begun. Annually, Carrière Cimescaut mines 2.3 million tons of limestone for making cement and aggregates (asphalt, concrete applications, etc.). From this, about 900,000 tons of clinker is produced and destined for the cement plants in Ghent, Rotterdam and IJmuiden. The remaining limestone is used as granulates. The entire production is transported via the Scheldt River. The quarry consists of eight benches with a height of 10–20 m. The limestone is mined by controlled explosions. Each blast uses about 5 tons of dynamite and yields about 20,000 tons of limestone. After mining, the limestone is crushed with a cone crusher and divided into poor and rich limestone. A conveyor belt transports the limestone to the cement plant's storage silos. The rock in the Antoing region dates from the Triassic and Cretaceous geological eras and was formed between about 245 and 145 million years ago. It consists mainly of dark gray to black, clayey limestones divided into horizontal banks of 20 to 80 cm. This is called the formation of Antoing. Four layers are distinguishable in this formation, from top to bottom: the Member of Warchin, the Member of Gaurain-Ramecroix, the Calonne supérieure and the Calonne inférieure. References Kilns Antoing Buildings and structures in Hainaut (province) Buildings and structures completed in 1929 1929 establishments in Belgium Cement
Antoing cement kiln
[ "Chemistry", "Engineering" ]
2,351
[ "Chemical equipment", "Kilns" ]
76,359,420
https://en.wikipedia.org/wiki/K%C3%A1rm%C3%A1n%E2%80%93Moore%20theory
Kármán–Moore theory is a linearized theory for supersonic flows over a slender body, named after Theodore von Kármán and Norton B. Moore, who developed the theory in 1932. The theory, in particular, provides an explicit formula for the wave drag, which converts the kinetic energy of the moving body into outgoing sound waves behind the body. Mathematical description Consider a slender body with pointed edges at the front and back. The supersonic flow past this body will be nearly parallel to the $x$-axis everywhere since the shock waves formed (one at the leading edge and one at the trailing edge) will be weak; as a consequence, the flow will be potential everywhere, and it can be described using the velocity potential $Ux+\phi$, where $U$ is the incoming uniform velocity and $\phi$ characterises the small deviation from the uniform flow. In the linearized theory, $\phi$ satisfies

$$\frac{\partial^2\phi}{\partial y^2}+\frac{\partial^2\phi}{\partial z^2}-\beta^2\frac{\partial^2\phi}{\partial x^2}=0,\qquad \beta^2=M^2-1,$$

where $c$ is the sound speed in the incoming flow and $M=U/c>1$ is the Mach number of the incoming flow. This is just the two-dimensional wave equation, and $\phi$ is a disturbance propagated in the $(y,z)$-plane with $x/U$ playing the role of an apparent time and with an apparent velocity $U/\beta$. Let the origin be located at the leading end of the pointed body. Further, let $S(x)$ be the cross-sectional area (perpendicular to the $x$-axis) and $l$ be the length of the slender body, so that $S(x)=0$ for $x\le 0$ and for $x\ge l$. Of course, in supersonic flows, disturbances (i.e., $\phi\neq 0$) can be propagated only into the region behind the Mach cone. The weak Mach cone for the leading edge is given by $x=\beta r$, whereas the weak Mach cone for the trailing edge is given by $x-l=\beta r$, where $r^2=y^2+z^2$ is the squared radial distance from the $x$-axis. The disturbance far away from the body is just like a cylindrical wave propagation. In front of the cone $x=\beta r$, the solution is simply given by $\phi=0$. Between the cones $x=\beta r$ and $x-l=\beta r$, the solution is given by

$$\phi=-\frac{U}{2\pi}\int_0^{x-\beta r}\frac{S'(\xi)\,d\xi}{\sqrt{(x-\xi)^2-\beta^2r^2}},$$

whereas behind the cone $x-l=\beta r$, the solution is given by

$$\phi=-\frac{U}{2\pi}\int_0^{l}\frac{S'(\xi)\,d\xi}{\sqrt{(x-\xi)^2-\beta^2r^2}}.$$

The solution described above is exact for all $r$ when the slender body is a solid of revolution. If this is not the case, the solution is valid at large distances and will have corrections associated with the non-linear distortion of the shock profile, whose strength is proportional to a factor depending on the shape function $S(x)$. The drag force is just the $x$-component of the momentum per unit time. To calculate this, consider a cylindrical surface with a large radius $r=R$ and with an axis along the $x$-axis. The momentum flux density crossing through this surface is simply given by $\rho v_x v_r=\rho\,(U+\partial\phi/\partial x)\,\partial\phi/\partial r$. Integrating over the cylindrical surface gives the drag force. Due to symmetry, the first term, $\rho U\,\partial\phi/\partial r$, upon integration gives zero since the net mass flux is zero on the cylindrical surface considered. The second term gives the non-zero contribution,

$$F=-2\pi R\,\rho\int\frac{\partial\phi}{\partial x}\frac{\partial\phi}{\partial r}\,dx.$$

At large distances, the values $x-\beta r\sim l$ (the wave region) are the most important in the solution for $\phi$; this is because, as mentioned earlier, $\phi$ is like a disturbance propagating with a speed $U/\beta$ with an apparent time $x/U$. This means that we can approximate the expression in the denominator as

$$\sqrt{(x-\xi)^2-\beta^2r^2}=\sqrt{(x-\xi-\beta r)(x-\xi+\beta r)}\approx\sqrt{2\beta r}\,\sqrt{x-\beta r-\xi}.$$

Then we can write, for example,

$$\phi=-\frac{U}{2\pi\sqrt{2\beta r}}\int_0^{x-\beta r}\frac{S'(\xi)\,d\xi}{\sqrt{x-\beta r-\xi}}.$$

From this expression, we can calculate $\partial\phi/\partial x$, which is also equal to $-(1/\beta)\,\partial\phi/\partial r$ since we are in the wave region. The factor $1/\sqrt{2\beta r}$ appearing in front of the integral need not be differentiated since this gives rise to a small correction proportional to $1/r$. Effecting the differentiation and returning to the original variables, we find

$$\frac{\partial\phi}{\partial x}=-\frac{U}{2\pi\sqrt{2\beta r}}\int_0^{x-\beta r}\frac{S''(\xi)\,d\xi}{\sqrt{x-\beta r-\xi}}.$$

Substituting this in the drag force formula gives us

$$F=\frac{\rho U^2}{4\pi}\int\left[\int_0^{x-\beta r}\frac{S''(\xi)\,d\xi}{\sqrt{x-\beta r-\xi}}\right]^2 dx.$$

This can be simplified by carrying out the integration over $x$. When the integration order is changed, the integration over $x-\beta r$ ranges from the larger of $\xi_1$ and $\xi_2$ to the (arbitrarily large) outer limit. Upon integration, logarithmic factors appear; the integral containing the logarithm of the outer limit is zero because $\int_0^l S''(\xi)\,d\xi=S'(l)=0$ (of course, in addition to $S'(0)=0$).
The final formula for the wave drag force may be written as

$$F=-\frac{\rho U^2}{4\pi}\int_0^l\int_0^l S''(\xi_1)\,S''(\xi_2)\,\ln|\xi_1-\xi_2|\,d\xi_1\,d\xi_2,$$

or

$$F=\frac{\rho U^2}{4\pi}\int_0^l\int_0^l S''(\xi_1)\,S''(\xi_2)\,\ln\frac{1}{|\xi_1-\xi_2|}\,d\xi_1\,d\xi_2.$$

The drag coefficient is then given by

$$C_d=\frac{F}{\tfrac{1}{2}\rho U^2 l^2}.$$

Since $S''\sim S_{\max}/l^2$, it follows from the formula derived above that $C_d\propto S_{\max}^2/l^4$, indicating that the drag coefficient is proportional to the square of the cross-sectional area and inversely proportional to the fourth power of the body length. The shape with smallest wave drag for a given volume and length can be obtained from the wave drag force formula. This shape is known as the Sears–Haack body. See also Taylor–Maccoll flow References Fluid dynamics
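As a rough numerical illustration of the drag formula (not from the original article; all parameters are illustrative and the integrable log singularity is handled crudely), the double integral can be evaluated on a grid for a parabolic-arc body of revolution:

```python
import numpy as np

# Numerical sketch of the Karman-Moore wave drag
#   F = -(rho U^2 / 4 pi) * Int Int S''(x1) S''(x2) ln|x1 - x2| dx1 dx2
# for an illustrative parabolic-arc body of revolution,
# r(x) = 4 r_max (x/l)(1 - x/l), which has S'(0) = S'(l) = 0 as the
# derivation requires.

rho, U = 1.225, 680.0    # air density (kg/m^3) and speed (m/s), illustrative
l, r_max = 10.0, 0.5     # body length and maximum radius (m), illustrative

n = 2000
h = l / n
x = (np.arange(n) + 0.5) * h   # midpoint grid over (0, l)

S = np.pi * (4 * r_max * (x / l) * (1 - x / l))**2   # cross-sectional area
Spp = np.gradient(np.gradient(S, h), h)              # S'' by finite differences

X1, X2 = np.meshgrid(x, x)
K = np.log(np.abs(X1 - X2) + 1e-12)   # log kernel; crude guard on the diagonal
F = -(rho * U**2 / (4 * np.pi)) * (Spp @ K @ Spp) * h * h
print(f"Wave drag ~ {F / 1e3:.0f} kN")   # positive, i.e. a drag force
```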
Kármán–Moore theory
[ "Chemistry", "Engineering" ]
816
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
76,359,810
https://en.wikipedia.org/wiki/Classifying%20space%20for%20SO%28n%29
In mathematics, the classifying space for the special orthogonal group $\operatorname{SO}(n)$ is the base space $B\operatorname{SO}(n)$ of the universal principal $\operatorname{SO}(n)$-bundle $E\operatorname{SO}(n)\to B\operatorname{SO}(n)$. This means that principal $\operatorname{SO}(n)$-bundles over a CW complex up to isomorphism are in bijection with homotopy classes of its continuous maps into $B\operatorname{SO}(n)$. The isomorphism is given by pullback. Definition There is a canonical inclusion of real oriented Grassmannians given by $\widetilde{\operatorname{Gr}}_n(\mathbb{R}^k)\hookrightarrow\widetilde{\operatorname{Gr}}_n(\mathbb{R}^{k+1}),\ V\mapsto V$. Its colimit is $B\operatorname{SO}(n):=\widetilde{\operatorname{Gr}}_n(\mathbb{R}^\infty)=\varinjlim_k\widetilde{\operatorname{Gr}}_n(\mathbb{R}^k)$. Since real oriented Grassmannians can be expressed as homogeneous spaces by $\widetilde{\operatorname{Gr}}_n(\mathbb{R}^k)\cong\operatorname{SO}(k)/(\operatorname{SO}(n)\times\operatorname{SO}(k-n))$, the group structure carries over to $B\operatorname{SO}(n)$. Simplest classifying spaces Since $\operatorname{SO}(1)$ is the trivial group, $B\operatorname{SO}(1)$ is the trivial topological space. Since $\operatorname{SO}(2)\cong\operatorname{U}(1)$, one has $B\operatorname{SO}(2)\cong B\operatorname{U}(1)\cong\mathbb{CP}^\infty$. Classification of principal bundles Given a topological space $X$, the set of principal $\operatorname{SO}(n)$-bundles on it up to isomorphism is denoted $\operatorname{Prin}_{\operatorname{SO}(n)}(X)$. If $X$ is a CW complex, then the map $[X,B\operatorname{SO}(n)]\to\operatorname{Prin}_{\operatorname{SO}(n)}(X),\ [f]\mapsto f^*E\operatorname{SO}(n)$ is bijective. Cohomology ring The cohomology ring of $B\operatorname{SO}(n)$ with coefficients in the field $\mathbb{Z}_2$ of two elements is generated by the Stiefel–Whitney classes: $H^*(B\operatorname{SO}(n);\mathbb{Z}_2)=\mathbb{Z}_2[w_2,\ldots,w_n]$. The result holds more generally for every ring with characteristic $2$. The cohomology ring of $B\operatorname{SO}(n)$ with coefficients in the field $\mathbb{Q}$ of rational numbers is generated by Pontrjagin classes and the Euler class: $H^*(B\operatorname{SO}(2n);\mathbb{Q})=\mathbb{Q}[p_1,\ldots,p_{n-1},e]$ and $H^*(B\operatorname{SO}(2n+1);\mathbb{Q})=\mathbb{Q}[p_1,\ldots,p_n]$. The result holds more generally for every ring with characteristic $0$. Infinite classifying space The canonical inclusions $\operatorname{SO}(n)\hookrightarrow\operatorname{SO}(n+1)$ induce canonical inclusions $B\operatorname{SO}(n)\hookrightarrow B\operatorname{SO}(n+1)$ on their respective classifying spaces. Their respective colimits are denoted as $\operatorname{SO}:=\varinjlim_n\operatorname{SO}(n)$ and $B\operatorname{SO}:=\varinjlim_n B\operatorname{SO}(n)$. $B\operatorname{SO}$ is indeed the classifying space of $\operatorname{SO}$. See also Classifying space for O(n) Classifying space for U(n) Classifying space for SU(n) Literature External links classifying space on nLab BSO(n) on nLab References Algebraic topology
Classifying space for SO(n)
[ "Mathematics" ]
323
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
76,359,866
https://en.wikipedia.org/wiki/Classifying%20space%20for%20SU%28n%29
In mathematics, the classifying space for the special unitary group $\operatorname{SU}(n)$ is the base space $B\operatorname{SU}(n)$ of the universal principal $\operatorname{SU}(n)$-bundle $E\operatorname{SU}(n)\to B\operatorname{SU}(n)$. This means that principal $\operatorname{SU}(n)$-bundles over a CW complex up to isomorphism are in bijection with homotopy classes of its continuous maps into $B\operatorname{SU}(n)$. The isomorphism is given by pullback. Definition There is a canonical inclusion of complex oriented Grassmannians given by $\widetilde{\operatorname{Gr}}_n(\mathbb{C}^k)\hookrightarrow\widetilde{\operatorname{Gr}}_n(\mathbb{C}^{k+1}),\ V\mapsto V$. Its colimit is $B\operatorname{SU}(n):=\widetilde{\operatorname{Gr}}_n(\mathbb{C}^\infty)=\varinjlim_k\widetilde{\operatorname{Gr}}_n(\mathbb{C}^k)$. Since complex oriented Grassmannians can be expressed as homogeneous spaces by $\widetilde{\operatorname{Gr}}_n(\mathbb{C}^k)\cong\operatorname{SU}(k)/(\operatorname{SU}(n)\times\operatorname{SU}(k-n))$, the group structure carries over to $B\operatorname{SU}(n)$. Simplest classifying spaces Since $\operatorname{SU}(1)$ is the trivial group, $B\operatorname{SU}(1)$ is the trivial topological space. Since $\operatorname{SU}(2)\cong\operatorname{Sp}(1)$, one has $B\operatorname{SU}(2)\cong B\operatorname{Sp}(1)\cong\mathbb{HP}^\infty$. Classification of principal bundles Given a topological space $X$, the set of principal $\operatorname{SU}(n)$-bundles on it up to isomorphism is denoted $\operatorname{Prin}_{\operatorname{SU}(n)}(X)$. If $X$ is a CW complex, then the map $[X,B\operatorname{SU}(n)]\to\operatorname{Prin}_{\operatorname{SU}(n)}(X),\ [f]\mapsto f^*E\operatorname{SU}(n)$ is bijective. Cohomology ring The cohomology ring of $B\operatorname{SU}(n)$ with coefficients in the ring $\mathbb{Z}$ of integers is generated by the Chern classes: $H^*(B\operatorname{SU}(n);\mathbb{Z})=\mathbb{Z}[c_2,\ldots,c_n]$. Infinite classifying space The canonical inclusions $\operatorname{SU}(n)\hookrightarrow\operatorname{SU}(n+1)$ induce canonical inclusions $B\operatorname{SU}(n)\hookrightarrow B\operatorname{SU}(n+1)$ on their respective classifying spaces. Their respective colimits are denoted as $\operatorname{SU}:=\varinjlim_n\operatorname{SU}(n)$ and $B\operatorname{SU}:=\varinjlim_n B\operatorname{SU}(n)$. $B\operatorname{SU}$ is indeed the classifying space of $\operatorname{SU}$. See also Classifying space for O(n) Classifying space for SO(n) Classifying space for U(n) Literature External links classifying space on nLab BSU(n) on nLab References Algebraic topology
Classifying space for SU(n)
[ "Mathematics" ]
269
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
76,361,821
https://en.wikipedia.org/wiki/Buellia%20oidalea
Buellia oidalea is a species of crustose lichen found along the Pacific coast of North America, from Coos County, Oregon to Baja California Sur. Morphology The thallus of B. oidalea is crustose, varying from thin and rimose-areolate to thick and rugose-verrucose or even subsquamulose. The prothallus is often present, appearing black. The thallus surface is yellowish white to glaucous gray, smooth, and esorediate. The medulla is white and lacks calcium oxalate. The apothecia of Buellia oidalea are lecideine and commonly found, ranging from 0.2 to 2 mm in diameter, and are sessile in nature. Initially, the disc is black, devoid of pruina, and flat, but as it matures, it becomes convex. The margin starts as distinct but eventually becomes excluded and black. The proper exciple measures between 35 and 95 μm in thickness, lacking secondary metabolites, and appears uniformly dark brown throughout, with carbonized cells smaller than 6 μm. It is transient and accompanied by a brown hypothecium, which is less than 280 μm thick. The epihymenium shares a continuous brown pigmentation with the outer exciple. The hymenium is hyaline, generously interspersed with oil droplets, and measures 115 to 165 μm in height. The tips of the paraphyses typically measure less than 3 μm in width and are adorned with distinct apical caps. Asci are clavate, of Bacidia-type, measuring 95 to 112 x 20 to 40 μm, and typically contain 8 spores. The ascospores of Buellia oidalea typically exhibit a hyaline to ±olive coloration, eventually transitioning to brown, while often retaining unpigmented apices. They possess a muriform structure, comprising 13–40 cells in optical section, and are ellipsoid to oblong in shape, measuring (29.5-)33.7-[39.6]-45.5(-57) x (12.5-)13.3-[15.5]-17.7(-26.5) μm. The ascospores feature apical, lateral, and septal wall thickenings, with the apical thickenings often being permanent. Their proper wall is approximately 1.2 μm thick, and they lack a perispore, with no discernible ornamentation visible under DIC. Pycnidia are rare, immersed with only the uppermost part protruding, and the wall is mainly pigmented in the upper part. The conidia are bacilliform, 4-6 x 1 μm. Chemistry Spot tests show the thallus is K-, C+ orange (best seen under the microscope), P-. The medulla is K-, C-, P-. The lichen exhibits UV+ bright or pale yellow to orange fluorescence, and the medulla is nonamyloid in iodine reaction. The secondary chemistry includes diploicin (major), isofulgidin (minor), an unknown (minor) compound, and 2,5-dichloro-3-O-methylnorlichexanthone (trace). Ecology and distribution Buellia oidalea thrives on the bark and wood surfaces of trunks, branches, and twigs found on both broad-leaved and coniferous trees and shrubs. Its habitat encompasses various open environments along the Pacific coast, including dune areas, salt marshes, chaparral, and coastal deserts. This species is predominantly found along the Pacific coast and islands of North America, spanning from Coos County, Oregon to Baja California Sur. Notably, within the Sonoran region, it has been documented in the coastal fog zones of southern California, Baja California, and Baja California Sur. Distinguishing features Buellia oidalea is distinguished by its prominent muriform ascospores. While resembling Buellia oidaliella, it sets itself apart with notably larger spores featuring thickened, frequently unpigmented apices, along with a taller hymenium, and the lack of calcium oxalate. References oidalea Fungus species
Buellia oidalea
[ "Biology" ]
898
[ "Fungi", "Fungus species" ]
76,361,889
https://en.wikipedia.org/wiki/Domaine%20Ylang%20Ylang
Domaine Ylang Ylang is the oldest distillery in Mauritius to produce oil from the ylang ylang (Cananga odorata) tree. The plantation of ylang ylang trees alongside the distillery provided the flowers from which perfume was distilled. The Domaine Ylang Ylang, further up the Kestrel Valley, commands a view onto the wide lagoon of Vieux Grand Port of Mahebourg and the neighbouring islets on the central east coast of Mauritius. Although the distillery, once popular with tourists looking for essential oils and aromatherapy, stopped functioning after 2002, the laboratory and building can still be seen. Domaine du Chasseur In the adjoining Kestrel Valley (also historically known as the Domaine du Chasseur Game Park and Reserve), forest and nature hiking are available for nature-lovers over an area of more than 200 ha where the Mauritius kestrel (Falco punctatus) can be seen. It is an important protected area for many other endemic plant and animal species in Mauritius. References Grand Port District Tourist attractions in Mauritius Landforms of Mauritius Environment of Mauritius Protected areas of Mauritius Valleys of Africa Companies of Mauritius Insular ecology Important Bird Areas of Mauritius Biodiversity
Domaine Ylang Ylang
[ "Biology" ]
241
[ "Biodiversity" ]
76,361,904
https://en.wikipedia.org/wiki/Calonarius%20viridirubescens
Calonarius viridirubescens is a species of gilled mushroom. First described to science in 1997, this species was previously classified as Cortinarius viridirubescens, and is thus commonly known as the yellow-green cort. This California endemic mushroom's coloration is distinctive, with a chartreuse stipe complementing its yellow-green cap (the color can range from grass green to rusty orange). This mushroom has the "enlarged" bulb at the base that is typical of Cortinarius species, stains red in age, and according to the authors of Mushrooms of the Redwood Coast, has a "prominent cobwebby cortina of whitish yellow to light greenish yellow fibers over much of cap and stipe when young", but this feature is "ephemeral and often absent at maturity." The species is typically found in oak woodlands; its fruiting triggers and edibility remain undescribed. References Cortinariaceae Fungi of North America Taxa named by Meinhard Michael Moser Fungus species Fungi described in 1997
Calonarius viridirubescens
[ "Biology" ]
226
[ "Fungi", "Fungus species" ]
76,362,010
https://en.wikipedia.org/wiki/Buellia%20nashii
Buellia nashii is a species of lichen characterized by its crustose thallus, typically found in the Sonoran Desert Region and adjacent areas. It was first described by Bungartz et al. The species is named in honor of Dr. Thomas H. Nash III, a notable lichenologist and the Ph.D. supervisor of the author. Morphology The thallus forms a dense crust, with colors ranging from ivory to deep brown or gray. Its surface varies from smooth to deeply fissured, sometimes adorned with fine or coarse pruina. Apothecia are lecideine (lacking a thalline margin), sessile and predominantly black, often with thin to thick margins. As they mature, the disc typically darkens and becomes convex. Ascospores are brown, with a single septum, and are shaped either oblong or ellipsoid. Pycnidia are infrequent and take on an urceolate to globose form, housing bacilliform conidia within. Chemistry Typically, Buellia nashii contains the depside atranorin and the depsidones norstictic and connorstictic acids. However, some specimens may lack norstictic acid and instead contain stictic and hypostictic acids. Spot tests usually result in K+ yellow to red, P+ yellow reactions, and negative reactions for C, KC, and CK. The thallus is not amyloid, but apothecia react amyloid in Lugol's solution. Ecology Buellia nashii is commonly found on a variety of siliceous rock substrates, occasionally on sandstones with small amounts of carbonates. It thrives in arid environments, particularly in the Sonoran Desert Region. Distribution The species has a wide distribution throughout the Sonoran Desert Region and adjacent areas, such as Arizona, southern California, Baja California, Baja California Sur, and Chihuahua. Identification Buellia nashii closely resembles B. dispersa but can be distinguished by its chemistry and exciple pigmentation. While both species have similar thalli, B. nashii contains norstictic acid and exhibits a characteristic aeruginose pigment in the outer exciple cells. References nashii Fungus species Fungi described in 2004
Buellia nashii
[ "Biology" ]
477
[ "Fungi", "Fungus species" ]
76,362,145
https://en.wikipedia.org/wiki/Rolling%20contact%20fatigue
Rolling contact fatigue (RCF) is a phenomenon that occurs in mechanical components subjected to rolling/sliding contact, such as railways, gears, and bearings. It results from material fatigue under repeated rolling/sliding contact. The RCF process begins with cyclic loading of the material, which produces fatigue damage observable as crack-like flaws, such as white etching cracks. Under further loading these flaws can grow into larger cracks, potentially leading to fracture. In railways, for example, the train wheel rolls on the rail over a small contact patch, which leads to very high contact pressure between the rail and wheel. Over time, the repeated passage of wheels at high contact pressure can cause the formation of crack-like flaws that become small cracks. These cracks can grow and sometimes join, leading to either surface spalling or rail break, which can cause serious accidents, including derailments. RCF is a major concern for railways worldwide and can take various forms depending on the location of the crack and its appearance. It is also a significant cause of failure in components subjected to rolling or rolling/sliding contacts, such as rolling-contact bearings, gears, and cam/tappet arrangements. The alternating stress field in RCF can lead to material removal, varying from micro- and macro-pitting in conventional bearing steels to delamination in hybrid ceramics and overlay coatings. Basics Testing Testing for RCF involves several methods, each designed to simulate the conditions that cause RCF in a controlled environment. Here are some of the methods used: Twin-Disc Stands: This method uses two discs to simulate the wear that occurs between rails and wheels. Scaled RCF Tests: These tests use two discs of different diameters. Three-Ball-on-Rod Tester: This is an economical RCF proof-of-concept test. It is performed to evaluate the influence of heat treatment, material, lubricant, and coatings on fatigue life. Lundberg-Palmgren Theory and ISO 281 Based Method: This method evaluates RCF reliability from the contact load, the geometric parameters of the contact pairs, the oscillation amplitude, and the material properties (see the calculation sketch below). See also References Materials degradation Mechanical failure modes Tribology Friction
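To make the Lundberg-Palmgren/ISO 281 entry above concrete, here is a minimal Python sketch of the ISO 281 basic rating life calculation, the simplest form of that method. The function name and the example load values are illustrative assumptions; the underlying formula, L10 = (C/P)^p with p = 3 for ball bearings and p = 10/3 for roller bearings, is the standard basic rating life expression.

def basic_rating_life_l10(c_rating, equivalent_load, bearing_type="ball"):
    """ISO 281 basic rating life L10, in millions of revolutions.

    c_rating: basic dynamic load rating C of the bearing, in newtons.
    equivalent_load: equivalent dynamic bearing load P, in newtons.
    bearing_type: "ball" (exponent p = 3) or "roller" (p = 10/3).
    """
    p = 3.0 if bearing_type == "ball" else 10.0 / 3.0
    return (c_rating / equivalent_load) ** p

# Assumed example values: a ball bearing rated at C = 30 kN carrying an
# equivalent load of P = 5 kN gives (30/5)**3 = 216 million revolutions.
print(basic_rating_life_l10(30_000, 5_000))

L10 is the life that 90% of a group of identical bearings is expected to reach or exceed; the full ISO 281 method extends this basic form with reliability, lubrication, and contamination modification factors.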
Rolling contact fatigue
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
460
[ "Mechanical phenomena", "Tribology", "Physical phenomena", "Force", "Friction", "Physical quantities", "Mechanical failure modes", "Structural engineering", "Technological failures", "Materials science", "Surface science", "Mechanical engineering", "Materials degradation", "Mechanical fail...
76,363,889
https://en.wikipedia.org/wiki/Pulveroboletus%20sinensis
Pulveroboletus sinensis is a species of blue-staining bolete fungus of the family Boletaceae and the genus Pulveroboletus found in the Guangdong Province in China. It is characterized by its yellow to vivid-yellow pulverulent cap, which can darken to a deep orange at the center and is covered with deep-orange to brown-orange conico-pyramidal scales. The cap is cyanescent when cut or bruised, while the stem does not stain. These scales distinguish P. sinensis from the closely related species Pulveroboletus brunneopunctatus. Solitary fruiting bodies form from July to October on soil, underneath conifers mixed with broadleaf trees at altitudes of . Currently this species has only been identified in Heishiding, Guangdong Province. References sinensis Fungi described in 2016 Fungus species
Pulveroboletus sinensis
[ "Biology" ]
178
[ "Fungi", "Fungus species" ]
76,364,102
https://en.wikipedia.org/wiki/Exobasidium%20arctostaphyli
Exobasidium arctostaphyli is a species of parasitic fungus that induces witch's broom galls and leaf spots on manzanita trees. See also Exobasidium vaccinii References Further reading Fungi of North America Gall-inducing fungi Ustilaginomycotina Fungus species Taxa named by H. W. Harkness
Exobasidium arctostaphyli
[ "Biology" ]
74
[ "Gall-inducing fungi", "Fungi", "Fungus species" ]
76,364,624
https://en.wikipedia.org/wiki/Pileolaria%20brevipes
Pileolaria brevipes, also known as poison ivy rust, is a species of autoecious fungus in the order Pucciniales. Pileolaria brevipes parasitizes Toxicodendron diversilobum and Toxicodendron radicans. The color of this rust comes from "its asexual spores called urediospores". Poison ivy rust infections become evident in spring as "light pink to dark red swellings on leaflet veins or petioles. By late June, swellings have darkened to brown, and grown. The leaflet or leaf typically exhibits a curled morphology and oftentimes is wilted. Individuals infected with P. brevipes are less prone to flower compared to adjacent, noninfected congeners". Damaging arthropod associates of Toxicodendron species include Epipaschia zelleri and the poison ivy sawfly. References Fungus species Pucciniales
Pileolaria brevipes
[ "Biology" ]
197
[ "Fungus stubs", "Fungi", "Fungus species" ]
76,366,445
https://en.wikipedia.org/wiki/Secured%20by%20Design
Secured by Design (SBD) is a police initiative in the UK that advises on the construction of buildings and development schemes to encourage the adoption of techniques or designs that are thought to discourage crime. SBD recommendations are included in the National Model Design Code, and the programme incorporates the training of police officers as Designing Out Crime Officers (DOCOs), who are referred to in the National Planning Policy Framework, giving the initiative wide influence over British construction. History Secured by Design was created in 1989 as a response to perceived failings of the estates built in the UK's postwar era, with two focuses: the vulnerability of certain construction methods, such as doors or glazing that were considered easy for burglars to bypass; and the wider design of housing estates or urban areas, which often incorporated pedestrian routes that were thought to create escape routes for criminals. In the modern day, Secured by Design has developed into a wide set of standards and design approaches, including on road and footpath layouts, lighting, street furniture, public or communal spaces (like parks and playgrounds), fencing, planting, building positioning, and the materials used in construction. Urbanist Adam Greenfield argues that Secured by Design is based on the principles of Defensible Space, developed in the 1970s by architect Oscar Newman. The scheme is administered by Police Crime Prevention Initiatives, a limited company set up by the ACPO. After the ACPO's dissolution, ownership was transferred to the London Mayor's Office for Policing and Crime. The Director of ACPO Secured by Design in 2012 described the goals of the scheme as "reducing crime and the fear of crime through a blend of design and realistic physical security", claiming that "independent research shows that SBD properties suffer 50% less burglary", with further benefits across other forms of crime and the fear of crime. Criticism Some urbanists and planners have criticised the power the police have over construction and public spaces. Urbanist Phineas Harper described the impact of the scheme as "rooted in systemic prejudices rather than community-centred design principles" and noted Secured by Design's promotion of cul-de-sacs, which have been increasingly criticised by urban planners for encouraging sprawl and unsustainable living. Architect Russell Curtis criticised the scheme's standards as "diametrically opposed to good placemaking", citing examples from his practice where Designing Out Crime Officers had called for public areas to be gated off and arguing that they had too much influence over local authority planners. Echoing these criticisms, Gloucester City Council decarbonisation lead Jon Burke argued that SBD's recommendation to remove trees and greenery may also harm communities and increase overall crime, citing studies by the Yale School of Public Health that found increasing urban tree canopy cover decreased violent and property crime. Secured by Design has also been criticised for the removal of street furniture and planting, including a flower walkway in Southwark that was said to be blocking CCTV sight lines, benches and shrubs in parks in Ashford, and foot and bike paths in Horsham. However, the UK police have defended Secured by Design's principles. Responding to Harper's criticisms, Matthew Scott, the Police and Crime Commissioner for Kent, claimed local residents in Ashford supported action to reduce anti-social behaviour.
Jon Cole, the COO of Police Crime Prevention Initiatives, argued that criticisms are based on misconceptions of the scheme, that cul-de-sacs are self-policing, and that overall "there is sometimes a little bit of inconvenience, but one which buys enormous amounts of safety." References Urban design Criminology Crime prevention United Kingdom planning policy Security engineering Environmental psychology Law enforcement in the United Kingdom
Secured by Design
[ "Engineering", "Environmental_science" ]
749
[ "Systems engineering", "Security engineering", "Environmental social science", "Environmental psychology" ]
76,366,904
https://en.wikipedia.org/wiki/Pileolaria%20%28fungus%29
Pileolaria is a genus of autoecious rust fungi. They are considered plant pathogens and preferentially infect members of the sumac family. Selected species There are about 20 species in Pileolaria. Pileolaria brevipes Pileolaria cotini-coggygyriae Pileolaria terebinthi References Pucciniales Fungal plant pathogens and diseases Basidiomycota genera
Pileolaria (fungus)
[ "Biology" ]
87
[ "Fungus stubs", "Fungi" ]
76,367,043
https://en.wikipedia.org/wiki/Viral%20epitranscriptome
The viral epitranscriptome includes all modifications to viral transcripts, studied by viral epitranscriptomics. Like the more general epitranscriptome, these modifications do not affect the sequence of the transcript, but rather have consequences for its subsequent structure and function. History The discovery of mRNA modifications dates back to 1957 with the discovery of the pseudouridine modification. Many of these modifications were found in the noncoding regions of cellular RNA. Once these modifications were discovered in mRNA, discoveries in viral transcripts soon followed. Detection has been aided by the advancement and use of new techniques such as m6A-seq. Mechanisms Complexes Viral RNA modifications use the same machinery as cellular RNA. This involves the use of "writer" and "reader" complexes. The writer complex contains the enzyme methyltransferase-like 3 (METTL3) and its cofactors, such as METTL14, WTAP, KIAA1429 and RBM15/RBM15B, which add the m6A modification in the nucleus. Proteins of the YTH family, such as YTHDC1 and YTHDC2, are capable of detecting these modifications within the nucleus. In the cytoplasm, the reading duties are carried out by YTHDF1, YTHDF2, and YTHDF3. The proteins ALKBH5 and FTO remove the m6A modification, functionally serving as erasers, with the latter having a more restricted selectivity depending on the position of the modification. N6-Methyladenosine (m6A) This modification involves the addition of a methyl (-CH3) group to the sixth nitrogen of the adenine base in an mRNA molecule. It was among the first mRNA modifications to be discovered, in 1974. The modification is common in viral mRNA transcripts and is found in nearly 25% of them. The distribution of the modification is not uniform, with some transcripts containing more than 10. m6A modification is a dynamic process with many roles, ranging from viral interactions with cellular machinery and structural adjustments to control of the viral life cycle. Studies have shown different regulatory patterns for different viruses depending on the context. For single-stranded RNA viruses, the effects of the modifications appear to differ on the basis of the viral family. In the HIV-1 genome, the single-stranded positive-sense RNA contains m6A modifications at multiple sites in both the untranslated and coding regions. The presence of these modifications in the viral transcript is enough to increase corresponding modifications in host cell mRNA through binding interactions between the HIV-1 gp120 envelope protein and the CD4 receptor in T lymphocytes without causing a corresponding increase in. For HIV-1 and other RNA viral families such as chikungunya, enteroviruses and influenza, studies show both positive and negative roles for m6A modifications in viral replication and infection. For other families, the effects are clearer. For the family Flaviviridae, the modification had a negative role and hindered viral replication. The modification in the respiratory syncytial virus family showed a positive role and enhanced viral replication and infection. Why responses differ within the same family of viruses, and why viral families such as Flaviviridae conserve m6A modifications that negatively impact their cycles, is currently unknown and under investigation. Most RNA viruses carry out their cycles in the cytoplasm, away from the machinery required for writing and erasing m6A modifications, which is housed in the nucleus.
For DNA viruses, which cycle in the nucleus with direct access to this machinery, no clear general positive or negative regulatory role can be attributed to m6A modifications. In the simian virus and hepatitis B virus, different m6A reader complexes were shown to have different regulatory roles, with some having a conserved positive role and others a neutral or negative effect on replication. O-methylation This modification involves the addition of a methyl group to the 2' hydroxyl (-OH) group of the ribose sugar of RNA molecules. In contrast to the m6A modification, it is the ribose sugar, part of the backbone, that is altered rather than the base. It is present in various kinds of cellular RNA, providing coding and structural support. 2'-O-methylation of viral RNA is often accompanied by the addition of an inverted N7-methylguanosine to the phosphate group at the 5' end. These modifications regulate important functions of viral RNA, such as metabolism and interactions with the immune system. Different viruses have their own mechanisms for acquiring this modification. Cytoplasmic RNA viruses such as flaviviruses and coronaviruses encode the enzymes required to catalyze the cap-formation reactions, with some needing a single enzyme for both the 5' cap and 2'-O-methylation, while others, such as poxviruses, require two enzymes. Others, such as influenza virus, can hijack the methylguanosine caps from host cell mRNA and be preferentially translated. 5-methylcytidine (m5C) One viral epitranscriptome modification that has been identified is 5-methylcytidine (m5C). HIV-1 and MLV transcriptomes contain levels of these residues approximately 14-30-fold higher than a cell's normal levels. NSUN2 encodes the cytidine methyltransferase credited with m5C formation in cells and with its amplification in viral epitranscriptomes. NSUN2 affects the translation of viral mRNA, boosting expression of the viral genome. m5C has also been found to alter the splicing pattern and splice locations in the viral transcriptome; this affected the HIV-1 transcript in both early and late infection. Immune system Viral RNA modifications play important roles in interactions with the immune system of host cells. The m6A modification of viral RNAs allows viruses to escape recognition by the retinoic acid-inducible gene I (RIG-I) receptor in the type 1 IFN response, a crucial pathway of innate immunity. 5' N7-methylguanosine capping and 2'-O-methylation also play vital roles in viral infection. The cap structures help viral RNA blend in among modified cellular mRNA and avoid triggering immune responses. References Molecular biology Virology Viral genes
Viral epitranscriptome
[ "Chemistry", "Biology" ]
1,308
[ "Biochemistry", "Molecular biology" ]
76,367,615
https://en.wikipedia.org/wiki/First%20European%20congress%20of%20astronomers
The first European congress of astronomers took place in August 1798 at the Seeberg Observatory. It lasted around ten days. Invitations The Seeberg Observatory, commissioned in 1790 by Franz Xaver von Zach, quickly became a centre of the European astronomical community. Zach corresponded with almost all colleagues in the field, and the observatory he designed was visited often because of its innovative features. At the beginning of 1798, the French astronomer Jérôme Lalande expressed a desire to visit Gotha Observatory, where he hoped to meet the Berlin astronomer Johann Elert Bode. Zach sent invitations to astronomy-related professionals; the meeting was scheduled for early August. Among the invitees were Taddäus Derfflinger (Kremsmünster), Barry (Mannheim), Rüdiger (Leipzig), M. A. David (Prague) and Strnadt. In most cases, these invitations were received positively and supported by the respective sovereigns. However, some feared the influence of revolutionary French ideas. Jurij Vega from Vienna, who was invited by Lalande, was not allowed to travel to Gotha. Johann Hieronymus Schroeter in Lilienthal and Heinrich Wilhelm Olbers in Bremen stayed away on their own initiative because they suspected that the metric system of units was being propagated. Participants These were the participants at the congress: Johann Elert Bode (1747–1826), Berlin, astronomer George Butler (1774–1853), Cambridge, traveller, student Johannes Feer (1763–1823), Zürich, surveyor Ludwig Wilhelm Gilbert (1769–1824), Halle, professor of physics Johann Kaspar Horner (1774–1834), Gotha, assistant to Zach Johann Jakob Huber (1733–1798), Basel, astronomer Georg Simon Klügel (1739–1812), Halle, professor and optician Johann Gottfried Köhler (1745–1800), Dresden, Mathematical-Physical Salon Jérôme Lalande (1732–1807), Paris, astronomer Marie-Jeanne de Lalande (1768–1832), Paris, astronomical calculator Carl Philipp Heinrich Pistor (1778–1847), Berlin, postal secretary and instrument maker Johann Konrad Schaubach (1764–1849), Meiningen, high school director Karl Felix Seyffer (1762–1822), Göttingen, astronomer Johann Heinrich Seyffert (1751–1817), Dresden, finance secretary Johann Friedrich Wurm (1760–1833), Nürtingen, astronomical calculator, priest Franz Xaver von Zach (1754–1832), Gotha, astronomer possibly also Martin van Marum (1750–1837), Haarlem, physician and chemist Proceedings and results Jérôme Lalande arrived at the Seeberg early, on 25 July, together with his niece, the astronomical calculator Marie-Jeanne de Lalande. Most of the other participants followed between the beginning of August and 9 August, when Bode arrived. Wurm and Huber arrived after Bode; Seyffer may have left before 9 August. Zach could accommodate most of the participants in the observatory buildings, but some had to stay at the inn Zur Schelle on Gotha's Hauptmarkt square. On clear evenings, everyone gathered in the Seeberg Observatory for observations and discussions. The scope of the discussions was broad. It was clear from the outset that only closer cooperation could secure the desired successes. Star atlases and the reduction of star positions for aberration and nutation were discussed. Several participants were working on star catalogues and atlases or contributed data. The comparison of instruments brought along, especially chronometers and sextants, was a topic of discussion. An excursion to the Inselsberg on 14 August 1798 provided an opportunity for practical exercises.
Duchess Charlotte of Saxe-Gotha-Altenburg also participated in this working trip. The benefit of a common, decimal system of units (the metric system) and of a common time (Central European Time) was evident to those present, and they adopted these for their work. Introducing these more widely, beyond science, was politically difficult, as it was seen as a product of the French Revolution. Proposals for new constellations were controversial among astronomers. Lalande and Bode had designed new constellations before and brought new proposals to the congress. Others, including Olbers, opposed new constellations. Astronomical journals were likely also discussed. Although there was already the Berliner Astronomisches Jahrbuch, edited by Bode, this series of publications took too long to make new research results known. Further, comparatively little space was given to descriptive texts. Von Zach started editing the Allgemeine Geographische Ephemeriden the same year, 1798. Not on the agenda were emerging fields like spectroscopy, or William Herschel's work on stellar statistics and the structure of the Milky Way. The social gathering was also not neglected. As the duke's brother, Prince August, reported, Lalande's niece's name day was celebrated with a banquet, dance and small cannon. Johann Jakob Huber, who travelled from Basel, fell ill shortly after his arrival and died unexpectedly on 21 August. His son Daniel Huber, who was a mathematician and, like his father, an astronomer, arrived in Gotha and made the acquaintance of Lalande and other scholars. By the end of August 1798 all participants had left. Aftermath A second congress was held in 1800 in Lilienthal, with six participants who, apart from von Zach, were not present in 1798. This meeting founded the Vereinigte Astronomische Gesellschaft, better known as the Celestial police. Eventually, European countries followed the scientists' lead and adopted their standards for units and time. New constellations met with gradually increasing opposition among astronomers but were abolished only in 1925 by the International Astronomical Union, when a variation of the spherical rectangles of John Herschel, Airy and Baily was implemented. The Astronomische Gesellschaft was founded in 1863 in Heidelberg. On the occasion of the 200th anniversary of the first European congress of astronomers, the Astronomische Gesellschaft held its 1998 spring meeting in Gotha. More than 120 astronomers from 15 countries attended. In honour of the anniversary, the asteroid (8130) Seeberg was named. A globe and a metric ruler, presented by Lalande, are among the memorabilia in the Gotha museum of regional history. References History of astronomy Gotha 1798 in science
First European congress of astronomers
[ "Astronomy" ]
1,321
[ "Astronomy conferences", "Astronomy events", "History of astronomy" ]
76,368,098
https://en.wikipedia.org/wiki/Lingyun%20%28rocket%20engine%29
The Lingyun (, lit. Soaring above the clouds) is a gas-generator cycle rocket engine burning liquid methane and liquid oxygen under development by Jiuzhou Yunjian. History In 2018, Jiuzhou Yunjian's Lingyun engine successfully passed tests for its gas-generator, ignitor, and valves. In 2021, the Lingyun engine completed multiple starts and deep variable thrust tests, demonstrating a breakthrough in reusability technology for the company. References Rocket engines of China Rocket engines using methane propellant Rocket engines using the gas-generator cycle
Lingyun (rocket engine)
[ "Astronomy" ]
118
[ "Rocketry stubs", "Astronomy stubs" ]
76,368,225
https://en.wikipedia.org/wiki/Longyun%20%28rocket%20engine%29
The Longyun (, lit. Dragon Clouds) is a gas-generator cycle rocket engine burning liquid methane and liquid oxygen under development by Jiuzhou Yunjian. History In 2018, the gas generator test for the Longyun engine was completed. The engine completed multiple start-up hot test runs in May 2021. In October 2021, launch startup Rocket Pi signed a deal to use the Longyun engine to power its Darwin-1 reusable launch vehicle. Space Epoch conducted a ground test of a 4.2-meter stainless steel stage powered by the Longyun engine in January 2023. In April 2024, Longyun passed a long duration hot firing test with the servo mechanism rotating to the maximum angle of ±8°. The engine has also reportedly passed the customer acceptance test. References Rocket engines of China Rocket engines using methane propellant Rocket engines using the gas-generator cycle
Longyun (rocket engine)
[ "Astronomy" ]
186
[ "Rocketry stubs", "Astronomy stubs" ]
76,368,952
https://en.wikipedia.org/wiki/Figure%20AI
Figure AI, Inc. is a United States-based robotics company specializing in the development of AI-powered humanoid robots. It was founded in 2022 by Brett Adcock, the founder of Archer Aviation and Vettery. Figure AI's team is composed of experts from robotics, artificial intelligence, sensing, perception, and navigation, blending experience from notable companies like Boston Dynamics and Tesla. History In 2022, the company introduced its prototype, Figure 01, a bipedal robot designed for manual labor, initially targeting the logistics and warehousing sectors. In May 2023, the company raised $70 million from investors led by Parkway Venture Capital. On January 18, 2024, Figure announced a partnership with BMW to deploy humanoid robots in automotive manufacturing facilities. In February 2024, Figure AI secured $675 million in venture capital funding from a consortium that includes Jeff Bezos, Microsoft, Nvidia, Intel, and the startup-funding divisions of Amazon and OpenAI. The funding valued the company at $2.6 billion. It also announced a partnership with OpenAI. The collaboration includes OpenAI building specialized AI models for Figure's humanoid robots, allowing them to accelerate Figure's development timeline by enabling its robots to "process and reason from language". Products Figure 02 On August 6, 2024, Figure AI unveiled the next generation of its humanoid robot, named Figure 02, described by the company as the next step in deploying humanoids for industrial use. Figure 02 features a sleeker and slimmer design compared to its predecessor, with integrated cabling in its limbs. The battery capacity has increased by 50% over the previous generation. Additionally, the robot is equipped with 6 RGB cameras paired with an onboard vision language model. Powered by NVIDIA RTX GPU-based modules, its inference capabilities provide three times the computing power of the previous model. It is also equipped with microphones and speakers combined with a custom AI model, developed with OpenAI, to facilitate conversational capabilities with humans. The redesigned five-fingered robotic hands have 16 degrees of freedom (DoF) and the ability to carry objects up to 25 kg. Figure 02 robots are currently deployed to a BMW plant in South Carolina for testing and to collect training data for AI models. References Robotics Artificial intelligence American companies established in 2022 Companies based in Sunnyvale, California
Figure AI
[ "Engineering" ]
484
[ "Robotics", "Automation" ]
76,369,743
https://en.wikipedia.org/wiki/Erysiphe%20platani
Erysiphe platani, also known as sycamore powdery mildew, is a fungus native to North America that now infects sycamore tree species worldwide. Infections may spread rapidly in urban settings with large groups of young trees or in plant nurseries. This mildew thrives when there are high humidity conditions during the growing season. Symptomatic trees show leaf discoloration and puckering as the mildew spreads across buds and leaf surfaces. The most visible effects, which include "leaf curling, stunting, and distortion," appear on vulnerable newly emerged leaves. The infection appears only on leaves; it has no obvious effect on stems and branches. Fertilization and pollarding increase the number of young shoots, which are the parts of the trees most vulnerable to infection. References Erysiphales Fungi of North America Fungal plant pathogens and diseases Fungus species Fungi described in 1874 Taxa named by Elliot Calvin Howe
Erysiphe platani
[ "Biology" ]
193
[ "Fungi", "Fungus species" ]
59,696,444
https://en.wikipedia.org/wiki/Relational%20developmental%20systems
Relational developmental systems (RDS) is a developmental psychological metatheory and conceptual framework. It is an extension of developmental systems theory that is based on the view that relationism is a superior alternative to Cartesian mechanism. RDS is the leading framework in modern developmental science. According to RDS metatheory, interactions between individuals and their environments, rather than either entity acting separately, are the cause of all aspects of human development. The term "relational developmental systems paradigm" has been used to refer to the combination of the RDS metatheory and the relationist worldview. The RDS framework is also fundamentally distinct from that of quantitative behavioral genetics, in that the former focuses on the causes of individual development, while the latter focuses on individual differences. RDS theorists reject the dichotomies associated with Cartesian dualism, such as those between nature and nurture, and between basic and applied science. Origins Relational Developmental Systems (RDS) is a set of rules for theories in developmental psychology. It is based on a worldview known as relationism. Worldviews are approaches taken to understand how the world works. Relationism is a worldview suggesting that no element is separate from the context around it, including its relations to other elements. It opposes the Cartesian worldview, which splits opposite ideas into divisions such as 'nature versus nurture', 'mind versus body' and 'culture versus biology'. Relationism instead forms explanations by combining ideas, even if they are separate and conflicting. RDS and relationism can be combined to form an overall scientific framework for human development. RDS can also be considered an extension of developmental systems theory, which suggests that factors such as genes and the environment interact to influence development. Assumptions Relational Developmental Systems proposes that human development cannot be understood without understanding the multiple relationships between individuals and their biological, psychological, social and historical contexts. It therefore rejects the idea that development is primarily influenced by one factor, such as genetics. Current developmental psychologists explore the various types of relationships between individuals and their context. Individuals can have an active role in choosing which contexts to engage with based on the benefits they provide. From an evolutionary perspective of psychology, this can be beneficial for survival. RDS also emphasises that individuals can constantly develop across their life-span. These changes can occur across time and across locations. RDS also suggests that experiences, thoughts and emotion are influenced by the link between a person, their biology and their culture. Research into RDS involves flexible research approaches considering associations across multiple variables, moving away from methods attempting to explain behaviour in terms of one causal variable. RDS uses longitudinal studies to measure an individual's development across time, as well as methods that consider individuals, rather than variables, as the key focus of the study. Research indicates that RDS-based research approaches do not have to be in conflict with research methods in quantitative behaviour genetics, which is a field considering genes and the environment as separate influences on behaviour.
The 4-H study is a longitudinal study investigating how RDS can explain the development of adolescents' positive behaviours. It investigated factors influencing adolescents' development of five key traits: confidence, caring, connection, character and competence (the ability to perform a task). This study was conducted across 7 years of the adolescents' lives. Researchers found that positive youth development was influenced by contextual factors such as relationships with family and friends, as well as individual factors such as natural motivation and engagement levels. The 4-H study also provided evidence for the individual having an active role in their development. Adolescents were able to optimise their development by adjusting their personal goals and expectations based on the social situation and the environmental resources that they had access to. Applications The principles behind RDS have useful applications for developmental science. RDS presents adolescents in a more positive light than some previous developmental science research, which portrayed adolescents as trouble-makers and poor contributors to society. Therefore, RDS can be used to encourage positive development in adolescents. RDS suggests that due to our ability to constantly change, adolescents have the potential to develop co-operative and considerate behaviours. There has been increasing research into how policies can encourage adolescents' use of this potential by altering the context that individuals are in. Research also applies RDS to understanding the development of adolescents' health. In addition, RDS can be used to understand how senior citizens' involvement with sport can change. Research from the 'European Review of Ageing and Physical Activity' indicated that a combination of individual-related and context-related factors can influence sports participation in the elderly. Individual-related influences on senior citizens' sports involvement included: Income Education level Living arrangements Individuals' competitiveness levels Context-related influences included: The opportunity to connect with others through sport Family members' attitudes towards the individuals' engagement with sport Within developmental psychology, RDS can be applied to understanding multiple forms of development, including our moral development and our development of consciousness. When considering the broader fields of psychology and behavioural science, RDS parallels new approaches taken to evolution and to the mind-body problem. A recent approach to evolution indicates that we can evolve through our genes' constant adaptations to environmental contexts. Findings from the International Journal of Epidemiology link this research to RDS, suggesting that our ability to change over time involves interaction between genes and the environment. The approach of RDS also influences the view that the mind is not separate from the context of our physical body. Research suggests that RDS is currently considered to be the "leading framework in developmental science". It can provide a foundation for recent discoveries in the fields of genetics, evolution and cultural psychology that are based on interactions between elements. Criticisms The approach of RDS to methodology can be practically difficult to commit to and can therefore pose a "challenge" to researchers. It can also be difficult to select the factors influencing each individual's development. In practice, this difficulty creates uncertainty when evaluating programs influencing young people's development.
It can also be hard to apply results from research that gathered data at a particular point in time to the whole duration of a person's development. This is further supported by research from the journal 'Human Development', which suggests that it may be hard to apply research based on RDS to all situations that an individual can be in. Although RDS rejects the Cartesian worldview, this worldview has been "influential" in developmental science in the past. For example, a cognitive approach to the mind-body problem suggesting that the mind is the brain and separate from external environments is "framed" by a Cartesian approach. Moreover, some cross-cultural research in developmental psychology has considered culture as being separate from the individual. References Metatheory Psychological theories Developmental psychology Systems theory
Relational developmental systems
[ "Biology" ]
1,368
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
59,696,933
https://en.wikipedia.org/wiki/Goddard%20Earth%20Observing%20System
The Goddard Earth Observing System (GEOS) is an integrated Earth system model and data assimilation system developed at the Global Modeling and Assimilation Office (GMAO) at NASA's Goddard Space Flight Center. The components of the model use the Earth System Modeling Framework (ESMF), enabling them to be connected in a flexible manner and supporting the investigation of many different aspects of Earth science, in particular questions related to coupled processes involving the atmosphere, ocean, and/or land. Uses of GEOS span a range of spatiotemporal scales and include the representation of dynamical, physical, chemical and biological processes. References Earth system sciences Goddard Space Flight Center Numerical climate and weather models Weather prediction
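ESMF components share a common initialize/run/finalize life cycle and exchange data only through import and export states, which is what allows the flexible coupling described above. The following Python sketch is purely an illustrative analogy of that coupling pattern, not the actual ESMF API (ESMF itself is a Fortran/C framework), and every class, method, and field name in it is hypothetical.

# Hypothetical sketch of ESMF-style coupling: each component exposes the
# same initialize/run/finalize interface and communicates only through
# shared state dictionaries, so a driver can wire components together
# without knowing their internals.
class Component:
    def initialize(self, import_state, export_state): ...
    def run(self, import_state, export_state): ...
    def finalize(self): ...

class Atmosphere(Component):
    def run(self, import_state, export_state):
        # read the sea-surface temperature supplied by the ocean component
        sst = import_state.get("sea_surface_temp", 288.0)  # kelvin
        # export a toy surface wind field derived from it
        export_state["surface_wind"] = 0.05 * (sst - 273.15)

class Ocean(Component):
    def run(self, import_state, export_state):
        wind = import_state.get("surface_wind", 0.0)
        # a toy response of sea-surface temperature to the wind forcing
        export_state["sea_surface_temp"] = 288.0 + 0.1 * wind

def drive(components, steps):
    shared = {}  # plays the role of the coupled import/export states
    for comp in components:
        comp.initialize(shared, shared)
    for _ in range(steps):
        for comp in components:
            comp.run(shared, shared)
    for comp in components:
        comp.finalize()

drive([Atmosphere(), Ocean()], steps=3)

Because every component presents the same interface, the driver can couple any subset of them (atmosphere alone, atmosphere plus ocean, and so on) without changing their internals, mirroring how GEOS supports investigations of different coupled processes.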
Goddard Earth Observing System
[ "Physics" ]
142
[ "Weather", "Weather prediction", "Physical phenomena" ]
59,697,221
https://en.wikipedia.org/wiki/Madeleine%20Nobbs
Madeleine Marie Nobbs (14 December 1914 – 10 December 1970) was a building services engineer, responsible for the reprovision of services to the Old Bailey in London after the Second World War, and president of the Women's Engineering Society (1959–60). Life Nobbs started her working life as a shorthand typist but very much felt that this was the "wrong job" for her. Her father, Walter William Nobbs, was a well known London heating and ventilation engineer, who had worked on many London buildings, including New County Hall for the (then) London County Council, the new premises for the RIBA and the headquarters of the (then) IEE at Savoy Hill, as well as being President of the Institution of Heating and Ventilating Engineers in 1920, and his father was a civil engineer. This family history, a belief that her personal talents lay in mathematics and geometry, and a book she had read about technical drawing encouraged Madeleine to declare that she wished to be an engineer. Her father was somewhat skeptical, as he feared that the drinking culture of consulting engineering would not be conducive to progression by his daughter. However, after her mother (Francoise Leonie Thebault) met with Adria Buchanan, the first woman to become a member of the Institution of Heating and Ventilating Engineers, she interceded, telling her husband "with a charming smile and in her delightful French accent" that he could no longer refuse to support his daughter in her ambition. Madeleine persuaded a firm of heating and ventilation engineers that she would be suited to the drawing office, where she started as a tracer (tracing architectural drawings to copy them and add details for heating and ventilating systems). Her studies at Borough Polytechnic enabled her to progress to H&V work for an architect's office, including estimating and supervising installation. During the Second World War she designed air raid shelters, factories and boat ventilation. In her spare time, she drove an ambulance. She went on to do a variety of jobs that allowed her to get practical bench and site experience until she was a fully qualified engineer, and joined her father's firm as a junior partner in 1945. Her father died in 1951, whilst engaged upon a major contract for the rebuilding of the Central Criminal Court, Old Bailey, after war damage, so Madeleine stepped up to become senior partner of W. J. Perkins & Partners, Consulting Engineers, took over the firm on her own account, and completed the contract. Women's Engineering Society She joined the Women's Engineering Society in 1941 and was soon active on the council, running the London Branch (1950–52) and becoming president in 1959. She succeeded Marjorie Bell in the role and was succeeded in turn by Isabel Hardwich. She contributed many papers on heating and ventilation to The Woman Engineer and was active as a full member of a number of other engineering institutions. Although she was unable to attend the inaugural International Conference of Women Engineers and Scientists (ICWES) in New York in 1964, she did produce a most detailed survey of women engineers in the UK for the Congress. Her work was cited by the survey by the Institution of Civil Engineers in 1971. Personal life Nobbs married Denis Moody, also an engineer, in 1961. After Moody's death a few years later, she immersed herself in major building work to convert an old barn in Ipsden, Oxfordshire, into a home, doing most of the work herself. Nobbs died suddenly in 1970 at the age of 55, four days before her 56th birthday.
References 1914 births 1970 deaths People in building engineering British women engineers Presidents of the Women's Engineering Society English civil engineers Women's Engineering Society
Madeleine Nobbs
[ "Engineering" ]
742
[ "Building engineering", "People in building engineering" ]
59,697,878
https://en.wikipedia.org/wiki/Journal%20of%20Complex%20Networks
The Journal of Complex Networks is a peer-reviewed academic journal covering complex networks. It is published by Oxford University Press. The journal was established in 2013, with Ernesto Estrada as its editor-in-chief. References Oxford University Press academic journals Graph theory journals
Journal of Complex Networks
[ "Mathematics" ]
52
[ "Mathematical relations", "Graph theory", "Graph theory journals" ]
59,699,993
https://en.wikipedia.org/wiki/Rehabilitation%20psychology
Rehabilitation psychology is a specialty area of psychology aimed at maximizing the independence, functional status, health, and social participation of individuals with disabilities and chronic health conditions. Assessment and treatment may include the following areas: psychosocial, cognitive, behavioral, and functional status, self-esteem, coping skills, and quality of life. As the conditions experienced by patients vary widely, rehabilitation psychologists offer individualized treatment approaches. The discipline takes a holistic approach, considering individuals within their broader social context and assessing environmental and demographic factors that may facilitate or impede functioning. This approach, integrating both personal (e.g., deficits, impairments, strengths, assets) and environmental factors, is consistent with the World Health Organization's (WHO) International Classification of Functioning, Disability and Health (ICF). In addition to clinical practice, rehabilitation psychologists engage in consultation, program development, teaching, training, public policy, and advocacy. Rehabilitation psychology shares some technical competencies with the specialties of clinical neuropsychology, counseling psychology, and health psychology; however, Rehabilitation Psychology is distinctive in its focus on working with individuals with all types of disability and chronic health conditions to maintain/gain and advance in vocation; in the context of interdisciplinary health care teams; and as social change agents to improve societal attitudes toward individuals living with disabilities and chronic health conditions. Rehabilitation psychologists work as advocates with persons with disabilities to eliminate attitudinal, policy, and physical barriers and to emphasize employment, environmental access, social role, and community integration.   Rehabilitation psychologists provide clinical services in varied healthcare settings, including acute care hospitals, inpatient and outpatient rehabilitation centers, assisted living centers, long-term care facilities, specialty clinics, and community agencies. They typically work in interdisciplinary teams, often including a physiatrist, physical therapist, occupational therapist, and speech therapist. A nurse, social worker, prosthetist, chaplain, and case manager also may be included depending on individual needs. Members of the team work together to create a treatment plan, set goals, educate both the patient and their support network, and facilitate discharge planning. In the United States, the specialty of Rehabilitation Psychology is coordinated by the Rehabilitation Psychology Specialty Council (RPSC), which comprises five professional organizations that represent the major constituencies in Rehabilitation Psychology: Division 22 of the American Psychological Association (APA), the American Board of Rehabilitation Psychology (ABRP), the Foundation for Rehabilitation Psychology (FRP), the Council of Rehabilitation Psychology Postdoctoral Training Programs (CRPPTP), and the Academy of Rehabilitation Psychology (ARP). RPSC represents the specialty to the Council of Specialties in Professional Psychology (CoS). Rehabilitation Psychology is its official journal. Rehabilitation Psychology is certified as one of 14 specialty competencies by the American Board of Professional Psychology (ABPP). History The specialty of rehabilitation psychology was established well before psychologists were regularly involved in healthcare settings. 
In the 1940s and 1950s, psychologists became increasingly involved in caring for persons with disabilities, often the result of combat injuries. Advances in medical care had led to an increased number of people surviving injuries and illnesses that would have been fatal in previous generations. Individuals living with disabilities and chronic health conditions needed help to adjust, and rehabilitation psychology emerged to meet these needs using psychological knowledge to help maximize independence, health, and welfare. In 1954, the Vocational Rehabilitation Act was passed, providing grant funding for research and program development. As a result of this act, many universities opened vocational rehabilitation counseling programs within their graduate schools. In 1958, Rehabilitation Psychology was established as Division 22 of the American Psychological Association, as an organization of psychologists concerned with the psychological and social consequences of disability, and with the development of ways to prevent and resolve problems associated with disability. By the 1960s, rehabilitation psychology was considered a mature specialty and was prominent throughout the United States. However, it was not until 1997 that the American Board of Professional Psychology approved the establishment of the American Board of Rehabilitation Psychology. Key principles and models Theoretical models are important in rehabilitation psychology for understanding and explaining impairments, aiding treatment planning, and facilitating the prediction of outcomes. Models help organize, understand, explain, and predict phenomena. The models used integrate information from a number of disciplines, such as biology, psychology, and sociology. A wide array of models is needed because of the diverse problems and concerns faced by individuals with disabilities and chronic health conditions. Often, more than one model must be applied to properly understand an individual's condition. Biopsychosocial model: The biopsychosocial model examines the interaction of medical conditions, psychological stressors, the environment, and personal factors to understand an individual's adaptation to disability. This interdisciplinary model is an acknowledgement that disability can only be understood within a larger context, and reflects the longstanding belief of rehabilitation psychologists that cultural attitudes and environmental barriers influence an individual's adaptation and accentuate disability. Notably, the tenets of this model are reflected in the World Health Organization's International Classification of Functioning, Disability and Health (ICF). The framework is holistic, and to apply it, providers must learn about the disabled person's home life and broader social context. Psychoanalytic model: In the context of rehabilitation psychology, Freud's concept of castration anxiety can be applied to severe losses, such as the loss of a limb. This concept is reflected in Jerome Siller's stage theory of adjustment, designed to increase understanding of acceptance and adjustment following sudden disability. Social psychology: The pioneers in rehabilitation psychology were a diverse group, but many came from the field of social psychology. Kurt Lewin is one example. Lewin's experiences as a Jew living in Germany during the early years of the Nazi regime shaped his psychological work. This is reflected in his conceptualization of the insider-outsider distinction, as well as his understanding of stigma.
Lewin is known for his conceptualization B = f(p,e), where behavior (B) is a function of both the person (p) and their environment (e). Tamara Dembo and Beatrice Wright, two of Lewin's students, are recognized as pioneering figures in the history of rehabilitation psychology. Wright authored two of the field's seminal texts, Physical Disability: A Psychological Approach and the extensively revised second edition, Physical Disability: A Psychosocial Approach. She also proposed the somatopsychological model, which advocates for interpreting disability within its social context. The somatopsychological model is derived from Lewin's field theory and holds that the environment can either aid or hinder an individual's adjustment. Wright's insights and her articulation of the beliefs and principles underlying rehabilitation psychology practice have come to be known as the "foundational principles of rehabilitation psychology" and her work continues to inform contemporary rehabilitation psychology research, theory, and practice. Cognitive-Behavior Theory: Cognitive behavioral therapy (CBT) approaches such as problem-solving treatment have shown promise in promoting adjustment, well-being, and overall health among individuals with disabilities and chronic health conditions. This model holds that thoughts and coping strategies directly impact feelings and behaviors. By emphasizing, identifying, and changing maladaptive thoughts, CBT works to change an individual's subjective experience and their resulting behavior. A variety of empirical studies have demonstrated CBT's effectiveness in cases of traumatic brain injury, spinal cord injury, and a variety of other conditions common to individuals living with disability and chronic health conditions. Clinical specialty areas In clinical settings, rehabilitation psychologists apply psychological expertise and skills to improve outcomes for individuals living with disabilities or chronic health conditions. Common populations treated include individuals with: AIDS Acquired brain injury Cancer Chronic pain Concussion Limb loss Multiple sclerosis Neuromuscular disorders Spinal cord injury Stroke Traumatic brain injury When addressing these chronic health conditions and disabilities, rehabilitation psychologists offer a variety of services with the goal of increasing an individual's functioning and quality of life. Specific services may include: Assessment To enhance the rehabilitation process, one must not only identify barriers to recovery, but also personal strengths and resiliency factors that foster continued recovery and social reintegration. Rehabilitation psychology's focus on personal strengths and resiliency has been influential in the field of positive psychology. Rehabilitation psychologists take into consideration the medical diagnosis, referral question, background history, pre-morbid functioning (independence with basic and instrumental activities of daily living), current functioning (physical, cognitive, psychological), personality characteristics, and goals (career, academic, personal). Depending upon the referral question and individual patient goals, a structured and focused assessment may include any combination of the following components: cognitive function (decisional capacity, mental status, neurocognitive function); physical function (fatigue, health behavior, pain, sleep); psychological function (emotional adjustment, interpersonal/social functioning, personality, mental health conditions). 
Aspects of the individual's environment also are assessed, including cultural, community, home, rehabilitation, school, vocational, and social environments. In addition to clinical assessment and interview, standardized measures can be helpful for understanding each of these component areas in greater detail. Specifically, rehabilitation psychologists use data from standardized cognitive assessments to assess both cognitive limitations and positive cognitive abilities such as problem-solving skills. Cognitive rehabilitation Cognitive rehabilitation, also known as cognitive remediation therapy or neuropsychological rehabilitation, refers to the broad range of evidence-based interventions designed to improve cognitive functioning impaired as a result of changes in the brain due to injury or illness. Because of their specialized training in the nuances of impaired cognitive abilities, within the context of personality and emotional factors, rehabilitation psychologists are uniquely qualified to provide interventions for cognitive, behavioral, and psychosocial difficulties following brain injury. Cognitive rehabilitation interventions have been used with people who have sustained brain injury, stroke, brain tumor, Parkinson's disease, multiple sclerosis, mild cognitive impairment, ADHD, and a variety of other medical conditions that affect cognitive functioning. Cognitive functions targeted may include processing speed, attention, memory, language, visual-perceptual skills, and executive functioning skills such as problem solving and emotional self-regulation. Cognitive rehabilitation can include computer-based tasks, with the caveat that such tasks are most effective when administered under the guidance of a trained clinician in an individualized setting. Consistent with the foundational principles of rehabilitation psychology, contemporary rehabilitation psychology approaches to cognitive rehabilitation incorporate the subjective experience of the patient while targeting meta-cognition or self-regulation. The ultimate goal of all cognitive rehabilitation interventions is to improve the everyday functioning of people in the setting in which they live or work, consistent with their own values and priorities. Ethical and legal considerations Rehabilitation psychologists adhere to the same general principles and ethical codes of conduct as all psychologists, under guidelines set forth by the American Psychological Association (http://www.apa.org/ethics/code/). Rehabilitation psychologists also must follow federal laws relevant to individuals with disability. Rehabilitation psychologists often are faced with ethical and legal considerations when assisting patients with concerns such as end-of-life decision making, ability to return to driving (e.g., following acquired brain injury, stroke, or other medical conditions that may impair driving ability), and the role of faith/religion in the individual's health-care decision making. Relevant federal legislation includes: Rehabilitation Act of 1973: This Act prohibits discrimination against persons based on disability status in programs conducted by Federal agencies, those receiving Federal financial assistance, in Federal employment, and in the employment practices of Federal contractors. Americans with Disabilities Act (ADA): This Act was an extension of the Rehabilitation Act of 1973.
The ADA's five titles prohibit discrimination on the basis of disability in employment, government, public and commercial facilities, transportation, and telecommunications. Health Insurance Portability and Accountability Act (HIPAA): This Act was initiated in 1996 in an effort to protect the privacy of patient information. It affects rehabilitation psychologists in a variety of important ways and occasionally contradicts aspects of the APA Ethical Code. For example, under the Act, tests designed to measure psychological and neurocognitive function may not be released to the general public. Instead of releasing the tests themselves, rehabilitation psychologists typically provide summaries of the data, interpretation, and treatment recommendations. Education and training In the United States, rehabilitation psychologists complete doctoral degrees (e.g., PhD or PsyD) in fields such as clinical psychology, counseling psychology, neuropsychology, or school psychology, plus pre-doctoral and post-doctoral clinical training in healthcare settings. Rehabilitation psychologists must be licensed in order to provide services in their state of practice and to receive reimbursement from health insurance payers. In most states, obtaining a license requires a doctoral degree from an approved program, a minimum number of hours of supervised clinical experience, and a passing score on the Examination for Professional Practice in Psychology (EPPP), a standardized knowledge-based examination. Most states also require a prescribed number of continuing education credits per year to renew a license. By the 1960s, the need for standardized guidelines for postdoctoral training in rehabilitation psychology was recognized during the specialty's national conferences. The APA Division of Rehabilitation Psychology (Division 22) and the American Congress of Rehabilitation Medicine spent four years developing guidelines leading up to the 1992 Ann Arbor Conference on Postdoctoral Training in Professional Psychology. Patterson and Hanson outlined the entrance requirements, training length, curriculum requirements, supervision, and evaluations: Trainees are accepted only from doctoral programs approved by the American Psychological Association. Minimum length of training is one year There are a minimum of two supervisors during training Curriculum includes supervised practice, seminars, and coursework Patient populations and didactics are related to disabilities and chronic health conditions There is a minimum of two hours of supervision per week All trainees are funded There are written objectives for the training program Formal trainee evaluations occur at least twice a year Program evaluations occur annually In 1997, the American Board of Professional Psychology approved the establishment of the American Board of Rehabilitation Psychology. Subsequently, the board elaborated on the guidelines from 1995 by requiring a board certification that assesses an individual on the expected competencies. Expected competencies were the capability to assess and treat disability adjustment, cognitive functioning, personality functioning, family functioning, social environment, social functioning, educational functioning, vocational functioning, recreational functioning, sexual functioning, substance abuse, and pain. 
In addition to displaying these competencies, rehabilitation psychologists are expected to collaborate and consult with other rehabilitation professionals within the interdisciplinary team throughout the treatment process. The ABRP Board Certification process recognizes, certifies, and promotes competence in the specialty. The American Board of Professional Psychology specifies that in order to meet the standards of the specialty, an individual must complete a recognized internship program, have three years of experience within the field, and have supervised experience within the specialty. Notable rehabilitation psychologists Roger Barker Tamara Dembo Beatrice Wright Stephen T. Wegener See also Neurorehabilitation Rehabilitation Psychology (journal) References External links American Board of Rehabilitation Psychology Foundation for Rehabilitation Psychology Council of Rehabilitation Psychology Postdoctoral Training Programs Council of Specialities in Professional Psychology Applied psychology Behavioural sciences Health care occupations
Rehabilitation psychology
[ "Biology" ]
3,074
[ "Behavioural sciences", "Behavior" ]
59,700,043
https://en.wikipedia.org/wiki/NGC%203511
NGC 3511 is an intermediate spiral galaxy located in the constellation Crater. It is located at a distance of about 45 million light years from Earth, which, given its apparent dimensions, means that NGC 3511 is about 70,000 light years across. It was discovered by William Herschel on December 21, 1786. It lies two degrees west of Beta Crateris. NGC 3511 features two very diffuse, thick, and patchy spiral arms that emanate from the bulge, while there are also other spiral arm fragments. Dark dust lanes can be seen across the spiral pattern. The bulge appears elliptical and is weak. The galaxy is seen at a high inclination, estimated to be 70°. In the centre of the galaxy lies a supermassive black hole, whose mass is estimated to be 10^(6.46 ± 0.33) solar masses (1.3–6.2 million; see the note following this article), based on the pitch angle of the spiral arms. The galaxy had been classified as a type 1 Seyfert galaxy; however, it features only narrow emission lines, and has been reclassified as an HII region galaxy. The Infrared Spectrograph (IRS) on the Spitzer Space Telescope has detected polycyclic aromatic hydrocarbon (PAH) emission. NGC 3511 forms a pair with NGC 3513, which lies 10.5 arcminutes away from NGC 3511. The two galaxies form a small group, known as the NGC 3511 group, which also includes the galaxy ESO 502-024. See also NGC 4088 and NGC 2427 - two similar spiral galaxies References External links NGC 3511 on SIMBAD Intermediate spiral galaxies Crater (constellation) 3511 UGCA objects 33385 Astronomical objects discovered in 1786 Discoveries by William Herschel
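Note: a quick sanity check (not from the article) on the black-hole mass figure quoted above. A base-10 logarithmic mass of 6.46 with a ±0.33 dex uncertainty converts to the stated 1.3–6.2 million solar-mass range:

```python
# Convert the logarithmic black-hole mass estimate log10(M / M_sun) = 6.46 +/- 0.33
# into a linear mass range, reproducing the "1.3 - 6.2 million" figure quoted above.
log_mass, err = 6.46, 0.33

central = 10 ** log_mass          # best estimate
low     = 10 ** (log_mass - err)  # lower bound
high    = 10 ** (log_mass + err)  # upper bound

print(f"central: {central / 1e6:.1f} million solar masses")                  # ~2.9
print(f"range:   {low / 1e6:.1f} to {high / 1e6:.1f} million solar masses")  # ~1.3 to 6.2
```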
NGC 3511
[ "Astronomy" ]
358
[ "Crater (constellation)", "Constellations" ]
59,700,258
https://en.wikipedia.org/wiki/Lola.com
Lola.com is a software as a service (SaaS) company based in Boston, Massachusetts. It is best known for developing corporate travel management and expense software for web browsers, the App Store and Google Play. The company was founded in 2015 by former Kayak.com executives, Paul M. English and Bill O'Donnell. The website operates under a travel agency model for hotel and flight search information as well as booking services for businesses. It also has administrative analytics on employee travel and associated costs. Lola has received more than $80 million in funding since its foundation. History In July 2015, Blade, a Boston-based incubator, began focusing on a single startup. By December, English announced that Lola had emerged from stealth mode. The company's name was derived from a combination of the words "latitude" and "longitude". It acquired HopOn, a travel booking company, in 2015 and Room77, a hotel metasearch website, in 2016. The company launched an iOS application in April 2016 where users chatted with human travel agents. That same month, it completed a $20 million Series A funding round led by General Catalyst and Accel. The company had more than $44 million in total funding after a December 2016 Series B round led by Charles River Ventures. GV and Tenaya Capital each invested $5 million in the round, while previous investors General Catalyst and Accel also participated. In July 2017, Lola had its second major release on iOS and the Android operating system. This iteration of the application focused on business travel by adding self-service hotel and flight booking and personalized travel recommendations. In July 2018, English announced he would assume the role of chief technology officer at Lola, with Mike Volpe, the chief marketing officer at Cybereason, becoming the company's chief executive officer. Lola announced a five-year exclusive partnership with American Express Global Business Travel in November 2018 to sell its travel management software. In March 2019, the company announced a $37 million Series C round led by General Catalyst and Accel. The round also included participation from all previous investors – Charles River Ventures, GV, and Tenaya Capital. In February 2021, due to the impact of the COVID-19 pandemic on tourism, the company pivoted to developing software for the financial technology industry. Lola ceased operations in September 2021 according to a notice on its website citing "new things to come" for the company. In October 2021, it was announced that Capital One was acquiring Lola. References External links Official website From CMO To CEO: Lola.com's Mike Volpe Software distribution Software industry Companies based in Boston
Lola.com
[ "Technology", "Engineering" ]
541
[ "Computer industry", "Software industry", "Software engineering" ]
59,700,893
https://en.wikipedia.org/wiki/Antenna%20types
This article provides a summary description of many of the different antenna types used for radio receiving or transmitting systems. Different types of antennas are made with properties especially optimized for particular uses, and the electrical design of antennas serves as a way to group them: Most often, the greatest design constraint is the size (wavelength) of the radio waves the antenna is to intercept or emit. A competing second influence is optimization criteria for either receiving or for transmitting; the distinction has practical differences for shortwaves and longer wavelengths. A competing third criterion is the number and bandwidth of the frequency bands that a single antenna intercepts or emits. A fourth design goal is to make the antenna directional: To project or intercept radio waves from only one vertical and / or horizontal direction as exclusively as possible. Antenna categories and article section summary This section lists the article's main sections and subsections in the order that they occur. Each group of antennas fits together due to some commonly used electrical operating principle: In at least one regard, the grouped antennas all work in the same way. The list below summarizes the several parts of this article; the bold-face links lead into the named sections and subsections. In turn, the links within the linked sections themselves lead further on, to other Wikipedia antenna articles. Antennas can be classified in various ways, and various writers organize the different aspects of antennas with different priorities, depending on whether their text is most focused on specific frequency bands; or antenna size, construction, and placement feasibility; or explicating principles of radio theory and engineering that underlie, guide, and constrain antenna design. The classification and sub-classifications below follow those typically used in most antenna engineering textbooks. Simple antennas There are three types of "simple antennas": dipoles, monopoles, and loops. The three simple antenna types are all typically (but not necessarily) used on frequencies where they self-resonate. "Simple" antennas are also used as building-blocks for the more complicated antenna types, such as composite antennas, which is analogous to using multiple simple optical lenses to make a single compound lens. Simple antennas are usually further subdivided into Linear antennas ("electric" antennas) "Straight-wire" or "straight-line" antennas are on rare occasions called "electric antennas", since they exclusively couple to the electric part of the electromagnetic radio waves that they emit and absorb. (The term "linear" is not strict: End-sections of a linear antenna far from its center can be bent away from a straight line with only slight or unnoticeable loss in performance.) Dipole Two-armed antennas, like "rabbit ears". For resonance, each arm is slightly under a quarter-wave base to end, which makes the whole antenna nearly a half-wave end to end. Monopole Single-armed antennas, like a single "telescoping" antenna. At the lowest resonant frequency that arm is slightly under a quarter-wave long. Both dipoles and monopoles are often built large enough to be self-resonant; usually each arm is a quarter-wave long. However, a few types of linear antennas are specifically made too small to resonate – short whip antennas, and unplanned random wire antennas, for example. 
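To make the wavelength-driven sizing concrete, here is a minimal sketch (an illustration, not part of the original article) that estimates self-resonant lengths for the half-wave dipole and quarter-wave monopole just described. The 0.95 shortening factor is a common rule-of-thumb assumption for wire end effects, since the text above only says each arm is "slightly under" a quarter-wave; exact lengths depend on wire thickness and height above ground.

```python
# Rough resonant-length estimates for simple wire antennas.
# The ~0.95 end-effect factor is an assumed rule of thumb, not an exact value.
C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_mhz: float) -> float:
    return C / (freq_mhz * 1e6)

def half_wave_dipole_m(freq_mhz: float, k: float = 0.95) -> float:
    """Approximate end-to-end length of a self-resonant half-wave dipole."""
    return k * wavelength_m(freq_mhz) / 2

def quarter_wave_monopole_m(freq_mhz: float, k: float = 0.95) -> float:
    """Approximate height of a self-resonant quarter-wave monopole over ground."""
    return k * wavelength_m(freq_mhz) / 4

for f in (7.1, 14.2, 146.0):  # assumed example frequencies, in MHz
    print(f"{f:6.1f} MHz: dipole ~{half_wave_dipole_m(f):5.2f} m, "
          f"monopole ~{quarter_wave_monopole_m(f):5.2f} m")
```

As the output shows, halving the frequency doubles the required element length, which is why the lowest bands force the compromises (whips, random wires, loading) discussed throughout this article.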
Loop antennas ("magnetic" antennas) Loops are ring-like antennas made out of segments of wire or metal tubing bent into a circle or polygon – any regular or irregular two-dimensional figure that closes in on itself. On rare occasions all loops are generically called "magnetic antennas", since they exclusively interact with the magnetic portion of the radio waves passing through them. Large loops "Large" loops are loop antennas whose perimeter is slightly over one full wavelength at their design frequency; they are naturally resonant on all frequencies that are whole number multiples of that design frequency. Halo antennas "Halos" are loops with a small gap cut in them, that naturally resonate at the frequency where their perimeter length is a half-wavelength. Small loops "Small" loop antennas are loops of wire or metal tubing designed for use as antennas at frequencies where their perimeter is smaller than a half-wave; they are not naturally resonant on any frequency they are used on, and must be resonated artificially, usually by attaching a capacitor across their feedpoint. Composite antennas Composite antennas are made by combining one or more simple antenna(s) either with other simple antenna(s) or with some kind of a reflecting surface formed into a screen, or curtain, or curved dish. Usually only one of the component antennas is resonant on the design frequency, and in that typical case, the feedline connects only to the resonant component. Broadbanded composite antennas Antennas can be made to be "broadband" or "wideband" in several different ways. Perhaps the most common method of broadbanding is to combine two or more different antennas, connected at a single shared feedpoint, with each separate component readily accepting transmit power on a different collection of frequencies. The combined antenna then operates tolerably well on at least twice as many frequencies as a simple antenna. Array antennas Array antennas are made out of combinations of several simple antennas that function as a single antenna; most compact but highly directional / "high gain" / beam antennas are some type of an array antenna Aperture antennas Aperture antennas are made of an outer, surrounding reflective surface many wavelengths wide, whose shape concentrates waves striking the surface onto a small, inner, simple antenna; the inner antenna can be either resonant or non-resonant and of any type The two subtypes of composites, array and aperture antennas, are otherwise not especially closely related, and are often separately listed as distinct types. Traveling wave antennas Traveling wave antennas are notably one of the few types of antennas that are normally not self resonant: Electrical waves induced by received radio waves travel through the antenna wire in the direction that the arriving RF signals are travelling. Only electrical waves traveling toward the feedpoint are collected; waves traveling away from the feedpoint are grounded through a terminating resistor at the end of the wire opposite the feedpoint. The resistive termination makes the antenna receive in only one direction, similar to an aperture antenna but much simpler to build. In order to make them even more directional, they are made several wavelengths long, hence unsteerable. Absorption in the terminating resistor makes them inefficient radiators, but still sometimes used for transmitting since they work on any frequency. 
"Other" antennas Inevitably some antennas won't conveniently fit into any one basic type, so the last section on real antennas is an "everything else" category for a few peculiar antennas that don't fit cleanly into any of the categories or subcategories used in this article; for example, random wire antennas and antennas that are laid down on the ground instead of raised up in the air. Isotropic antenna The last section is for a unique type of "fake" antenna, called an isotropic antenna or isotropic radiator. It is a convenient fiction used as a "worst possible case" to compare the directivity performance of real antennas against. Although no real antenna can be exactly isotropic, a few antennas are built to be as near to isotropic as possible; they are used for emergency backup antennas and for test equipment for other antennas: Because the received and transmitted signal strength is (almost) the same in every direction, they work without any need for them to be any better than very crudely oriented, if at all. Simple antennas The category of simple antennas consists of dipoles, monopoles, and loop antennas. Nearly all can be made with a single segment of wire (ignoring the break made in the wire for the feedline connection). Dipoles and monopoles called linear antennas (or straight wire antennas) since their radiating parts lie along a single straight line. On rare occasions they are called electric antennas since they engage with the electric part of RF radiation, in contrast to loops, which correspondingly are magnetic. Dipoles The dipole consists of two conductors, usually metal rods or wires, usually arranged symmetrically, end-to-end, with one side of the balanced feedline from the transmitter or receiver attached to each, and usually elevated as high as feasible above the ground. Some varieties of dipoles differ only in having off-center feedpoints or feedpoints at their ends, others vary the alignment or shape of the dipole arms. Although dipoles are used alone as omnidirectional antennas, they are also a building block of many other more complicated directional antennas. Half-wave dipole The most common type of dipole consists of two resonant elements, each just under a quarter wavelength long, hence a total length of about a half-wave. This antenna radiates maximally in directions perpendicular to the antenna's axis, giving it a small directive gain of 2.15 dBi (2.15 dBi means that in the direction of maximum radiation, signal strength is 1.64× the signal from a directionless "isotropic" antenna). Doublet "Doublet" is a name radio amateurs sometimes use for a dipole antenna that is used on a frequency below the antenna's lowest self-resonance. It is not necessary for an antenna to be resonant to transmit well, rather resonance is preferred to easily feed power to it; using a transmatch may make feeding power to an antenna on its nonresonant frequencies possible. Some "doublets" are carefully sized to avoid resonance, in order to make impedance matching less challenging. (The term is "doublet" is not strictly distinguished; many use it as a synonym for "dipole".) Folded dipole A typical folded dipole is two half-wave dipoles mounted parallel to each other, a few inches apart, with the far ends connected. Only one of the dipoles is fed, and the second dipole connects straight through the center where the first has the usual feedpoint. 
The two-wire version is often described as a "squashed loop antenna", since the total length of wire is one wavelength, and the efficiency / radiation resistance of the folded dipole is very high: 4× that of a single dipole, analogous to the high efficiency of large loops. Any number of similar parallel wires may be added, with the radiation resistance (and with it the efficiency) rising as the square of the number of parallel wires; hence a three-wire folded dipole would have 9× the radiation resistance of a single dipole. Inverted-'V' antenna When the two arms of a dipole are individually straight, but bent towards each other in a 'V' shape, at an angle noticeably less than 180°, the dipole is called a 'V' antenna, and when the dipole arms' ends sit closer to the ground than their center branch-point, the antenna is called an inverted-'V' ('Λ'). The inverted-'V' is popular since it provides some of the good electrical performance of a dipole, but only requires erecting one high mounting point, whereas an ordinary dipole requires at least two, often three. Due to ground reflections the inverted-'V' tends to be mostly omnidirectional, but depending on the center angle, can be slightly directional along the midline between the two arms of the 'V'. Sloper A sloper or sloper dipole is a half-wave wire slanting down from a single elevated mounting point. It is usually fed at its center with the feedline cable itself slanting away at a perpendicular counter-slope from the sloping antenna wire, towards a small pole or a ground anchor near the base of the mast. The sloper's far end is attached by a cord to a short pole or fastened by an insulated cord to a ground anchor. It is popular because it requires only a single mast, and with a good ground system below it, has a nearly omnidirectional pattern. Modern Windom antenna More formally called an off-center-fed dipole. The modern 'Windom' is a dipole which is fed approximately one third of the distance from one of its ends, but otherwise erected like an ordinary dipole, including most dipole variations (such as inverted-'V' and sloper dipoles). The strategically chosen offset feed location has a fairly high impedance, but fortuitously shows roughly the same high impedance on most of its harmonics. The Windom antenna is popular because it has all of the advantages of an ordinary dipole, but functions well on almost twice as many shortwave frequencies as an identical sized center-fed dipole. The price for the extra working frequencies is the matching network needed to accommodate a feed impedance 5–7 times higher than the standard 50 Ohm transmitter impedance. End-fed dipole A dipole can be fed from very near its end (needing to be only a small fraction of the dipole length from the actual end) but the end impedances are exceedingly high – a few thousand ohms, depending on the average height of the antenna and thickness of its wire. The end location has an inconveniently high impedance, but it is roughly the same high impedance for all the harmonics, and accommodation for any one harmonic will be nearly right for all the other harmonics. The benefit of the extensive measures needed for matching to the high impedance is that the antenna can then function well on every harmonic (no exceptions, unlike a "Windom"), and hence can be used for transmitting on twice as many frequencies as a same-size center-fed dipole (for which only the odd harmonics are usable). Turnstile Two dipole antennas mounted at right angles, fed with a phase difference of 90°. 
This antenna is unusual in that it radiates in all directions (no nulls in the radiation or reception pattern), with horizontal polarization in directions coplanar with the elements, circular polarization normal to that plane, and elliptical polarization in other directions. Used for receiving signals from satellites, as circular polarization is used by most satellites for both transmit and receive, and since it can emit and receive signals in all directions, it can operate from a simple, fixed mount, without needing to be aimed or steered towards the target satellite. Patch (microstrip) A type of antenna with elements consisting of metal sheets mounted over a ground plane. Similar to a dipole, with a gain of 6–9 dBi. Integrated into surfaces such as aircraft bodies. Their easy fabrication using PCB techniques has made them popular in modern wireless devices. Often combined into arrays. Biconical antenna A dipole with cone-shaped arms, with the feedpoint where their tips meet; they are sometimes called "fat dipoles" or "double bowling pins". They show broader bandwidth than ordinary dipoles, up to three octaves above their base frequency. The monopole version is called a discone antenna. Bow-tie antenna A "bow-tie" is a flattened version of a biconical antenna, with similar broad-band advantages. Also called butterfly antennas, they are dipoles with arms shaped like triangles or arrow-heads; the antenna feedpoint is where the tips of the triangles meet. The triangles can either be metal sheets with solid metal centers, or two wires with their far ends connected, outlining the shape of a bow-tie, or wires with unconnected ends in an "X" shape. Dipole and monopole design variability Designs of linear antennas can be modified by using segments made of a bundled "cage" of wires instead of just a single wire, in order to simulate a single very "fat" wire. Another adaptation is to bend otherwise straight segments near their ends, instead of only using completely straight wire, and exploiting the bent, folded, and zig-zagged dipole ends to fit in a tight space. Both of these adaptations to linear antenna designs are considered relevant but minor distinctions between antennas. Cage sections The elongated segments of a dipole antenna are most often made of either thin, strong wire or hollow metal tubing. However, they can also be made out of "cages" of wire: Several segments of fairly closely spaced, electrically parallel wires, used to simulate the electrical behavior of a much wider metal tube, most particularly lower "Ohmic" resistance and wider bandwidth, without as much trouble from wind loading. Although conical dipole (and monopole) segments are treated in this summary article as separate designs, wires bundled into "cage" segments are treated here as minor variations. Any antenna design nominally for a slim single-conductor wire can equally well be made using a "fat" cage with only slight length adjustments (slightly shorter segments), if any. End folding Almost all of the radiation from a dipole comes roughly from the half of its total length closest to its center, around the usual feedpoint where the two arms meet; approximately the last third of each of the dipole arms only radiates a minuscule amount of the outgoing signal, so for the purpose of emitting radio waves, the shape of each outer end is not important. 
This shape-indifference allows otherwise prohibitively long dipoles to have their far ends bent sideways, folded over, or zig-zagged, in order to shorten the antenna to fit inside an available space. This apparent mangling has very little effect on the antenna's radiation. The only serious constraint on end folding is electrical safety: The dangerous high-voltage antenna tips (remarkably high, even for modestly low power transmission) must be out of harm's way, including anywhere a dangling wire might reach if blown by a strong wind. For the most part, fold shapes are freely improvised by the person raising the antenna; various possible end folds are not listed in this article as a separate design, and should be considered a normal, electrically inconsequential, convenience modification for every type of linear antenna. Monopoles A monopole antenna is a half-dipole (see above); it consists of a single conductor such as a metal rod, usually mounted over electrically conductive ground, or an artificial conducting surface (called a ground plane, ground system, or a counterpoise). They are sometimes classed together with dipoles (see above) in the broader category of linear antennas, or more plainly straight wire antennas, since their radiating section is normally a straight (linear) wire or section of metal tubing. Rarely, both dipoles and monopoles are called electric antennas, since they interact with the electric field of a radio wave, to contrast them against all sizes of loops, which are correspondingly magnetic antennas. One side of the feedline from the receiver or transmitter is connected to the radiating arm of the antenna, and the other side to ground or the artificial ground plane. The radio waves from the monopole reflected off the ground plane appear as if they came from a fictitious image antenna seemingly below the ground plane, with the monopole and its phantom image effectively forming a dipole. Hence, the monopole antenna has a radiation pattern identical to the top half of the pattern of a similar dipole antenna, and a radiation resistance about half that of a dipole. Since all of the equivalent dipole's radiation is concentrated in a half-space, the antenna has twice the gain (+3 dB) of a similar dipole, neglecting power lost in the ground plane. Quarter-wave monopole The most common monopole is a vertical, a quarter-wavelength tall, which is the minimum size for it to self-resonate. A one-quarter wave monopole has a gain of 5.16 dBi when mounted over a good ground plane. A single monopole's radiation pattern is omnidirectional, so they are used for broad coverage of an area, and when mounted vertically, they have vertical polarization. Vertically polarized outgoing radiation is important for long-distance transmissions in the mediumwaves and lower: The ground waves that carry radio signals at frequencies below about 2 MHz must be vertically polarized to reduce signal absorption by the Earth. Large vertical monopole antennas are used for broadcasting in the lower half of the HF band, and all of the MF, LF, and VLF bands. Small monopoles ("whips") are used as compact but low-gain antennas on portable radios in the HF, VHF, and UHF bands. Whip Type of antenna used on mobile and portable radios in the VHF and UHF bands such as FM "boom boxes"; it consists of a flexible rod, often made of telescoping segments. 
In the HF band, "whip" typically refers to a flexible antenna or a terminal antenna segment that is too short to resonate naturally; when a whip is long enough to self-resonate (a quarter wavelength or more), it is instead usually just called by the generic name "monopole". "Rubber ducky" It's more formal technical name is normal-mode helix. Most common antenna used on portable two-way radios and cordless phones due to its compactness. Consists of an electrically short wire helix that resembles a narrow, inch to half-inch long coiled wire spring, such as one might find in a retractable ballpoint pen. The helical shape adds inductance to cancel the capacitive reactance of the short radiator, making it resonant. Like all electrically short antennas it is nearly isotropic – has very low gain, if any. Not to be confused with the similar shaped, but much larger axial mode helix (see below), nor to be confused with loop-type antennas. Ground plane A whip antenna with several rods extending horizontally from base of the whip in a star-shaped pattern, similar to an upside-down radiate crown, that form the artificial, elevated ground plane that gives the antenna its name. The ground plane rods attach to the ground wire of the feedline, the other wire feeds the whip. Since the whip is mounted above ground, the horizontal rods form an elevated ground plane just below the whip to reflect its radiation away from the earth and increase its gain. Used for elevated base station antennas for land mobile radio systems such as police, ambulance, and taxi dispatchers. Mast radiator A radio tower in which the tower structure itself serves as the antenna. Common form of transmitting antenna for AM radio stations and other MF and LF transmitters. At its base the tower is usually, but not necessarily, mounted on a ceramic insulator to isolate it from the ground. Folded monopole A folded monopole antenna is the monopole version of a folded dipole: It is an ordinary quarter-wave monopole with a second wire run parallel to the first, a few inches apart, with the top ends of the two wires connected. The second wire connects directly to the ground system instead of connecting to feedpoint as the first wire does. Adding the second wire raises the efficiency of the monopole by 4×, and correspondingly raises the feedpoint impedance, giving the added benefit of making impedance matching to standard coaxial cable somewhat easier. Similar to a folded dipole, one can add a third wire to get 9× the efficiency, and so on. Although the name is similar to the folded unipole, the two antennas are electrically different: The folded monopole is a much simpler antenna. Discone antenna The discone is a monopole version of a biconical antenna. The name of the antenna describes its shape: A metal disk above a metal cone. The cone points upwards and is made of solid metal, wire mesh, or a skirt of about a dozen sloping wires that outline a cone. The cone measures near one quarter-wave long along the side from tip to bottom rim, at the antenna's lowest frequency. There is a smaller, flat metal disk mounted horizontally, slightly above the tip of the cone; sometimes the solid disc is replaced by a radiate crown of metal rods, similar to the base of a ground plane antenna. One of the feed wires connects to the tip of the cone, the other wire to the center of the disk. 
A discone is exceptionally wideband, covering a frequency range of three or more octaves above the antenna's lowest frequency, but otherwise performs only as well as other quarter-wave monopoles: It is omnidirectional, vertically polarized, as efficient as any other monopole, and has gain similar to a ground plane antenna. Folded unipole A modified mast antenna, usually grounded at its base, augmented by one or several parallel wires called "skirt wires" that attach to the mast part-way up the antenna. The skirt wires can attach at any height from part-way up to the top of the mast. One or more of the skirt wires is fed with the signal, similar to a gamma match. The number and relative thickness of the mast and the skirt wires adjusts the feedpoint impedance. It is much more elaborate and not electrically the same as the similar-sounding folded monopole. Half sloper A half-sloper is a quarter-wave wire slanting down from a single elevated mounting point. It is fed at its top mounting point, with the low, far end attached by an insulated cord to a short pole or to a ground anchor. It is a monopole version of a sloper dipole (see above); like the sloper dipole it is popular because it requires only a single tall mast. Also like a sloper dipole it has a nearly omnidirectional pattern if used with a good ground system. Because its strongest currents (near the top-end feedpoint) are high up, it tends to have a stronger signal toward the horizon (better low angle gain) than a monopole fed near its base. It is somewhat like a monopole version of an inverted 'V' dipole. 'T' antenna Consists of a long horizontal wire crossing the gap between two towers, with a vertical wire attached to the center of the horizontal wire and hanging down from it; the dangling vertical wire is the radiating part of the antenna. The wires outline the shape of the letter 'T', hence the name. The dangling wire may run directly into the back of a radio, but more often the antenna is fed by a cable attached near where the dangling wire comes to ground. Additionally, the antenna requires a low resistance ground system, normally centered directly below the bottom of the 'T'. The typical height of a 'T' antenna is shorter than the quarter wavelength required for resonance. A 'T' antenna is distinguished from the similar 'L' antenna by the place where the dangling, radiating wire attaches to the horizontal cross wire: For the 'T' antenna the dangling wire attaches to the exact center of the top horizontal wire. Used on MF and the lower HF bands. Since at these frequencies the vertical wire is electrically short – much shorter than a quarter wavelength – the horizontal wire serves as a "capacitance hat" to increase the current in the vertical radiator, improving the efficiency and gain. Inverted 'L' Similar in construction to the 'T' antenna described above, but with the dangling vertical wire attached to one end of the horizontal wire instead of the center. The altered connection point gives the antenna the shape of the Greek letter 'Γ'. It can be thought of as an ordinary monopole that has been bent over somewhere in the middle, with its lower antenna segment vertical, as usual, but the "upper" part running horizontally – out sideways instead of up. Unlike the 'T' antenna, both the vertical and horizontal wires radiate, with their respective radiation being vertically and horizontally polarized, and their combined radiation diagonally polarized, usually at a steep angle. 
Although all parts of the antenna radiate, the strongest radiation comes from the vertical wire, so the horizontal wire serves both as a "capacitance hat" and as a weak radiator. Inverted 'F' Effectively a shunt-fed inverted-L, with the feed point attached to the horizontal wire, making the antenna shaped like the letter 'F' tilted to the right by 90°. The unusual feedpoint with its adjustable location along the horizontal section gives the inverted 'F' the good feedpoint matching of a unipole, and the compact size of an inverted-L. The antenna is grounded at the base and fed at some intermediate point, and the position of that feed point determines the antenna impedance, so the feedpoint impedance can be matched to the feedline without needing a separate transmatch. Umbrella An elaborated and expanded version of a 'T' antenna; it is a very large wire transmitting antenna used in the VLF bands for time signals or long-range submarine communications. Although the antenna is enormous on a human scale, relative to the even longer wavelengths it is used for it is paradoxically an ultra-short antenna. Being much smaller than a wavelength gives the antenna many troublesome properties: An extremely narrow bandwidth, low radiation resistance, and excessive capacitive feedpoint reactance. It consists of a central radiating tower with multiple wires attached at the top as a "capacitance hat" that extend out radially from the mast and are insulated at their ends; the overhead configuration resembles an open metal umbrella frame, hence the name "umbrella antenna". Like other ultra-short antennas it requires a large loading coil and a meticulously constructed low-resistance counterpoise system to cope with the extremely high reactance and minimal radiation resistance. Loop antennas Loop antennas consist of a loop (or coil) of wire. Loop antennas interact directly with the magnetic field of the radio wave, rather than its electric field as linear antennas do; for that reason they are on rare occasions categorized as magnetic antennas, but that generic name is confusingly similar to the term magnetic loop normally used to describe small loops. Their exclusive interaction with the magnetic field makes them relatively insensitive to sources of electrical spark noise near the antenna. There are essentially two broad categories of loop antennas: large loops (or full-wave loops) and small loops. The halo is the only loop antenna that does not fit exclusively in either the large loop or small loop category. Large loops Full-wave loops have the highest radiation resistance, and hence the highest efficiency of all antennas: Their radiation resistances are a few hundred Ohms, whereas dipoles and monopoles are tens of Ohms, and small loops and short whip antennas are a few Ohms, or even fractions of an Ohm. Large loops Large loops have a perimeter of one full wavelength, or larger. When they are one, two, or three wavelengths, or any whole-number multiple of a wavelength, they are naturally resonant and act somewhat similarly to the full-wave or multi-wave dipole. When it is necessary to distinguish them from small loops, they are called "full-wave" loops. Half-loop The upper half of a vertical full-wavelength loop antenna mounted on the ground (not to be confused with the visually similar but electrically different half-square antenna described below, under array antennas, nor to be confused with the halo antenna, described next). 
The full loop is cut at two opposite points along its perimeter, and the lower half is omitted; the upper half is mounted on the ground at the cut points, sticking up from the ground like a satchel handle. It is shaped like the Greek letter Π or an upside-down capital letter U, and is the loop antenna analog of a ground-mounted monopole antenna. Similar to how a vertical monopole uses its ground system to produce a "phantom" image of the rest of a dipole, the missing lower half of the half-loop is replaced by its image in the ground-plane. If shaped like half of a square, a half-loop can operate either as a loop antenna or on its first harmonic as a dipole antenna whose ends have been bent over and grounded. Halo antennas Halos are loop antennas that sit uniquely in between large and small loops; they are one half-wavelength in perimeter, with a small gap cut into the loop rim. For practical purposes, "halos" are naturally resonant on one frequency. They are intermediate in size and function between small and large loops, and are often described as a half-wavelength dipole that has been bent from a straight line into a circle, but with its far ends left unconnected. The approximately-omnidirectional pattern of halos resembles small loops; their radiation efficiency lies between the extreme high efficiency of large loops and the generally poor efficiency of small loops. Halos are self-resonant like full-wave loops, but have no practical higher harmonics. In some regards they represent the extreme upper size limit of small transmitting loops. Small loops Small loop antennas have very low radiation resistance – typically much smaller than the loss resistance of the wire they are made of, making them inefficient for transmitting. Their directionality and low radiation efficiency are drastically different from those of full-wave loops. In the expected case that the loop perimeter is smaller than a half-wavelength, if the loop needs to be resonant it must be electrically modified in some way to resonate it artificially – usually by attaching a shunt capacitor across the feedpoint. Despite their drawbacks, small loops are widely used as receiving antennas, especially at frequencies below 10~20 MHz, where their inefficiency is not an issue and their small size makes them a useful solution to the excessive sizes even of quarter-wave antennas. The fact that they can be efficiently tuned to accept only a very narrow frequency range (similar to a preselector) helps alleviate much of the trouble caused by the pervasive static always encountered on the mediumwaves and lower shortwaves where small loops are most popular. Small loops are called "magnetic loops"; they are also called "tuned loops" since small loops typically must be modified by adding capacitance to make them resonate on some frequency lower than any that they would "naturally" resonate on. Small receiving loops Small receiving loops are sized at perimeters of roughly a tenth of a wavelength or less, sometimes with many turns of wire around the same supporting frame. Small loops are widely used as compact direction finding antennas, since their "null" direction is exceptionally precise, and their small size makes them much more compact as hand-carried equipment than dipole-based directional antennas. Ferrite loop antennas Also called "loopsticks", they consist of a wire coiled around a cylindrical ferrite core (the "stick"). The ferrite increases the coil's inductance by hundreds to thousands of times, and likewise magnifies its effective signal-capturing area. 
The improvement makes them even more compact than (ordinary) small loops made without ferrite, and yet they receive RF just as well (or better). Loopsticks' radiation pattern is identical to that of a dipole antenna, with a maximum in all directions perpendicular to the ferrite rod. These are used as the receiving antenna in most portable and desktop consumer AM radios made for the medium wave broadcast band, and for lower frequencies. Small transmitting loops Small transmitting loops are loop antennas whose perimeters are smaller than a half-wave that have been specifically optimized for transmitting. Their much smaller size than dipole antennas (only ~10% as wide) sometimes makes them a viable choice when space is limited, despite their lower efficiency. Small transmitting loops are made larger in size than most small receiving loops, with perimeters near a quarter-wavelength, in order to improve on their generally poor efficiency. For that same reason, their parts are carefully joined by brazing or welding to reduce losses from contact resistance. Because of their larger size, small transmitting loops lack the sharp nulls of small receiving loops, so they are not as useful for direction finding, and they are also more bulky (roughly double the size), so they would not be as convenient as the more accurate small loops for hand-held use in radio searches. Highly accurate small loop null directions The nulls in the radiation pattern of small receiving loops and ferrite core antennas are bi-directional, and are much sharper than the directions of maximum power of either loop or linear antennas, and even of most beam antennas; the null directionality of small loops is comparable to the maximal directionality of large dish antennas (aperture antennas, see below). For accurately locating a signal source, this makes the small receiving loop's null direction much more precise than the direction of the strongest signal, and the small loop / ferrite core type antennas are widely used for radio direction finding (RDF). The null direction of small loops can also be exploited to exclude unwanted signals from an interfering station or noise source. Several construction techniques are used to ensure that small receiving loops' null directions are "sharp", including keeping the perimeter small (around a tenth of a wavelength, or at most a quarter-wave). Small transmitting loops' perimeters are instead made as large as possible, up to a quarter-wave or even a half-wave, if feasible, in order to improve their generally poor efficiency; however, doing so blurs or erases small transmitting loops' directional nulls, unlike the precise nulls of smaller receiving loops. Composite antennas Composite antennas are made up of combinations of several simple antennas configured to function as a single antenna, similar to how a compound optical lens combines multiple simple lenses. Likewise, for antennas that combine one or more simple antennas with a curved metal surface or flat reflecting screen, the metal dish or curtain functions for radio waves similar to a mirror in optical systems, hence those antennas are analogous to reflecting telescopes and Klieg lights. Broadbanded composite antenna Composite antennas are most often designed to bolster directivity beyond that of simple antennas. But a different performance-enhancement is to broaden an antenna's usable frequency range. 
Just by wiring together several simple antennas at a shared feedpoint, the resulting combined antenna can be made broadbanded or widebanded or multibanded – that is, made to operate well either on several distinct frequencies, or over a single wider frequency span. Fan dipole Also called a multi-dipole – a common broadband and / or wideband dipole variant that superficially resembles the bow-tie antenna, but is electrically different. It is a composite of pairs of dipole arms; both arms of one of the dipoles are equal-length, but each dipole pair is a different length from every other pair. The several dipole arms extend away from the common central connection point of the combined antenna. Log-periodic dipole array Log-periodic dipole arrays are made broadband by having multiple dipoles share a common feedpoint, and so warrant mention in this section. However, they are also made directional: behind whichever single dipole is resonant, the immediately rearward, slightly too-long dipole reflects the wave forward, while the several too-short elements in front of the resonant element focus or "direct" it; log-periodic arrays are therefore described in the subsection on endfire arrays, below. Fan monopole A fan monopole, or multi-monopole, is half of a fan dipole: It combines several different-sized monopole antennas, all sharing the same feedpoint, with each sized to transmit well on a different band or sub-band. Like all monopoles, it requires a ground system to function. Its design for wideband or broadband behavior is essentially identical to that of the fan dipole. Array antenna Array antennas are composites of multiple simple antennas, either linear, or loops, or combinations of each. The multiple parallel-aligned simple antennas work together as a single compound antenna. The constituent simple antennas can be dipoles, monopoles, or loops, or mixed loops and dipoles. There are several types, including broadside arrays, endfire arrays, and parasitic arrays. Broadside arrays Broadside arrays consist of multiple, parallel, identical driven elements, usually dipoles, fed in phase, radiating a beam perpendicular to the plane containing the simple antennas, analogous to a firing line of musketeers, all simultaneously shooting in the same direction, perpendicularly out from the line of shooters. Vertical collinear A broadside array that consists of several dipoles fed in phase, with their axēs stacked atop each other, in a single vertical line. It is a high-gain omnidirectional antenna, meaning more of the power is radiated in horizontal directions and less wasted radiating up into the sky or down onto the ground. Gain of 8–10 dBi. Used as base station antennas for land mobile radio systems such as police, fire, ambulance, and taxi dispatchers, and sector antennas for cellular base stations. Curtain array A curtain array is any one of several designs for large, directional, long-distance, broadside transmitting array antennas used at HF by shortwave broadcasting stations. It consists of a vertical rectangular array of identical dipoles suspended in a parallel row in front of a flat reflector screen (the "curtain"). The screen or curtain consists of a second row of vertical parallel wires, all supported between two metal towers. 
It is aligned to efficiently radiate a horizontal beam of vertically polarized radio waves into the sky just barely above the horizon; once the signal reaches the ionosphere past the horizon the beam is refracted (or "bounced") off the F layer back towards Earth, to reach equally far beyond and over the horizon, perhaps to be reflected off the ground for another "hop". There are several designs for curtain arrays, among them Sterba curtains, bobtail curtains, and HRS antennas; the half-square antenna (below) is a minimal curtain array, with only two radiating elements and no reflecting screen. Reflective array Multiple dipoles in a two-dimensional broadside array mounted in front of a flat reflecting screen, usually called a "curtain". Used for radar and UHF television transmitting and receiving antennas. Half-square A broadside array made of two "upside down" vertical monopoles. Their dangling tips / bases correspond electrically to the tops of ordinary monopoles, and are not connected to the ground. The top ends that the monopoles hang from correspond electrically to the bases of normal monopoles, and are the monopoles' nominal feedpoints (the actual feedpoint for the combined system is often placed elsewhere). The attachment points at the tops are interconnected by a wire one half-wavelength long, which serves as both a counterpoise wire and a crossover phasing feedline. The verticals are the radiators and function as a minimal two-element curtain array, similar to a bobtail curtain. The structure is shaped like the Greek letter Π (not to be confused with the similar-looking half-loop antenna described above). Unlike a half-loop, neither monopole element has any DC connection to the ground beneath it (although there typically is considerable RF capacitive coupling, which can be exploited to shorten the verticals). The top-to-top half-wave connecting wire serves as a phasing line that keeps radiation from the two antennas in phase, even if the system feedpoint is connected elsewhere. Since a quarter-wave monopole's current is highest nearest its feedpoint, the nominal top-feed puts the maximum radiating current up high, at the top of each monopole. Because they are top-fed, inverted monopoles produce a strong signal closer to the horizon than an ordinary bottom-fed monopole, whose maximum radiation must angle up from the base to pass above surrounding obstructions. Batwing Also called a superturnstile, it is a specialized broadside array antenna used for VHF television broadcasting. It is a hybrid flattened biconical and turnstile antenna that consists of perpendicular pairs of dipoles with radiators resembling bat wings. The batwing shape is a flattened biconic ("butterfly antenna") that gives the wide bandwidth which whole-channel TV transmission needs. Stacking the batwings vertically on a mast concentrates more of the combined antennas' radiation in the horizontal direction, and with matched pairs at right angles, each pair fills in the nulls of its counterpart, making their combined radiation pattern more nearly omnidirectional. Microstrip A small-sized microwave antenna printed on a circuit board (PCB). Because of the short wavelengths it handles, the small antenna can still be shaped to achieve large gains in compact space, as an array of patch antennas on a substrate fed by microstrip feedlines. Often the antennas printed on a PCB are composites of multiple different small antennas, each shaped to have complementary performance advantages supplementing the others'. 
Further, components' beamwidth and polarization can be made actively reconfigurable by switching and phasing circuitry printed on the same board. Ease of fabrication by modern PCB manufacturing techniques has made them popular in modern wireless devices. Both broadside and endfire This subsection could also be named "phased arrays", a type of composite directional antenna where the various component simple antennas are laid out a large fraction of a wavelength apart, with each antenna's feedline's phase individually shifted, so that the signal from a radio wave moving in some selected direction across the layout of the several simple antennas arrives simultaneously at the receiver and hence constructively reinforces; conversely, waves arriving from other directions will interfere destructively to either suppress or eliminate signals from the unwanted directions. The same phasing technique works in reverse, with signals transmitted from the several antennas combining to form a wave front departing mostly in one direction. Phase change can electrically steer the radiation receive and transmit direction without physically moving the antennas. Within limits, how narrowly a particular direction may be selected improves with a greater number of antennas, and / or with antennas spaced more widely apart. Phased array A high-gain antenna used at UHF and microwave frequencies that is electronically steerable by phase adjustments, from being an endfire array to a broadside array, and every direction in between. It consists of multiple dipoles in a two-dimensional array, each fed through an electronic phase shifter, with the phase shifters controlled by a computer control system. The beam can be instantly pointed in any direction over a wide angle in front of the antenna. Used for military radar and jamming systems. Adcock antenna An Adcock antenna is a pair of side-by-side endfire arrays, hence it is also broadside. It is made of four parallel dipole (or monopole) antennas, all equal size and equidistant, vertically aligned at the four corners of a square. All four dipoles are driven, but with opposing phases for adjacent dipole elements, and identical phases for elements at opposite corners. The combination of spacing and phasing of the dipole elements makes the array moderately directional. Unlike phased arrays, Adcock antennas are typically physically rotated towards a given direction, rather than being steered by changing phase on the feedlines. Endfire arrays Endfire arrays have their driven elements fed out-of-phase, with the phase difference corresponding to the distance between them; they radiate within the plane that the constituent parallel antennas all lie in. Continuing with the musketeer analogy, an endfire array works similarly to a column of shooters, one behind the other; three, for example: One lying on the ground, the next kneeling behind the first, and the last standing at their backs, all aiming in the same direction they are lined up in, those in back firing over the heads of the musketeers in front of them. Each musketeer in the analogy fires just as the bullets from the rearward musketeers' shots pass overhead, so that the combined volley is bunched into a single aligned group, and all the bullets arrive at the target simultaneously. Log-periodic dipole array An endfire array of multiple dipole elements along a boom with gradually decreasing lengths, back to front, all connected to the transmission line with alternating polarity. 
Log-periodic dipole array An endfire array of multiple dipole elements along a boom with gradually decreasing lengths, back to front, all connected to the transmission line with alternating polarity. It is a directional antenna with a wide bandwidth, which makes it ideal for use as a rooftop television antenna, although its gain is much less than a Yagi of comparable size. Sometimes called a "fishbone" antenna because it looks like the ribs of a fish. For long wavelengths in the lower HF band the array may be made of ground-mounted monopoles instead of dipoles. Parasitic arrays Parasitic arrays are a specific type of endfire array that consist of multiple antennas, usually dipoles, with one driven element and the rest parasitic elements, which re-radiate the beam they intercept along the line of the antenna rods. It is parasitic arrays that are the closest RF analogs of compound optical lenses made from combinations of simple lenses. They can also be compared to a column of a team of especially skillful badminton players, with the server standing at or near the back, and each team-mate in front taking a swing at the shuttlecock as it passes by, to further it along with better aim. Yagi–Uda Also called a "Yagi", is a parasitic array that is one of the most common directional antennas at UHF, VHF, and upper HF frequencies. Consists of multiple half-wave dipole elements aligned with their axes parallel, in the same plane, with a single resonant-length driven element wired to the feedline, usually the next-to-last, next-to-longest element in the array. The multiple other elements are parasitic, which reflect and direct the radiated signal into a narrower beam, hence the name beam antenna. The simple antennas used to make a Yagi–Uda can either all be linear or bent linear antennas, or all loops (a quad antenna), or (rarely) a mixed combination of loops and straight-wire antennas. Yagi–Udas are used for rooftop television antennas, point-to-point communication links, and long distance shortwave communication using skywave ("skip") reflection from the ionosphere. They typically have gains between 10 and 20 dBi depending on the number of director elements used, but their bandwidths are very narrow. Moxon antenna Also called a Moxon rectangle; it is a rectangular-shaped, folded version of a two-element Yagi–Uda, hence a minimal parasitic array. Quad Although "quad" can refer to a single quadrilateral-shaped loop, the term usually refers to two or more loops stacked side by side as a parasitic array; at first glance, quads resemble a box kite frame. Only one of the loops in the quad is connected to the feedline, and that loop functions as the driver for the antenna and is the original source for the radiated signal. The other loops are parasitic elements that act as reflectors or directors, focusing the radiated waves in a narrower, single direction and thereby increasing the gain. Quad antennas are Yagi–Uda antennas made from loops instead of dipoles or monopoles, and are likewise used as directional antennas on the HF bands for shortwave communication. They are sometimes preferred for longer wavelengths because (if square) they are half as wide as a Yagi built from dipoles and have slightly better directivity. Aperture antenna An aperture antenna consists of a small dipole or loop feed antenna embedded inside a larger, three-dimensional surrounding structure that guides the radio waves from the feed antenna in a particular direction, and vice versa. The guiding structure is often dish-shaped or funnel-shaped, and quite large compared to a wavelength, with an opening, or aperture, to emit the radio waves in only one direction.
Since the outer antenna structure is itself not resonant, it can be used for a wide range of frequencies, by replacing or retuning the inner feed antenna, which often is resonant. Corner reflector A directive antenna with moderate gain of about 8 dBi, often used at UHF frequencies. Consists of a dipole mounted in front of two reflective metal screens joined at an angle, usually 90°. Used as a rooftop UHF television antenna and for point-to-point data links. Parabolic The most widely used high gain antenna at microwave frequencies and above. Consists of a dish-shaped metal parabolic reflector with a feed antenna at the focus. It can have some of the highest gains of any antenna type, up to 60 dBi, but the dish must be large compared to a wavelength. Used for radar antennas, point-to-point data links, satellite communication, and radio telescopes. Horn A horn antenna has a flaring metal horn attached to a waveguide. It is a simple antenna with moderate gain of 15 to 25 dBi, used for applications such as radar guns, radiometers, and as feed antennas for parabolic dishes. Slot Consists of a waveguide with one or more slots cut in it to emit the microwaves. Linear slot antennas emit narrow fan-shaped beams. Used as UHF broadcast antennas and marine radar antennas. Lens A lens antenna is made from a layer of dielectric, or a metal screen, or a multiple-waveguide structure of varying thickness, mounted in front of a feed antenna. The waveguide / screen / dielectric refracts the radio waves, focusing them on the feed antenna, similar to a focusing lens placed in front of a flashlight. Dielectric resonator The "resonator" part consists of a small ball- or puck-shaped piece of dielectric material, placed at the opening of a waveguide, where the material is excited by waves fed into the other end of the guide. If well-designed, the resonating material efficiently re-radiates the absorbed waves. Used at millimeter wave frequencies. Traveling wave antenna Unlike the antennas discussed so far, traveling-wave antennas are not resonant so they have inherently broad bandwidth. They are typically wire antennas that are multiple wavelengths long, through which the voltage and current waves travel in a single pass, in one direction, as opposed to resonant antennas in which waves instead bounce back-and-forth, and form standing waves. In order to make traveling-wave antennas receive in a single direction, they are normally terminated by a resistor at one end, with the resistor's resistance matched to the antenna wire's characteristic impedance. Matching the impedance of the termination to the antenna wire maximizes the resistor's absorption of the waves traveling towards it along the antenna wire, hence almost no signals from unwanted directions are reflected backwards toward the feedpoint. Since the resistor absorbs the intercepted waves traveling towards its end of the antenna, the antenna feedpoint opposite the terminating resistor only receives waves traveling in a direction away from the resistor and toward the feedpoint. When used for receiving, the resistive termination removes more than half of the noise coming in from all directions, while preserving all signal power from the desired direction. The longer a traveling wave antenna is (in wavelengths) the narrower its receive direction becomes, approaching or exceeding the performance of compound beam antennas; the sketch below illustrates this narrowing. The great lengths typical of traveling wave antennas make them unsteerable, hence a fixed antenna must be erected for every desired direction.
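To make that length-versus-directivity relationship concrete, here is a minimal sketch that evaluates the textbook far-field pattern of an idealized traveling-wave current on a straight wire. The formula, the Python framing, and the chosen lengths are standard idealizations assumed here, not measurements of any particular Beverage or rhombic installation; ground effects and wire losses are ignored:

```python
# Minimal sketch: far-field pattern of an idealized traveling-wave wire.
# |E(theta)| is proportional to sin(theta) * |sin(psi)/psi|, with
# psi = pi * L * (1 - cos(theta)), L the wire length in wavelengths and
# theta measured from the wire axis. Free-space idealization only.
import numpy as np

def pattern(theta, length_wl):
    psi = np.pi * length_wl * (1 - np.cos(theta))
    # np.sinc(x) = sin(pi*x)/(pi*x), so pass psi/pi to obtain sin(psi)/psi
    return np.abs(np.sin(theta) * np.sinc(psi / np.pi))

theta = np.radians(np.linspace(0.01, 90, 4000))
for length_wl in (1, 2, 5, 10):
    p = pattern(theta, length_wl)
    lobe = np.degrees(theta[np.argmax(p)])
    print(f"{length_wl:2d} wavelength wire -> main lobe {lobe:5.1f} deg off the axis")
```

The printed lobe angle shrinks as the wire lengthens, matching the statement above that a longer traveling-wave antenna listens in a narrower cone aimed closer to its own axis.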
If used for transmitting, the resistor makes traveling-wave antennas inefficient, since the resistor absorbs any radio wave after the wave has made a single pass through the antenna wire, as opposed to a resonant antenna in which radio waves cycle back-and-forth several times, giving the signal multiple opportunities to radiate. However, because they are made non-resonant by the terminating resistor, traveling-wave antennas can easily be fed power regardless of frequency – unlike resonant antennas without transmatches, which are limited to frequencies very near their resonances. Because they have no practical restrictions on frequency, traveling-wave antennas may still be favored for transmitting if it is legally and electrically possible to raise the transmit power enough to compensate for the considerable amount of power wasted as heat in the terminating end resistor. Beverage Simplest unidirectional traveling-wave antenna. Consists of a straight wire one to several wavelengths long, suspended near the ground, connected to the receiver at one end and terminated at the other end by a resistor equal to its characteristic impedance (typically 400~800 Ω). Its radiation pattern has a main lobe at a shallow angle in the sky off the terminated end. It is used for reception of skywaves reflected off the ionosphere in long distance "skip" shortwave communication. Rhombic Consists of four equal wire sections shaped like a rhombus. It is fed by a balanced feedline at one of the acute corners, and the two sides are connected to a resistor equal to the characteristic impedance of the antenna at the other acute corner. It has a main lobe in a horizontal direction off the terminated end of the rhombus. Used for skywave communication on shortwave bands in circumstances where it is practical to increase transmit power enough to compensate for power dissipated in the terminating resistor. Leaky wave Leaky wave antennas are used for microwave frequencies where microwave signals are normally passed through waveguides rather than solid wires. They are made by cutting slots, or "apertures", in a waveguide or coaxial cable, which allow the signal to radiate out along the length of the slot (hence "leaking" waves). Axial mode helix Consists of a wire in the shape of a helix mounted above or in front of a reflecting screen, whose total coiled length is on the order of at least one wavelength. Out of the open end of the helix, the antenna radiates a beam of circularly polarized waves, with a typical gain of 15 dBi. It is used at VHF and UHF frequencies where antenna sizes are feasible. Often used for satellite communication, which uses circular polarization because it is insensitive to relative rotation about the beam axis. Not to be confused with a "rubber ducky" antenna (normal mode helix), which is much smaller. Other antenna types The following are some antenna types that don't comfortably fit into any of the simplified types listed above. Note that although it might well seem like a joke to describe antennas that are laid down on the ground (or even buried in it!) instead of put up high in the air, they actually do work, although with limits. Resistively loaded antennas One method for making a broadband antenna is to place a resistive element somewhere in the antenna.
The extra resistance is used to dampen resonance and consequently reduce the bothersome reactance found on most antennas at frequencies far from a resonance, and frequencies very near an antiresonance; the resistor's resonance dampening allows adequate operation on any frequency, but at the cost of wasting some transmit power in the resistor. Some examples are the terminated coaxial cage monopole (TC²M), the tilted terminated folded dipole (T²FD), and the similar Robinson-Barnes antenna (essentially a T²FD with a second radiating wire parallel to the first). Earth antennas, buried antennas, and ground antennas Earth antennas are made of wires actually buried under the soil, hence also called buried antennas; if laid onto the soil instead of buried in it, they are called ground antennas. Most amateur use is limited to non-directional MF and LF receiving antennas, but transmitting ground dipoles are used for military communication with submarines. In order to work, the wire must be near enough to the soil surface for the radio waves to penetrate and reach it; mediumwaves and longwaves are much better at penetrating soil, and those are the frequencies where buried antennas are most used, although still rare. Random wire antenna The random-wire antenna has been described as an "odd bit of wire". It is the typical informal antenna erected for receiving shortwave and AM radio. It consists of a random length of wire either strung outdoors between elevated supports, or indoors across a ceiling, running in an erratic zigzag pattern along walls or between supports. The near end of the antenna wire is usually directly connected to the back of the radio. Snake antenna Random wire antennas laid out on the surface of the ground are called "snake antennas", which are not clearly distinguished as any particular type. B.O.G. antenna A "Beverage on the ground" – often called a "B.O.G." or bog antenna – is a "snake antenna" laid in a straight line that has its end opposite the feedpoint grounded. It is a traveling wave antenna, and might technically be considered an extreme instance of a low-hanging Beverage antenna. Isotropic An isotropic antenna (isotropic radiator) is not a real antenna: It is a hypothetical, completely directionless antenna that radiates equal signal power in all vertical, horizontal, and transverse directions. An old-fashioned incandescent light bulb is often used as an example of a nearly isotropic radiator (of heat and light). Paradoxically, every antenna of any type whose longest dimension is only a small fraction of a wavelength is approximately isotropic, but no real antenna can ever be exactly isotropic. An antenna that is exactly isotropic is only a mathematical model, used as the base of comparison to calculate either the directivity or gain of real antennas. No real antenna can produce a perfectly isotropic radiation pattern, but the isotropic radiation pattern serves as a "worst possible case" reference for comparing the degree to which other antennas, regardless of type, can project some extra radiation in a preferred direction; the sketch below converts a few of the dBi gain figures quoted in this article into linear power ratios against this isotropic reference. All simple antennas approach closer and closer to being isotropic, as the waves they are transmitting or receiving increase in length beyond several times the antennas' longest side. Nearly isotropic antennas can be made by combining several small antennas.
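Since every gain figure in this article is quoted in dBi, that is, decibels relative to this hypothetical isotropic radiator, a short conversion makes the numbers concrete. The antenna names and dBi values below are simply ones quoted earlier in this article; the script itself is an illustrative sketch:

```python
# Convert dBi gain figures quoted in this article into linear power
# ratios relative to the hypothetical isotropic radiator.
gains_dbi = {
    "corner reflector": 8,
    "axial-mode helix": 15,
    "large Yagi-Uda": 20,
    "parabolic dish (upper end)": 60,
}
for name, dbi in gains_dbi.items():
    ratio = 10 ** (dbi / 10)          # definition of the decibel
    print(f"{name:28s} {dbi:3d} dBi = {ratio:>13,.0f} x isotropic")
```

A 60 dBi dish thus concentrates about a million times the power density of an isotropic radiator in its favored direction, which is why the isotropic pattern makes a useful worst-case baseline.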
Nearly isotropic antennas are used for field strength measurements and as standard reference antennas for testing other antennas, since their alignment is a non-issue: Their measured signal strength is nearly the same for any orientation. They are used as emergency antennas on satellites, since they work even if the satellite is tilted out of alignment with its communication station. Omnidirectional is not isotropic Isotropic antennas, which don't actually exist, should not be confused with omnidirectional antennas, which are real and fairly common. An isotropic antenna radiates equal power in all three dimensions, while an omnidirectional antenna radiates equal power in all horizontal directions, but little or none vertically. An omnidirectional antenna's radiated power varies with elevation angle: Maximum in the horizontal, and diminishing as the elevation angle rises to align with the antenna's vertical axis. Several types of antennas do not radiate at all in the exactly vertical direction, even as the wavelength increases; compare that preservation of the null response in the vertical direction to the idealized isotropic antenna, which would radiate equally in every direction. Notes References Antennas Radio frequency antenna types
Antenna types
[ "Engineering" ]
13,030
[ "Antennas", "Telecommunications engineering" ]
59,703,859
https://en.wikipedia.org/wiki/A-center
An A-center is a type of crystallographic defect complex in silicon which consists of a vacancy defect and an impurity oxygen atom. Ordinarily, oxygen in silicon is interstitial: the oxygen atom breaks the covalent bond between two adjacent silicon atoms and attaches in the middle. In an A-center, by contrast, the oxygen atom takes the place of the absent silicon atom; that is, it becomes a kind of substitutional defect. The A-center is visible in infrared spectra at a wavelength of 12 μm. References Crystallographic defects
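For scale, the 12 μm band quoted above corresponds to a photon energy of roughly 0.1 eV; the sketch below is just the standard E = hc/λ conversion, not a value taken from the article:

```python
# Convert the A-center's 12 um infrared band into photon energy.
h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt
wavelength = 12e-6     # metres
print(f"12 um photon energy ~ {h * c / wavelength / eV:.3f} eV")  # ~0.103 eV
```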
A-center
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
117
[ "Materials science stubs", "Crystallographic defects", "Materials science", "Crystallography stubs", "Materials degradation", "Crystallography", "Condensed matter physics", "Condensed matter stubs" ]
59,704,320
https://en.wikipedia.org/wiki/Contextual%20architecture
Contextual architecture, also known as contextualism, is a philosophical approach in architectural theory that refers to the designing of a structure in response to the literal and abstract characteristics of the environment in which it is built. Contextual architecture contrasts with modernist architecture, which imposes its own characteristics and values upon the built environment. Contextual architecture is usually divided into three categories: vernacular architecture, regional architecture, and critical regionalism, all of which also inform the complementary architecture movement. Etymology The term contextualism is derived from the Latin contexere, meaning to weave together or to join. The term was first applied to the arts and architecture by the aesthetician and philosopher Stephen C. Pepper in the 1960s, who originally coined the word as applied to philosophy. History The essential ideas of contextualism in architecture long preceded the term's coinage. The Roman notion of genius loci, Renaissance decorum, and Beaux Arts tirer parti mirror modern definitions of contextualism. The 1920s development of Gestalt psychology, which investigated the ways in which independent parts could be combined to make a cohesive result, provided the intellectual foundation for the philosophy. Contextualism as applied to architecture was first championed in the 1960s by architect Colin Rowe as a reaction to modernist architecture, which valued universality and the projection of utopian ideals onto sites. Pushing back against the perceived failure of modernist buildings to adapt cohesively to their environments – in particular to cities' historic buildings – Rowe advocated for architecture that was designed with a focus on existing in continuity with the surrounding features of the built and natural environments. Rowe notably advocated for the use of figure-ground diagrams as a method of understanding the existing features of a site's surrounding environment. Contextualist philosophy experienced a revival later in the 20th century with the advent of the New Urbanism movement, which stressed "context-appropriate architecture" in urban design, particularly in the context of environmentalism. Criticism Contextualism, particularly in the decades following the 1980s, has faced criticism for its association with postmodernism and perceived conformism. Architectural pragmatist Rem Koolhaas' assertion "fuck context" served as an infamous rallying cry against contextualism. In 1988, while curating an exhibition on Deconstructivism at MoMA, architects Philip Johnson and Mark Wigley denounced the philosophy, stating "contextualism has been used as an excuse for mediocrity, for a dumb servility to the familiar." Notable examples Olympic Archery Range, Barcelona, Carme Pinós and Enric Miralles (1992) Water (Honpuku) Temple, Awaji, Tadao Ando (1991) City Gate (Valletta), Malta, Renzo Piano (2015) Kingo Houses, Helsingør, Jørn Utzon (1958) Phantom Ranch, Grand Canyon, Mary Colter (1922) Fallingwater, Frank Lloyd Wright (1935) References Architectural theory
Contextual architecture
[ "Engineering" ]
613
[ "Architectural theory", "Architecture" ]
59,705,796
https://en.wikipedia.org/wiki/Trichoderma%20koningii
Trichoderma koningii is a very common soil-dwelling saprotroph with a worldwide distribution. It has been heavily exploited for agricultural use as an effective biopesticide, having been frequently cited as an alternative biological control agent in the regulation of fungi-induced plant diseases. They are endosymbionts associated with plant root tissues, exhibiting mycoparasitism and promoting plant growth due to their capacity to produce different secondary metabolites. Trichoderma koningii is a species belonging to the genus Trichoderma. Fungi in this genus are able to adapt to different ecological niches and can colonize their habitats effectively, allowing them to be powerful antagonists and biocontrol agents. Typical of Trichoderma species are a fast growth rate and the production of green or hyaline conidia on a branched conidiophore structure. History and taxonomy Trichoderma koningii was first described by the Dutch mycologist Oudemans in 1902 as one of the species in the microbial flora he obtained from a nature preserve in the Netherlands. After the genus was erected in 1794, there was difficulty in distinguishing and identifying the different species due to their very similar morphological characteristics. It was not until 1969 that a concept of classification was proposed by Rifai to reduce confusion over the taxonomy of Trichoderma. He recognized T. koningii as one of the nine "aggregates" or groups of species in the genus. This aggregate consists of 12 species within three lineages that have similar morphology to the "true" T. koningii but can be differentiated from each other by their phenotypic characters and geographic distributions. In 1991, Bissett divided the genus into five sections to classify the species on the basis of the branching of conidiophores. He included T. koningii in Trichoderma sect. Trichoderma. In 2004, Chaverri and Samuels proposed another taxonomic classification based on molecular phylogenetic analysis. T. koningii and its aggregates were included in the T. viride clade. Growth and morphology The conidiophores of T. koningii are branched and organized in a pyramidal structure, with longer branches at the base that progressively shorten toward the tip. Primary and secondary branches arise at right angles and are often symmetrical on either side of the node along the main axis. Phialides are usually in whorls of 3–4 that arise from the tip of the main branch and from lateral branches at intercalary positions on the conidiophore. Some phialides on widely-spaced branches are flask-shaped, resembling a wine bottle, whereas some tend to have a very swollen middle when in dense clusters or "pseudo-whorls". T. koningii typically produces smooth and ellipsoidal (egg-shaped) conidia, with a mean length of 4.1–4.3 μm, that aggregate in a slimy green mass at the tip of the phialides. The chlamydospores are pale brown, globe-like in shape, and are located at terminal and intercalary positions on the hyphae. In culture, colonies display rapid growth on potato dextrose agar (PDA), appearing cream-coloured at first but later turning green because of sporulation. T. koningii grows at an optimum temperature of 25 °C in darkness, producing white mycelium with a radius of 50–60 mm. During conidial production, colouration first begins at the centre then later spreads outward in dark or dull green concentric rings that range from vague to noticeable. The maximum temperature for growth is 33 °C, which limits the species' pathogenic potential in humans.
Like most Trichoderma species, this fungus has a sexual state. The teleomorph, Hypocrea koningii, is characterized by cushion-shaped stromata (sing. stroma) that are broadly attached to the surface of a substrate but are free at the margins. The surface of the stroma appears slightly wrinkled. Mature stromata are brown to brownish-orange, whereas the young ones have a tan color with villi sprouting from the surface. These short hairs are lost during development. Perithecia (fruiting bodies) are elliptic, 160–280 μm long and 100–185 μm wide. The perithecial neck has a length of 53–90 μm. Asci (sing. ascus) within the fruiting bodies are typically cylindrical, with dimensions of 60–70 × 4–5.7 μm, thickened at the apex. The ascospores of H. koningii are hyaline and fill up the ascus in a single row. They are initially bicellular but later become separated into part-ascospores. The proximal part of the ascospore is ellipsoidal while the distal part is globe-like and longer. Physiology Trichoderma koningii is employed as a biological control agent because of its mycoparasitic and antagonistic ability. This fungus is capable of biosynthesizing silver nanoparticles, volatile organic compounds and secondary metabolites such as trichokonins, koninginins, and pyrones. Silver nanoparticles (AgNPs) are produced via the reduction and capping of Ag+ to Ag0 by the enzymes and proteins released by T. koningii. Koninginins are substances capable of inhibiting the process of inflammation. Koninginins isolated from T. koningii are identified to be A, E, F, L and M (KonA, KonE, KonF, KonL, KonM). Trichokonins are peptaibols that exhibit antimicrobial properties. Other polyketides reportedly isolated from T. koningii are trichodermaketones A–D, 7-O-methylkoninginin D, and 6-pentyl-alpha-pyrone, which can inhibit the germination of other fungal spores. T. koningii is also reported to produce calcium oxalate crystals, particularly weddellite, via biomineralization. The process occurs intracellularly and extracellularly with respect to the fungus. The intracellular process involves the vegetative growth of the mycelium. The extracellular activity occurs through the reaction between the calcium in the environment and oxalic acid secreted by the fungus, leading to the production of biomineral species. Habitat and ecology Typical of Trichoderma, T. koningii is a good colonizer of its habitat. Saprophytic growth occurs in acidified soils and soils with high water content (e.g. chernozem, podzol). It is often isolated from under pine and coniferous trees, vegetation, plantations, grasslands, marshes, swamps, and peats. T. koningii also thrives in other environments, including growing on decaying wood, in marine species, estuarine sediments, and in mines and caves. The fruiting bodies commonly grow on tree bark, and stromata tend to be scattered, more often solitary than in clusters. It is distributed widely in Europe, the United States, and Canada. Recent surveys have reported that some strains of T. koningii are also present in New Zealand and South Africa. Applications Agriculture Trichoderma koningii are plant symbionts that induce resistance against fungal pathogen attack and stimulate growth. It acts as a parasite to other fungi, particularly those that cause diseases in plants, by inhibiting their growth or attacking them directly. It is antagonistic to various plant pathogens such as Gaeumannomyces graminis var. tritici (Ggt), Sclerotium rolfsii, and Sclerotium cepivorum.
It inhibits the growth of Ggt by releasing antimicrobial compounds. It colonizes the rhizosphere to interact with the roots of seedlings and plants, preventing S. rolfsii from causing damping-off of the seedlings before they can germinate. T. koningii antagonizes S. cepivorum by acting as a secondary colonizer of the infected plant roots and secreting enzymes that cause the degradation and lysis of the pathogen. Medicine Several studies have described the ability of T. koningii to produce enzymes that exhibit antifungal and antibacterial properties. Koninginins bear similar structural elements to flavonoids and vitamin E. They can inhibit the process of inflammation caused by snake bites. They can block the effects of myotoxins and the induction of edema because they can inhibit phospholipase A2, one of the proteins found in venoms. AgNPs produced using T. koningii are recognized as alternatives to antibiotics and are tools for gene delivery and drug delivery. They also show antagonism against pathogens such as the yeast Candida albicans and the Gram-negative bacterium Salmonella typhimurium. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Fungal pest control agents Root vegetable diseases Trichoderma Fungi described in 1902 Fungus species
Trichoderma koningii
[ "Biology" ]
1,932
[ "Fungi", "Fungus species", "Fungal pest control agents" ]
59,705,923
https://en.wikipedia.org/wiki/Emily%20Bernhardt
Emily S. Bernhardt is an American ecosystem ecologist, biogeochemist, and professor at Duke University. Bernhardt studies the effects of land use change, global change, and chemical pollution on aquatic and terrestrial ecosystems and is the co-author of an award-winning textbook on biogeochemistry. She also served as the president of the Society for Freshwater Science from 2016 to 2017. Education and early career Bernhardt received her Bachelor of Sciences degree in biology with a minor in chemistry from the University of North Carolina, Chapel Hill in 1996. Her love for nature, including hiking in the Appalachian Mountains, as well as many research experiences as an undergraduate (including an REU at the University of Michigan Biological Station), inspired her to become an ecologist. In her final year at UNC, Bernhardt was awarded an NSF Graduate Research Fellowship to pursue a PhD at Cornell University, co-advised by Cornell faculty member Bobbi Peckarsky and Institute of Ecosystem Studies director Gene Likens. Bernhardt conducted her dissertation research at the Hubbard Brook Experimental Forest in New Hampshire, USA, studying how headwater streams modify watershed nutrient export. Bernhardt also conducted research in Venezuela and Chile during her graduate career. While presenting a poster at the Ecological Society of America conference, Bernhardt met her future postdoctoral advisor, Bill Schlesinger, who was a professor at Duke University at the time and offered her a position on the spot. As a postdoc, Bernhardt continued to work on nitrogen cycling, this time focusing on the rooting zones of pine trees in poorly drained soils rather than on streams. She returned to working in aquatic systems as a postdoc in 2002, organizing the National River Restoration Science Synthesis under guidance from Margaret Palmer and Dave Allen, which resulted in a highly cited publication in the journal Science. As a postdoc in Palmer's lab, Bernhardt also organized the Ecological Society of America's "Visions" project, which identified future priorities for ecological sciences in the 21st century, stating that "Ecological knowledge can and must play a central role in helping achieve a world in which human populations exist within sustainable ecological systems". Career Bernhardt became a professor at Duke University in 2004 in the Department of Biology, and as of 2019 had mentored 15 graduate students and 11 postdocs while at Duke. Broadly, Bernhardt and her lab members research how ecosystems retain and transform elements and energy and how these ecosystem processes may be changing as the result of human activities. The ecosystems that Bernhardt studies include both aquatic and terrestrial systems, and her lab strives to make their research applicable to "political, legal and regulatory discussions about the protection and management of ecosystems". Stream ecosystem function Bernhardt began studying stream ecosystem function in graduate school, when she examined how headwater streams modify watershed nutrient export at Hubbard Brook Experimental Forest, and she has continued to work on questions related to stream ecosystem function throughout her career. Bernhardt and colleagues synthesized over 37,000 stream restoration projects across the US to identify the common elements of successful restoration projects, finding that, on average, more than one billion US dollars have been spent on stream restoration each year since 1990.
Most stream restoration projects are small in scale and cost (~$45k) but poorly reported; collectively, these small projects cost more and have a broader impact than the higher-cost projects, and Bernhardt and colleagues urged better efforts to collect and disseminate data on small restoration projects. Leveraging long-term datasets at Hubbard Brook and other sites, Bernhardt and her colleagues have studied the effects of climate change and whole-ecosystem experimental treatments on watershed nitrogen export. Bernhardt and colleagues have leveraged a network of in situ sensors and created a database for hosting open-access stream sensor datasets to address questions relating to stream ecosystem function. Their work has primarily focused on variation and patterns of stream metabolism across hundreds of U.S. streams, but they plan to expand to measuring and hosting data from streams globally. Mountaintop coal mining Funded by the National Science Foundation from 2014 to 2017 and by the Foundation for the Carolinas, Bernhardt and her colleagues have studied the impacts of mountaintop removal mining with valley fills (MTMVF) on stream ecosystems. Mountaintop removal mining uses explosives to remove up to 400 vertical feet of mountain to expose underlying coal seams for extraction; the excess rock is dumped into nearby valleys where headwater streams reside. It is estimated that nearly 1,800 miles of headwater streams have been buried by mountaintop mining since 1990. Bernhardt's research showed that the extent of surface mining in West Virginia catchments was highly correlated with stream sulfate concentrations and ionic strength, causing biological impairment when only 5.4% of a stream's contributing catchment is occupied by surface coal mines. In 2005, 22% of West Virginia's regional stream network length drained catchments with >5.4% of their surface area converted to mining operations. Bernhardt and colleagues have also shown that mountaintop removal mining can have significant impacts on terrestrial ecosystems; for example, they estimate that it would take around 5,000 years for a hectare of reclaimed, previously forested mine land to sequester the same amount of carbon that is released when the coal beneath it is extracted and burned. Bernhardt's lab has also used trace elements found in fish otoliths as biogenic tracers to track coal ash contamination in affected lakes, marking the first time that strontium isotope ratios have been used to track coal ash's impacts in living organisms. Bernhardt wrote an article for PBS in which she explained what clean coal is and some of the myths behind it, ending by urging that the label "clean energy" be used more sparingly. Scientific training and culture In addition to writing about scientific results, Bernhardt also writes about scientific career trajectories, academic training, science culture, and work–life balance in academic positions across many career stages. In an article in The Chronicle of Higher Education, Bernhardt and co-authors urge scientists to prioritize intellectual curiosity, societal impact, and creativity rather than focusing only on traditional academic success metrics (e.g. H-index). As president of the Society for Freshwater Science, Bernhardt wrote an essay titled "Being Kind" which was featured in the journal Nature.
In this essay, Bernhardt addresses two issues surrounding the Society for Freshwater Science 2017 annual meeting: 1) concerns about the meeting being held in North Carolina after the state passed the controversial Public Facilities Privacy & Security Act, and 2) reported incidents from Society for Freshwater Science members in which senior scientists said unpleasant or hurtful things to junior members at annual meetings. Bernhardt expresses her disgust at both issues and offers her thoughts on how to amend the culture within the Society for Freshwater Science, focusing on a quote that was popular on Twitter stating, "Everyone here is smart, distinguish yourself by being kind." Bernhardt goes on to reflect on specific instances in her career when her mentors and colleagues expressed kindness to her and how those acts impacted her graduate school experience and career trajectory. She encourages everyone to counteract implicit biases by being kind to everyone with whom we interact, ending the essay with an unofficial and aspirational motto for the 2017 SFS meeting of "Everyone here is smart and kind". Awards Member of the National Academy of Sciences (2023). Fellow of the American Geophysical Union (2022). Fellow of the Ecological Society of America (2018) Bass Society Fellow of Duke University (2017) Mercer Award for best paper by scientists under 40 by the Ecological Society of America (2015); Marcelo Ardón was lead author. Friedrich Wilhelm Bessel Award, Alexander von Humboldt Foundation (Germany) (2015) Leopold Leadership Fellow (2015) International IGB Fellowship in Freshwater Sciences, Leibniz-Institute for Freshwater Ecology and Inland Fisheries, Berlin, Germany (2014) Textbook Excellence Award (Texty) from the Text and Academic Authors Association for Biogeochemistry: An Analysis of Global Change, Third Edition (2014) Yentsch-Schindler Early Career Award, Association for the Sciences of Limnology and Oceanography (2013) Thomas Langford Lectureship, Duke University (2010) Outstanding Postdoctoral Mentor Award, Duke University Postdoctoral Association (2008) NSF CAREER Award for New Investigators (2005) Hynes Award for New Investigators from the Society for Freshwater Science (2004) Publications Books Biogeochemistry: An Analysis of Global Change, Third Edition Selected journal articles Bernhardt, E.S., et al. 2005. Synthesizing US river restoration efforts. Science 308: 636–637 Bernhardt, E.S. and Palmer, M.A., 2007. Restoring streams in an urbanizing world. Freshwater Biology, 52(4), pp. 738–751. Bernhardt, E.S., et al. 2007. Restoring rivers one reach at a time: results from a survey of US river restoration practitioners. Restoration Ecology, 15(3), pp. 482–493 Bernhardt, E.S. and Palmer, M.A., 2011. River restoration: the fuzzy logic of repairing reaches to reverse catchment scale degradation. Ecological Applications, 21(6), pp. 1926–1931. Personal life Bernhardt is married and has two children. References American ecologists American women ecologists American limnologists University of North Carolina alumni Duke University faculty Biogeochemists Cornell University alumni Living people Year of birth missing (living people) Women limnologists Presidents of the Society for Freshwater Science
Emily Bernhardt
[ "Chemistry" ]
1,950
[ "Geochemists", "Biogeochemistry", "Biogeochemists" ]
59,706,318
https://en.wikipedia.org/wiki/Explorable%20explanation
An explorable explanation (often shortened to explorable) is a form of informational media where an interactive computer simulation of a given concept is presented, along with some form of guidance (usually prose) that suggests ways that the audience can learn from the simulation. Explorable explanations encourage users to discover things about the concept for themselves, and to test their expectations of its behaviour against its actual behaviour, promoting a more active form of learning than reading or listening. Definition The term "explorable explanation" was first used in passing by Peter Brusilovsky in a 1994 paper, but did not enter into common use until 2011, when Bret Victor published an eponymous essay (the essay included an explorable explanation of a digital filter). Victor distinguishes explorable explanations from isolated interactive widgets and visualizations by the fact that they deliberately guide the attention of their audience towards particular phenomena within the simulation, and he characterizes the concept at greater length in the essay. Some of the ideas Victor espoused in the essay occurred to him while working with Al Gore on the app version of the 2009 book Our Choice. He had proposed that the app should contain interactive models, but this idea was rejected on the basis that all numerical values proposed regarding climate change needed to have a citation, and the interactive models would generate uncited numbers. The term has since also been characterized as being about learning through play. The related term "active essays" was used by Alan Kay to refer to text-based explorable explanations, and a major goal of Squeak (the precursor to Scratch) was to allow for the creation of them. A few video games may be considered explorable explanations. For example, SimCity uses a complex city simulation that is intended to present issues that appear in real-world urban planning. Many other games in the simulation genre have a similar intention, although with many it is not a necessity that the simulation be scientifically accurate. In the puzzle genre, games such as Incredipede also involve interacting with systems with the intention of learning. Video games may not involve explanatory text or narration. Educational video games overlap with explorable explanations: both involve a computer simulation that is visualized, and both have the intended goal that the audience learns something. However, in an educational video game, the simulation is not necessarily a simulation of the game's intended learning content. Instead, learning content in educational video games is usually put in a non-interactive form such as text or voiceover; the educational game then usually has some schedule whereby the audience alternates between seeing the text and, separately, playing a game, usually a game with mechanics from a standard genre, such as a platformer. Explorable explanations are also distinct from gamification, which has the stated intention of improving the structure of rewards in learning. An explorable explanation may or may not involve rewards, and most involve none. History Board games such as The Landlord's Game (the precursor to Monopoly) involve a simulation and so can be described as analogue precursors to explorable explanations. Many explorable explanations predate the popular use of the phrase.
For example, the PLATO system, a computer-assisted instructional system created in 1960, used interactive examples to teach concepts to students. In 1996, Mitchel Resnick created an explorable explanation of emergence using Conway's Game of Life as an example. The target audience for explorable explanations has historically been limited by available software distribution platforms (although some have been made for specific museums, without any intention of wider distribution, including some created by Karl Sims). Because explorable explanations had not previously been successfully monetized, distribution on physical media such as CD-ROMs was not viable. Since the 2000s, explorable explanations have become more common, because of widespread internet access and increased computer graphics possibilities within web browsers, for example via SVG, WebGL, and the HTML5 canvas API. This allows complex simulations to be accessed instantly and shared on social media. Wikipedia has some examples of basic explorable explanations. Subject matter The most prevalent examples of explorable explanations concern topics within mathematics or computer science. There are numerous explanations of concepts within statistics and machine learning as well as of specific algorithms. Explorable explanations have a bias towards focusing on these topics, and when the subject matter comes from disciplines of empirical science, there is a tendency to focus on quantitative models from within the discipline. This is true even in the case of explorable explanations about disciplines where quantitative models are less common, such as social science. The bias is due to the fact that explorable explanations involve a programmed simulation, which is required to follow a consistent mathematical model or formal system. Jonathan Blow has argued that this requirement forces subject matter to be dealt with more rigorously than in other mediums such as speculative fiction. Additionally, since the simulation requires a visualization, there is a certain bias towards subject matter close to geometry. For example, there are at least three explorable explanations about special relativity, including A Slower Speed of Light. Use in media Explorable explanations are increasingly being created by journalists, sometimes by organisations that formerly focused on print news media and radio. In 2015, FiveThirtyEight collaborated with The Marshall Project to produce an article on prison parole assessment that included an explorable explanation of the effects of policy changes on prison populations. The article was cited by the Columbia Journalism Review as an example of how explorable explanations could be used to advance digital storytelling. Newsgames may be considered explorable explanations. Other newsrooms such as Bloomberg Businessweek, The New York Times, and The Guardian are also notable for their use of explorable explanations to tell stories, for example covering topics like climate change, drug overdoses, and economics. FiveThirtyEight has also used explorable explanations to cover topics such as gun violence and p-hacking. Structure Explorable explanations can differ widely in the kind of "guidance" that they give regarding how to interact with and think about their simulations.
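As a concrete, if deliberately minimal, illustration of the simulation-plus-guidance pairing that defines the form, the sketch below couples a tiny simulation with a line of prose guidance; the choice of a logistic-map model and every name in the code are illustrative assumptions, not drawn from any published explorable:

```python
# Minimal text-only sketch of an explorable explanation: a simulation
# (the logistic map) plus prose guidance suggesting what to explore.
def simulate(r, steps=20, x=0.5):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    trajectory = []
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(round(x, 3))
    return trajectory

GUIDANCE = ("Try r = 1.5 (the population settles), r = 3.2 (it oscillates), "
            "and r = 3.9 (it turns chaotic). Predict the behaviour before each run.")

if __name__ == "__main__":
    print(GUIDANCE)
    while True:
        raw = input("growth rate r (blank to quit): ").strip()
        if not raw:
            break
        print(simulate(float(raw)))
```

A published explorable would replace the text prompt with sliders and live graphics, but the division of labor is the same: the simulation supplies behaviour, and the guidance directs the audience's attention and invites prediction.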
In some cases, guidance is intended to come from teachers in a school setting; this is the approach advocated for the PhET Interactive Simulations created by Carl Wieman, which have been found to be an effective complement to traditional chalk-and-talk lessons. Most explorable explanations provide guidance using prose. This is the approach used in several explorable explanation creation platforms, including Observable, created by Mike Bostock. Some others use voice-over narration. See also Newsgame PhET Interactive Simulations Persuasive Games References External links http://explorabl.es, a website compiling many examples https://distill.pub/, an online peer-reviewed journal based on explorable explanations https://beta.observablehq.com, a creation and sharing platform for explorable explanations with a notebook interface https://minutelabs.io/, a set of explorable explanations connected with the YouTube channel MinutePhysics https://github.com/stared/science-based-games-list, a collaborative list of science-based games in physics, chemistry, biology, computer science, health, mathematics, sociology, economics, and humanities New media Educational programs
Explorable explanation
[ "Technology" ]
1,533
[ "Multimedia", "New media" ]
59,706,524
https://en.wikipedia.org/wiki/NGC%203640
NGC 3640 is an elliptical galaxy located in the constellation Leo. It is located at a distance of circa 75 million light years from Earth, which, given its apparent dimensions, means that NGC 3640 is about 90,000 light years across. It was discovered by William Herschel on February 23, 1784. It is a member of the NGC 3640 Group of galaxies, which is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster. It lies 2 degrees south of Sigma Leonis and is a member of the Herschel 400 Catalogue. It is condensed and can be spotted with a small telescope from suburban skies. Characteristics NGC 3640 is an elliptical galaxy with a highly disturbed stellar component. The galaxy features boxy isophotes and patchy shell-like features. These features indicate a recent merger with a smaller gas-poor galaxy. A dust lane is observed along the minor axis, spanning 30 arcseconds in a north-south direction. The galaxy has a high rotational velocity, estimated to be 120 ± 10 km/s, higher than that of other elliptical galaxies of similar luminosity. The HI mass of the galaxy is estimated to be and the mass of HII less than . In the centre of NGC 3640 lies a supermassive black hole whose mass is estimated to be roughly 100 million solar masses (10^(7.99 ± 0.39) M☉) based on the Sérsic profile. Nearby galaxies NGC 3640 is the foremost galaxy in a galaxy group known as the NGC 3640 group. Other members of the group include NGC 3630, NGC 3641, NGC 3643, and NGC 3664. NGC 3640 forms a pair with NGC 3641, which lies 2.5 arcminutes south of NGC 3640. The group belongs to the Leo II groups, a large collection of galaxies belonging to the Virgo Supercluster scattered across 30 million light years of space west of the Virgo Cluster. See also NGC 2865 and NGC 5018 - two other elliptical galaxies with disturbed morphology References External links NGC 3640 on SIMBAD Elliptical galaxies Shell galaxies Leo (constellation) Astronomical objects discovered in 1784 Discoveries by William Herschel
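The size quoted above follows from the small-angle relation between distance and apparent diameter; in the sketch below the apparent diameter of roughly 4 arcminutes is an assumed illustrative value (the article states only the distance and the resulting linear size):

```python
# Small-angle check of the ~90,000 light-year size quoted above.
# The ~4 arcminute apparent diameter is an assumed value for illustration.
import math

distance_ly = 75e6                       # distance from the article
apparent_diameter_arcmin = 4.0           # assumed
theta_rad = math.radians(apparent_diameter_arcmin / 60)
print(f"linear size ~ {distance_ly * theta_rad:,.0f} light years")  # ~87,000
```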
NGC 3640
[ "Astronomy" ]
465
[ "Leo (constellation)", "Constellations" ]
59,708,753
https://en.wikipedia.org/wiki/Statice%20limonium
Statice limonium may refer to: Statice limonium Bigelow Statice limonium Cav. ex Willk. & Lange Statice limonium L., accepted as Limonium vulgare Mill. Statice limonium Pall. Statice limonium Thunb. Statice limonium var. californica (Boiss. ex DC.) A.Gray, accepted as Limonium californicum (Boiss.) A.Heller Statice limonium var. caroliniana (Walter) A.Gray, accepted as Limonium carolinianum (Walter) Britton References
Statice limonium
[ "Biology" ]
127
[ "Set index articles on plants", "Set index articles on organisms", "Plants" ]
59,709,418
https://en.wikipedia.org/wiki/Q-PACE
CubeSat Particle Aggregation and Collision Experiment (Q-PACE), or Cu-PACE, was an orbital spacecraft mission that would have studied the early stages of proto-planetary accretion by observing dynamical particle aggregation for several years. Current hypotheses have trouble explaining how particles can grow larger than a few centimeters. This is called the meter size barrier. This mission was selected in 2015 as part of NASA's ELaNa program, and it was launched on 17 January 2021. As of March 2021, however, contact had yet to be established with the satellite, and the mission was feared lost. The mission was eventually terminated. Overview Q-PACE was led by Joshua Colwell at the University of Central Florida and was selected through NASA's CubeSat Launch Initiative (CSLI), which placed it on Educational Launch of Nanosatellites (ELaNa) XX. The development of the mission was funded through NASA's Small Innovative Missions for Planetary Exploration (SIMPLEx) program. Observations of the collisional evolution and accretion of particles in a microgravity environment are necessary to elucidate the processes that lead to the formation of planetesimals (the building blocks of planets; bodies a kilometer across and larger) within the protoplanetary disk. The current hypotheses of planetesimal formation have difficulties in explaining how particles grow beyond one centimeter in size, so repeated experimentation in relevant conditions is necessary. Q-PACE was to explore the fundamental properties of low-velocity particle collisions in a microgravity environment in an effort to better understand accretion in the protoplanetary disk. Several precursor tests and flight missions were performed on suborbital flights as well as aboard the International Space Station. The small spacecraft did not need accurate pointing or propulsion, which simplified the design. On 17 January 2021, Q-PACE launched on a Virgin Orbit LauncherOne, an air-launched rocket that was dropped from the Cosmic Girl carrier aircraft over the Pacific Ocean. As of March 2021, however, contact was never established with the satellite after it reached orbit, and the spacecraft was declared lost and the mission ended. Objectives The main objective of Q-PACE was to understand protoplanetary growth from pebbles to boulders by performing long-duration microgravity collision experiments. The specific goals were: Quantify the energy damping in multi-particle systems at low collision speeds. Identify the influence of a size distribution on the collision outcome. Observe the influence of dust on a multi-particle system. Quantify statistically rare events: observe a large number of similar collisions to arrive at a probabilistic description of collisional outcomes. Method Q-PACE was a 3U CubeSat with a collision test chamber and several particle reservoirs containing meteoritic chondrules, dust particles, dust aggregates, and larger spherical particles. Particles were to be introduced into the test chamber for a series of separate experimental runs. The scientists designed a series of experiments involving a broad range of particle sizes, densities, surface properties, and collision velocities to observe collisional outcomes from bouncing to sticking, as well as aggregate disruption, in tens of thousands of collisions. The test chamber was to be mechanically agitated to induce collisions that would be recorded by on-board video for downlink and analysis.
Long-duration microgravity allows a very large number of collisions to be studied, producing statistically significant data. References Astrophysics Celestial mechanics Solar System dynamic theories Spacecraft launched in 2021 CubeSats Nanosatellites
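A toy model of the "energy damping" objective listed above: if each bounce scaled relative speed by a constant coefficient of restitution e, kinetic energy would decay geometrically. Every number below is an illustrative assumption, not mission data:

```python
# Toy model of collisional energy damping with a constant coefficient
# of restitution e: speed scales by e per bounce, kinetic energy by e**2.
e = 0.8        # assumed coefficient of restitution
v0 = 0.05      # assumed initial relative speed, m/s (5 cm/s)

v = v0
for n in range(6):
    print(f"collision {n}: v = {v * 100:5.2f} cm/s, "
          f"KE fraction = {(v / v0) ** 2:.3f}")
    v *= e
```

Real aggregates have velocity- and size-dependent restitution, which is precisely why the mission planned to record tens of thousands of collisions rather than assume a constant e.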
Q-PACE
[ "Physics", "Astronomy" ]
714
[ "Celestial mechanics", "Astronomical sub-disciplines", "Classical mechanics", "Astrophysics" ]
59,709,536
https://en.wikipedia.org/wiki/Effects%20bargaining
Effects bargaining is a type of bargaining over decisions that are within management's right to make but that have an impact on mandatory subjects of bargaining. Such decisions commonly include business moves like laying off or transferring employees; bargaining over the impacts, or effects, of these decisions is called effects bargaining. For example, a contract may give an employer the ability to integrate new technology; however, if the new technology will have a significant impact on employment, the employer is required to give the union advance notice to allow bargaining on the effects before the technology is put in place. References Bargaining theory
Effects bargaining
[ "Mathematics" ]
117
[ "Game theory", "Bargaining theory" ]
59,710,504
https://en.wikipedia.org/wiki/MicroDragon
MicroDragon was a satellite built by the Vietnam National Satellite Center (VNSC). Satellite The MicroDragon satellite was the product of the "Preventing natural disasters and climate change using Earth observation satellite" project. It was manufactured by 36 Vietnamese engineers. It cost 600 million USD. MicroDragon's weight and dimension were and , respectively. The satellite was successfully launched on 18 January 2019, along with six Japanese satellites. The satellite decayed on 1 October 2024. See also PicoDragon NanoDragon References Satellites of Vietnam Spacecraft launched in 2019 2019 in Vietnam Earth observation satellites
MicroDragon
[ "Astronomy" ]
126
[ "Astronomy stubs", "Spacecraft stubs" ]
59,710,588
https://en.wikipedia.org/wiki/Tlahuelilpan%20pipeline%20explosion
On 18 January 2019, a pipeline transporting gasoline exploded in the town of Tlahuelilpan, in the Mexican state of Hidalgo. The blast killed at least 137 people and injured dozens more. Mexican authorities blamed fuel thieves, who had illegally tapped the pipeline. The explosion was particularly deadly because large crowds of people had gathered at the scene to steal fuel. Security forces tried to persuade people to move away from the scene, but they were outnumbered and asked not to engage with civilians for fear of causing a violent confrontation. The leak was reported at 17:04 CST (23:04 UTC), and the explosion occurred two hours later at 19:10. It took about four hours for responders to extinguish the fire. Background Fuel theft from pipelines owned by Pemex, the state oil company, has been a long-term problem in Mexico. The problem worsened in the 2010s as organized crime groups in Mexico made gasoline theft one of their main streams of revenue. As fuel prices soared internationally, this criminal activity became a lucrative business for thieves. Gasoline theft crime groups used bribery and violence to corrupt government officials. Investigators suspect that several officials within Pemex are involved in facilitating the operations of these criminal groups. Complicity includes activities such as employees sharing the exact time when the fuel will flow through the pipelines, the maps of the pipelines, and how to successfully perforate them. Illegally extracting, possessing, or safeguarding petrochemicals from pipelines, vehicles, equipment, or installations is a federal crime in Mexico and is punishable with up to 20 years in prison. The gasoline thieves are known in Mexico as huachicoleros, a name derived from the slang term huachicol, or poor-quality alcohol. The gasoline they steal is often sold in public at a discounted price. These groups have gained support from impoverished communities because they provide low-cost gasoline and offer some locals employment as fuel carriers and lookouts. Their supply of illegal fuel is believed to sustain entire communities in some parts of the states of Veracruz, Puebla and Hidalgo, where the explosion occurred. By mid-2018, the rate of pipeline perforations had risen considerably to slightly over 40 perforations per day, compared to 28 perforations a day in 2017. In the first 10 months of 2018, 12,581 illegal perforations were reported across pipelines in Mexico. The states in Mexico with the highest numbers of illegal perforations reported between 2016 and 2018 were Hidalgo, Puebla, Guanajuato, Jalisco, Veracruz, State of Mexico, and Tamaulipas. In 2018, Hidalgo was the state with the most illegal perforations reported, with 2,121. In Tlahuelilpan alone, at least 70 illegal perforations were reported from 2016 to the day of the explosion. However, Tula headed the count in the state of Hidalgo with 500 perforations reported in 2018 alone. As a result of the increase in fuel theft, the federal government has spent approximately US$3 billion a year on pipeline repairs and maintenance, as well as compensation to buyers for whom the product was intended. When President Andrés Manuel López Obrador took office in December 2018, he launched a campaign against gasoline theft gangs, and dispatched close to 5,000 troops from the Mexican Armed Forces and the Federal Police to guard pipelines across Mexico.
Part of his strategy was to divert the flow of fuel from pipelines, detect leaks when they occurred, and transport the fuel by trucks. Most of the thieves operate in remote areas and drill into the pipelines during the night to avoid detection. These measures were intended to stop the thieves from illegally tapping fuel pipes. When implemented, however, the measures led to logistical problems, resulting in fuel shortages and long lines at gas stations nationwide in January 2019. Explosion Tlahuelilpan is crossed by one of the country's main fuel pipelines, connecting the port of Tuxpan, Veracruz, with the Pemex complex at Tula, Hidalgo, to the southwest of the town. Reports of residents collecting what appeared to be fuel in the San Primitivo district of the town started circulating on social media during the afternoon of 18 January 2019. A video shot earlier and later uploaded to YouTube showed local residents with buckets, jerrycans and water bottles swarming around a large petrol fountain coming from a rupture in the pipeline; witnesses interviewed later on local television spoke of a crowd numbering in the hundreds, "or perhaps even a thousand". Other accounts stated that some residents, including children, were jokingly throwing gasoline at each other and horseplaying near the breach zone. A 911 call to local police reported the leak at 17:04 hours, and the explosion occurred at 19:10 hours. When first informed of the leak, Pemex did not close the valve because it did not consider the leak "important". It took four hours to extinguish the explosion's fire. Residents from the surrounding areas were evacuated. It is believed that the fountain filled the air with fuel vapor, which later ignited in a huge fireball that consumed the surrounding fields which had been soaked with fuel. The pipeline at the rupture point was estimated to carry around 10,000 barrels of gasoline. The exact cause of the fire that ignited the spill is not yet known. Investigators' first hypothesis was that the explosion may have been set off by static electricity sparks, generated by the friction of people's synthetic clothing, igniting the vapors produced by the leak. The explosion was particularly deadly because the free gasoline attracted large numbers of people to the breach zone. Residents also stated that the shortage of gasoline in the area may have prompted many to attend the scene when they heard of the breach. There were about 25 troops at the site, but Secretary of Defense Luis Cresencio Sandoval stated that there were not enough personnel to turn back the 600–800 residents who had reached the site in search of fuel. He stated that the Army tried to persuade villagers not to go near the breach zone, and that some became aggressive when asked to turn back. None of the residents were carrying firearms, but some were armed with sticks and stones. The soldiers did not intervene with civilians as they were outnumbered. They were also asked by their leadership not to engage with potential fuel thieves at the scene when residents started flocking to the breach, because they feared a shootout would break out and unarmed civilians would be hurt or that soldiers would be killed by an angry mob. Reactions The military's DN-III Plan for civilian assistance and disaster relief was activated on the evening of 18 January. Several of the wounded victims were transported via helicopter to hospitals in Hidalgo, State of Mexico, and Mexico City.
Some of the minors were expected to be sent to Shriners Hospitals for Children in the U.S. state of Texas. Hidalgo Governor Omar Fayad said that the government would take care of the medical and funeral expenses. He stated that the government would not pursue legal action against the civilians who were taking the stolen gasoline from the pipeline that exploded. Fayad confirmed that the government had information booths at the cultural center in Tlahuelilpan with the lists of the victims and the hospitals where they were to receive treatment. The list was also available on the government's website. Investigators helped recover the corpses and ordered the bodies to be sent to morgues in Tula and Mixquiahuala for their official identification. They stated that the bodies would take several months to be fully identified. Because many of the bodies had suffered severe burns, visual identification was deemed nearly impossible. Families were asked to identify the bodies by the belongings found at the scene. Mexican authorities asked the victims' family members to complete a DNA test to assist in the identification process. Investigators stated they would likely take the remains to institutions in the U.S. or Europe to facilitate the process. The victims' family members, however, confronted the investigators at the breach scene and asked them not to take the bodies. They criticized the investigators for wanting to take the bodies to morgues, accusing them of doing so for monetary gain. The families stated that traveling outside of Tlahuelilpan to recover the bodies would be difficult for them due to the gasoline shortage and the expense involved. President López Obrador canceled meetings he had in Jalisco and Guanajuato upon hearing about the explosion. He visited Tlahuelilpan on the morning of 19 January, to oversee relief operations. He also promised to step up efforts to counter the cartels running fuel theft operations. Part of his promise included a commitment to continue to crack down on petrochemical theft groups, as well as finding alternatives for citizens so that they do not depend on illegal fuel. He asked the residents of Tlahuelilpan to give their testimonies of the events, and also to provide information to law enforcement on the black market in the region, including the names of those involved in the gasoline theft gangs and details of their operations. Pemex chief executive Octavio Romero Oropeza stated that pipelines in the central Mexico area had been subject to at least 10 breaches in the past three months. One of those perforations caused a fire on 12 December 2018 that took 12 hours to extinguish. Following these incidents, the pipeline was put under repair on 23 December and remained suspended until 15 January, the official confirmed. During its suspension, Romero Oropeza said that the pipeline was breached four times. It resumed operations on 17 January, a day before the explosion. The mayor of Tlahuelilpan, Juan Pedro Cruz Frías, asked residents to take precautionary measures after toxic smog lingered in the town's air following the explosion. When asked about his thoughts on the incident, the mayor stated that the victims acted out of "necessity" when they learned about the fuel leak. He stated that fuel leaks were common in the area, but also said that "irresponsibility" played a role in this incident. Investigation Attorney General Alejandro Gertz Manero said there were indications that the incident was "deliberate".
His position rested on the fact that the first incident, the leak, was intentional; however, it had yet to be confirmed whether or not the explosion itself was deliberate. He stated that the investigation would be difficult because those involved in the incident died during the explosion. President López Obrador stated that all possibilities were being considered for the investigation, and did not discard the involvement of major criminal groups that operate in Hidalgo, like Los Zetas or the Jalisco New Generation Cartel, as well as local gasoline thieves not involved with major drug cartels. The first hypothesis proposed by Gertz Manero's investigatory team was that the fire may have been caused by a static electricity spark, produced by the friction of people's synthetic clothing, igniting the gases released by the leak. The gasoline that was tapped was a premium grade with high octane levels, which produces lethal gases when exposed to the air. He stated that this hypothesis was not conclusive. To facilitate their investigations, the authorities stated they would step up legal proceedings to seize property involved in fuel thefts, including the land where the explosion occurred. The owners of the land said this measure was unfair and criticized the government's inability to stop fuel thieves. On 21 January, Mexico's National Human Rights Commission (CNDH) received a complaint about the "inaction" of the Army during the events leading to the explosion. President López Obrador did not give the press more details as to what the complaint consisted of and only confirmed it was issued. The CNDH confirmed the request and stated that it would question several personnel who were present during the explosion to gather more details on what exactly occurred. It clarified that this investigation did not mean the Army was at fault, and that such a determination would be made only after the investigation concluded and more details on the causes of the explosion became known. Aftermath On 8 May 2019, a truck carrying illegal gasoline exploded in Reforma, Chiapas. Mayor Herminio Valdez Castillo said the explosion occurred in an uninhabited area and there were no victims. In January 2020, the Secretariat of the Interior (SEGOB) announced it would build a memorial for the 137 victims of the Tlahuelilpan explosion. Each family had earlier been compensated with MXN 15,000 (US$800). See also 1992 Guadalajara explosions 2010 Puebla oil pipeline explosion List of 21st-century explosions References 2019 disasters in Mexico Deaths caused by petroleum looting Explosions in 2019 Gas explosions in Mexico History of Hidalgo (state) January 2019 events in Mexico Organized crime events in Mexico Pipeline accidents Crime in Hidalgo (state) 2019 fires 2010s fires in North America Dust explosions 2019 crimes in Mexico January 2019 crimes in North America Looting in North America Presidency of Andrés Manuel López Obrador
Tlahuelilpan pipeline explosion
[ "Chemistry" ]
2,630
[ "Dust explosions", "Explosions" ]
53,525,324
https://en.wikipedia.org/wiki/United%20States%20herbicidal%20warfare%20research
Herbicidal warfare research conducted by the U.S. military began during the Second World War, with additional research during the Korean War. Interest among military strategists waned before a budgetary increase allowed further research during the early Vietnam War. The U.S. research culminated in the U.S. military defoliation program during the Vietnam War known as Operation Ranch Hand. World War II The use of chemical or biological agents to destroy Japan's rice crop was contemplated by the Allies during World War II. In 1945 Japan's rice crop was severely affected by rice blast disease. The outbreak, as well as another in Germany's potato crop, coincided with covert Allied research in these areas. The timing of these outbreaks generated persistent speculation of some connection between the events; however, the rumors were never proven, and the outbreaks could have been naturally occurring. A U.S. War Department report notes that "in addition to the results of human experimentation much data is available from the Japanese experiments on animals and food crops". Cold War In the mid-1950s, the Chemical Corps continued the search for anti-crop agents in order to destroy the food and economic crops of enemy nations in wartime, as part of the secret programs started during World War II. Several chemical and biological anti-crop agents were standardized. Former Professor Emeritus of Forage Crops Ecology at the University of Tennessee's Department of Plant Sciences, Dr. Henry Fribourg, was an Army scientist in the mid-1950s who helped develop the most efficient dispersal techniques for anti-crop fungus spores and herbicides in labs at Ft. Detrick and in field tests in South Dakota, Texas, and Florida. Dr. Fribourg said, "The idea in those days was that the enemy's crops could be killed and this would be a much more humane way of winning a war than using atomic bombs." However, in 1957 the Army found it had no funds to carry on the anti-crop research, and the program nearly halted. Even though the anti-crop program had been phased out, the Chemical Corps continued to produce the new agent under an Industrial Preparedness Measure that permitted laboratory production of the agent to increase to a limited production capability, testing of the adequacy of the agent against varieties of rice found in Asia, and determination of the effectiveness of the agent by means of large-scale field tests. Anti-crop herbicides It was also found that certain phenoxyacetic acids were effective at reducing the yield of crops. Olin Mathieson Chemical Corporation produced esters of these compounds for Fort Detrick's test program. By 1958 the Army had adopted the chemical butyl 2-chloro-4-fluorophenoxyacetate, or Agent LNF (also 4-fluorophenoxyacetic acid, or simply "KF"), as a standard chemical agent effective against rice crops. Both Agent LNA (Agent GREEN), or 2,4-D, and Agent LNB (Agent PINK), or 2,4,5-T, had also been standardized by the Army as anti-crop agents. In 1963 the two agents LNA and LNB were combined to make a new anti-crop agent called LNX, also known as Herbicide ORANGE. Defoliants During the Second World War, limited test use of aerial spray delivery systems was employed only on several Japanese-controlled tropical islands, to demarcate points for navigation and to kill dense island foliage. Despite the availability of the spray equipment, herbicide application with aerial chemical delivery systems was not systematically implemented in the Pacific theater during the war.
In addition to work done on anti-crop agents, the screening program for chemical defoliants was greatly accelerated in the 1960s. By FY 1962 contracts for synthesis and testing of a thousand chemical defoliants were in the process of negotiation. Approximately 1600 compounds had been examined since July 1961, with the results entered in a Remington-Rand computer system. Of these 1600 compounds, 100 showed defoliant activity and 300 exhibited herbicidal effects in the primary defoliation process. Rice blast Sufficient work had been done on Pyricularia oryzae to warrant including the organism in the BW arsenal. In March 1958 P. oryzae was classified as a standard anti-crop BW agent, then known as anti-crop Agent LX. During this time period, rice blast spores were produced under contract to Charles Pfizer and Company and shipped to Fort Detrick for classification, drying and storage. Agricultural BW doctrine was re-developed by the Air Force and Army during the early 1960s. At the outset of FY 1962 there was an important increase in emphasis in this field, driven by requests for technical advice on the conduct of defoliation and anti-crop activities in Southeast Asia. Both field tests and process research were maintained for the agent of rice blast disease. P. oryzae is a parasitic, spindle-shaped fungus of rice causing the destructive plant disease known as rice blast. "Rice blast disease causes lesions to form on the plant, threatening the crop, and the fungus is estimated to destroy enough rice to feed 60 million people a year." A number of strains were known to research scientists, and the U.S. Army planned to use a mixture of races as the new agent. During 1960 research on anti-crop agents proceeded at the pace dictated by the limited resources available. Field tests for stem rust of wheat and rice blast disease were begun in several states in the Midwestern and Southern U.S. and in Okinawa, with partial success and the accumulation of useful data. The research on rice blast fungus gained from the field and laboratory experiments conducted in Okinawa and Florida by Fort Detrick's Crops division, Directorate of Biological Research and Biomathematics division, and Directorate of Technical Services increased the knowledge required to use this crop disease as a strategic weapon of war and limit an enemy's food supply. The focus of this research was sources of inoculum and the minimum amount required to cause the disease, spore dispersal, meteorological and other conditions required for establishment of infection and disease buildup, spread, yield reduction, control measures, and the ability to predict disease outbreak, buildup, and yield losses. U.S. documents reveal that between 1961 and 1962 the testing of the militarized rice blast agent on Okinawa was conducted over a dozen times. Rice blast fungus was disseminated on rice paddies to determine how the agent affected rice crop production. The Okinawa project test sites included Nago and Shuri, and the project was directly associated with similar research at the Avon Park Air Force Range near Sebring, Florida, and in Texas and Louisiana. Documents reveal that during the biological field testing for rice blast, the U.S. Army "used a midget duster to release inoculum alongside fields in Okinawa and Taiwan" and took measurements regarding the effectiveness of the agent against the rice crop.
In 1962 international research on the pathogenic races of rice blast disease was taken up as a three-year project beginning in 1963, conducted jointly by the governments of the U.S. and Japan. A new investigation to find a pathogen suitable for use against the opium poppy began in the third quarter of financial year 1962. Wheat stem rust Stripe rust of wheat was also under investigation, and the usual screening program for chemical anti-crop agents was continued. A gradual increase in the scope of the rest of the anti-crop program accompanied this development. Large-scale greenhouse experiments on stripe rust of wheat yielded considerable information on the degree of crop injury in relation to the time and number of inoculations. Rocky Mountain Arsenal, from January 1962 to October 1969, "grew, purified and militarized" the plant pathogen wheat stem rust (Puccinia graminis, var. tritici), known as Agent TX, for the Air Force biological anti-crop program. Agent TX-treated grain was grown at Edgewood Arsenal and, from 1962 to 1968, in Sections 23–26 at Rocky Mountain Arsenal. The unprocessed agent was transported to Beale AFB in refrigerated trucks for purification and storage and was kept refrigerated until loaded into specialized bombs adapted from the leaflet bombs used to deliver propaganda. The M115 anti-crop bomb, also known as the feather bomb or the E73 bomb, was a U.S. biological cluster bomb designed to deliver wheat stem rust. The deployment of the M115 represented the United States' first, though limited, anti-crop biological warfare (BW) capability. By the mid-1970s the Central Intelligence Agency (CIA) acknowledged that it had developed and field-tested methods for conducting covert attacks that could cause severe crop damage. Rain-making component Operation Pop Eye / Motorpool / Intermediary-Compatriot was a highly classified weather modification program in Southeast Asia during 1967–1972 that was developed from research conducted on Okinawa and at other locations. A report titled Rainmaking in SEASIA outlines the use of lead iodide and silver iodide deployed by aircraft in a program that was developed at Naval Air Weapons Station China Lake in California and tested in Okinawa, Guam, the Philippines, Texas, and Florida in a hurricane study program called Project Stormfury. The chemical weather modification program conducted from Thailand over Cambodia, Laos, and Vietnam was allegedly sponsored by Secretary of State Henry Kissinger and the CIA without the authorization of Secretary of Defense Melvin Laird, who had categorically denied to Congress that a program for modification of the weather for use as a tactical weapon even existed. The program was used to induce rain and extend the East Asian monsoon season in support of U.S. government efforts related to the war in Southeast Asia. The use of a military weather control program was related to the destruction of enemy food crops. Whether the weather modification program was related to any of the CBW programs is not documented. However, it is certain that some of the military herbicides used in Vietnam required rainfall to be absorbed. In theory, any CBW program using mosquitoes or fungus would also have benefited from prolonged periods of rain. Rice blast sporulation on diseased leaves occurs when relative humidity approaches 100%. Laboratory measurements indicate sporulation increases with the length of time 100% relative humidity prevails.
References Biological warfare Chemical warfare Herbicides Weather modification Chemical weapons of the United States
United States herbicidal warfare research
[ "Chemistry", "Engineering", "Biology" ]
2,094
[ "Planetary engineering", "Herbicides", "Biological warfare", "Weather modification", "nan", "Biocides" ]
53,527,471
https://en.wikipedia.org/wiki/Hangprinter
Hangprinter is an open-source fused deposition modeling delta 3D printer notable for its unique frameless design. It was created by Torbjørn Ludvigsen, who is based in Sweden. The Hangprinter uses relatively low-cost parts and can be constructed for around US$250. The printer is part of the RepRap project, meaning many of the parts of the printer can be produced on the printer itself (making it partially self-replicating). The design files for the printer are available on GitHub for download, modification and redistribution. Versions Version 0 The Hangprinter v0, also called the Slideprinter, is a 2D plotter. It was designed solely to test whether a 3D version could realistically be created. Version 1 The Hangprinter v1 uses counterweights to stay elevated. Version 2 All parts of the Hangprinter Version 2 are contained within a single unit that uses cables to suspend the printer within a room, allowing it to create extremely large objects over 4 meters tall. Version 3 Version 3 of the Hangprinter has the motors and gears attached to the ceiling, making the carriage lighter. Version 4 Version 4 includes upgrades over version 3, including flex compensation, better calibration and automatic homing. (A sketch of the basic cable-length geometry underlying these designs is given at the end of this article.) Fused Particle Fabrication/Fused Granular Fabrication Hangprinters To enable 3D printers to economically use recycled plastic feedstocks for distributed recycling and additive manufacturing (DRAM), several types of fused granular fabrication (FGF)/fused particle fabrication (FPF) based 3D printers have been designed and released with open-source licenses. First, a large-scale printer was demonstrated with a GigabotX extruder based on the open-source cable-driven hangprinter concept. Then detailed plans using recyclebot auger techniques were released in HardwareX to build such a printer for under $1700. This approach would further reduce the cost of using hangprinters to make large-scale products, as the cost of recycled shredded plastic is ~$1–5/kg while filament is generally around $20/kg. Makers that have built open-source granulators or have access to other types of waste plastic shredders (e.g. from Precious Plastic) can generate feedstock for hanging waste printers for under $1/kg, which makes large-scale production with a hangprinter competitive with conventional manufacturing processes. Patent dispute In 2022, a patent describing the “Sky Big Area Additive Manufacturing” (SkyBAAM) system was granted to UT-Battelle, LLC, a nonprofit corporation that operates the Oak Ridge National Laboratory (ORNL). The patent describes core features already present in Hangprinter, causing controversy in the open-source community. The RepRap project established a GoFundMe campaign to cover the legal costs of its action to challenge the patent. In May 2023 it was announced that the US Patent Office had rejected the wide claims of the SkyBAAM patent and would be settling on a much narrower patent instead. Per a post on Torbjørn Ludvigsen's blog: "They largely agreed with our analysis. They rejected all the patent's original claims. They accepted a narrower version of them." Per the interpretation provided in that post, the narrower patent would only cover designs that include every one of the described details, rather than designs with any of the described details.
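The following is a minimal sketch of the inverse kinematics that a cable-driven design of this kind implies: each target position of the print head maps to a set of cable lengths, one per anchor. The anchor coordinates below are invented for illustration (a real build measures them during calibration), and the sketch deliberately ignores the line-flex and spool-buildup effects that version 4's flex compensation addresses.

```python
import math

# Hypothetical anchor positions in millimeters (x, y, z), loosely
# following the v3 layout: three low anchors around the print area
# plus one anchor high above it. Real coordinates come from calibration.
ANCHORS = {
    "A": (0.0, -2000.0, -100.0),
    "B": (1732.0, 1000.0, -100.0),
    "C": (-1732.0, 1000.0, -100.0),
    "D": (0.0, 0.0, 3000.0),
}

def cable_lengths(effector):
    """Straight-line cable length from each anchor to the effector
    position -- the core of the inverse kinematics."""
    return {
        name: math.dist(anchor, effector)
        for name, anchor in ANCHORS.items()
    }

def moves_for(origin, target):
    """Cable-length deltas needed to move the effector between two
    points; a positive delta means that spool must pay out line."""
    start, end = cable_lengths(origin), cable_lengths(target)
    return {name: end[name] - start[name] for name in ANCHORS}

if __name__ == "__main__":
    # Example: move the effector 100 mm straight up from the origin.
    print(moves_for((0.0, 0.0, 0.0), (0.0, 0.0, 100.0)))
```

In practice the firmware performs this computation continuously, converting each cable-length delta into motor steps for the corresponding spool.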
External links Development The Hangprinter Project's Webpage Torbjørn Ludvigsen's RepRap blog Hangprinter on Bountysource Hangprinter Facebook page Repositories Hangprinter files on GitHub Hangprinter on RepRap wiki Videos Hangprinter presentation, Torbjørn Ludvigsen The Hangprinter #3DMeetup, Thomas Sanladerer, YouTube RepRap HangPrinter Workshop at E3D with Torbjørn Ludvigsen, Richard Horne (RichRap), YouTube References 3D printers Delta robots RepRap project
Hangprinter
[ "Engineering", "Biology" ]
827
[ "RepRap project", "Self-replication", "3D printers", "Industrial machinery" ]
53,529,240
https://en.wikipedia.org/wiki/Nitronic
Nitronic is the trade name for a collection of nitrogen-strengthened stainless steel alloys. They are austenitic stainless steels. History Nitronic alloys were developed by Armco Steel. The first of these alloys, Nitronic 40, was introduced in 1961. Since 2022, the trademark has been owned by Cleveland-Cliffs Steel Corp., successor to AK Steel. Electralloy is the licensed producer in North America for a wide range of Nitronic products. The Nitronic name refers to the addition of nitrogen to the alloy, which enhances the strength internally rather than through surface nitriding, a treatment applied to some steels. The nitrogen is homogeneous throughout the material. Nitronic materials have about twice the yield strength of 304L and 316L. Uses Nitronic 30 is used to lighten transportation vehicles. Buses and railcars benefit from the high strength-to-weight ratio for weight savings. Nitronic 40 is used at cryogenic temperatures and in the aerospace industry as hydraulic tubing. Nitronic 50 is used in marine environments, including boat shafting and solid rod rigging. Nitronic 60 and a similar alloy, Gall-Tough, have high resistance to galling (a form of wear caused by adhesion between sliding surfaces) and to metal-to-metal wear. Composition Nitronic alloys have widely varying compositions, but all are predominantly iron, chromium, manganese and nitrogen. References Superalloys Aerospace materials Chromium alloys
Nitronic
[ "Chemistry", "Engineering" ]
307
[ "Aerospace materials", "Superalloys", "Alloys", "Aerospace engineering", "Chromium alloys" ]
53,530,154
https://en.wikipedia.org/wiki/Aoussou
Aoussou is the period of the year extending, according to the Berber calendar, over 40 days from 25 July. It is known to be a very hot period. Event In Tunisia, the Carnival of Aoussou is celebrated during this period, a festive and cultural event taking place in Sousse. References Berber culture Culture of Tunisia Weather lore
Aoussou
[ "Physics" ]
74
[ "Weather", "Physical phenomena", "Weather lore" ]
53,530,163
https://en.wikipedia.org/wiki/Ben%20Hawkes
Ben Hawkes is a computer security expert and white hat hacker from New Zealand, previously employed by Google as the manager of its Project Zero. Hawkes has been credited with finding dozens of flaws in software such as Adobe Flash, Microsoft Office, Apple's iOS and the Linux kernel. His role was acknowledged, for instance, in a 2015 Adobe security bulletin, which announced updates that addressed critical vulnerabilities that allowed hackers to take control of the affected system. In 2019, he reported two vulnerabilities that could allow hackers to tap iPhone microphones and spy on calls. Before joining Project Zero, Hawkes was part of the Google team tasked with the security of Google's product launches. Hawkes regularly publishes research on his work, particularly on vulnerability analysis and software exploitation, such as novel heap exploitation techniques on Windows. References External links Google employees New Zealand computer specialists Year of birth missing (living people) Living people
Ben Hawkes
[ "Technology" ]
195
[ "Computing stubs", "Computer specialist stubs" ]
53,531,390
https://en.wikipedia.org/wiki/Amoriguard
Amoriguard is a water-based paint with fillers based on recycled industrial waste. The paint has an effective solids content of 70% by mass, and the solids occupy at least 55% of its volume excluding water. It was invented in South Africa by Mulalo Doyoyo and co-developed by Ryan Purchase. Substances such as volatile organic compounds, ammonia, formaldehyde, lead, alkyl phenol ethoxylate and glycol are present only in low quantities or absent. It is manufactured below the critical pigment volume concentration (CPVC), which means that most voids between pigment particles in the dried film are filled with binder rather than air. The paint is hydrophobic and chemical-resistant. References Paints South African inventions
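For context, the pigment volume concentration (PVC) referred to here has a standard definition in coatings practice (a general textbook formulation, not specific to Amoriguard's references):

$$\mathrm{PVC} = \frac{V_{\text{pigment}}}{V_{\text{pigment}} + V_{\text{binder}}}$$

A film formulated so that its PVC is below the CPVC contains enough binder to fill the interstices between pigment and filler particles, which is what gives the dried film its low porosity.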
Amoriguard
[ "Chemistry" ]
154
[ "Paints", "Coatings" ]
53,533,205
https://en.wikipedia.org/wiki/Shanghai%20Astronomy%20Museum
Shanghai Astronomy Museum is a planetarium opened in 2021 in Lingang New City, Pudong New Area district, Shanghai. The building covers 38,000 square meters, making it the world's largest planetarium in terms of building scale. The planetarium, designed by New York City-based Ennead Architects, serves as an educational and entertainment site for visitors. It is part of the Shanghai Science and Technology Museum. With no straight lines or right angles, the building was designed to reflect the shapes, movement and geometry of the universe. Ennead Architects design partner Thomas J. Wong explained that the foundational design concept of the museum was to “abstractly embody within the architecture some of the fundamental laws of astrophysics, which are the rule in space.” See also List of planetariums References Astronomy Planetaria in China Buildings and structures in Shanghai Tourist attractions in Shanghai Museums established in 2021 2021 establishments in Shanghai
Shanghai Astronomy Museum
[ "Astronomy" ]
187
[ "Astronomy stubs" ]
53,533,480
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Serbia
As a candidate country of the European Union, Serbia (RS) is in the process of being included in the Nomenclature of Territorial Units for Statistics (NUTS). However, due to the ongoing dispute over Kosovo, a classification has not yet been agreed with the European Commission and Eurostat. The proposed three NUTS levels are: Local administrative units Below the NUTS levels, the two LAU (Local Administrative Units) levels are: See also Administrative divisions of Serbia ISO 3166-2 codes of Serbia References External links Overview map of EU Countries - NUTS level 1 Serbia Nuts
NUTS statistical regions of Serbia
[ "Mathematics" ]
111
[ "Nomenclature of Territorial Units for Statistics", "Statistical concepts", "Statistical regions" ]
53,533,778
https://en.wikipedia.org/wiki/National%20Sleep%20Foundation
The National Sleep Foundation (NSF) is an American non-profit, charitable organization. Founded in 1990, its stated goal is to provide expert information on health-related issues concerning sleep. It is largely funded by pharmaceutical and medical device companies. Research NSF Sleep Duration Recommendations In 2015 NSF released the results of a research study on sleep duration recommendations. The paper, titled "National Sleep Foundation's sleep time duration recommendations: methodology and results summary", was published in the peer-reviewed journal Sleep Health. NSF convened an expert panel of 18 leading scientists and researchers tasked with updating the official sleep duration recommendations. The panelists included sleep specialists and representatives from leading organizations including the American Academy of Pediatrics, American Association of Anatomy, American College of Chest Physicians, American Congress of Obstetricians and Gynecologists, American Geriatrics Society, American Neurological Association, American Physiological Society, American Psychiatric Association, American Thoracic Society, Gerontological Society of America, Human Anatomy and Physiology Society, and Society for Research in Human Development. The panelists participated in a rigorous scientific process that included reviewing over 300 current scientific publications and voting on how much sleep is appropriate throughout the lifespan. Sleep Health Index NSF developed the Sleep Health Index to measure sleep health at a group or an individual level. It was created with the help of sleep experts and public opinion research experts. It is composed of three sub-component scales: sleep duration, sleep quality, and sleep disorders. The Index is fielded quarterly and results are publicly available. Sleep in America Poll NSF has conducted a national poll called the Sleep in America Poll to catalog the state of sleep in America since 1991. This poll provides information to the public, the sleep community and the media on specific topics of interest. Past Sleep in America poll data and results are available on the NSF's website. The NSF Sleep in America poll began providing evidence of the size and scope of the American sleep problem in 1991. The 2002 Sleep in America poll (1,010 people surveyed) first suggested that as many as 47 million Americans were risking injury and health problems because they were not sleeping enough. Media coverage of the 2002 Sleep in America poll suggested a sleep "crisis" and an "epidemic," and included headlines such as "Epidemic of daytime sleepiness linked to increased feelings of anger, stress and pessimism." Again, in NSF's 2005 Sleep in America poll, half of adults reported frequent sleep problems and 77 percent reported a partner with a sleep problem, with snoring being the most common complaint. The Centers for Disease Control (CDC) declared insufficient sleep a "public health epidemic" in 2014. Sleep Health Journal Sleep Health is NSF's official, peer-reviewed academic journal. It was launched in 2015. The Journal's aims are to explore sleep's role in population health and to bring a social science perspective to sleep and health. Its scope extends across diverse sleep-related fields, including anthropology, education, health services research, human development, international health, law, mental health, nursing, nutrition, psychology, public health, public policy, fatigue management, transportation, social work, and sociology.
The Journal was the 2016 winner of the Association of American Publishers' PROSE Award for Best New Journal in Science, Technology and Medicine. The PROSE Awards annually recognize the very best in professional and scholarly publishing by bringing attention to distinguished books, journals, and electronic content. The 2021 Journal Citation Reports published a 2020 Impact Factor of 4.450 for Sleep Health. Sleep Monitoring Standards In 2014 NSF encouraged the Consumer Technology Association (CTA) and the American National Standards Institute (ANSI) to develop standards for sleep technology. As a result, the R6.4 WG1 Sleep Monitors Group was established, composed of sleep experts and technology manufacturers. In September 2017, CTA and NSF announced a new standard for measuring sleep cycles with wearables and other applications. The new standard expands on 2016's work that defined terms and functionality required for sleep measuring devices. Education Public education NSF educates the public about sleep health through content that appears in online, print and broadcast media. NSF's official website is thensf.org, its primary website for sleep education content. NSF operates three public education websites: thensf.org, drowsydriving.org (supporting NSF's annual Drowsy Driving Prevention Week campaign), and sleephealthjournal.org (supporting NSF's peer-reviewed research journal Sleep Health). NSF also licenses its educational content at times for distribution by other entities. NSF-branded sleep health content appears on sleepfoundation.org, which was acquired by OneCare Media in 2019. OneCare is a marketing business based on digital content, with a portfolio of consumer-oriented websites, primarily focused on health topics, and derives revenues from commissions on products sold by its affiliate partners. The website continues to be titled "Sleep Foundation" and uses the .org domain. Physician education The National Sleep Foundation is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. In March 2017, NSF was awarded Accreditation with Commendation by the ACCME. Accreditation with Commendation is ACCME's mechanism for celebrating organizations that excel. Many of NSF's physician education courses are found in the Sleep Learning Zone, an online learning platform. Public awareness Sleep Awareness Week Sleep Awareness Week is NSF's annual public awareness event celebrating sleep health. It usually occurs during the week leading up to the beginning of daylight-saving time in the spring. During this week, NSF releases the results from its annual Sleep in America Poll or from the Sleep Health Index. NSF provides information about the benefits of optimal sleep and how sleep affects health, well-being, and safety. The week-long campaign provides the public and the media with shareable messages including an infographic, sleep health messaging, and social media posts. Drowsy Driving Prevention Week NSF conducts an annual Drowsy Driving Prevention Week during the week leading up to the end of daylight-saving time in the fall. The campaign goal is to reduce the number of drivers who choose to drive while sleep-deprived. Drowsy driving is responsible for more than 6,400 U.S. deaths annually. These fall-asleep crashes are often caused by voluntarily not getting the sleep one needs.
Millions of Americans also experience excessive sleepiness as a result of sleep disorders, such as obstructive sleep apnea and narcolepsy. The campaign encompasses dissemination of educational messages via social media. Awards Since 2001 NSF has been recognizing and celebrating the achievements of individuals who have contributed to advancing the sleep field. The following individuals received an award from the National Sleep Foundation: 2021 - Phyllis C. Zee, MD, PhD, Lifetime Achievement 2018 - Sudhansu Chokroverty, MD, Lifetime Achievement 2017 - Mark R. Rosekind, PhD, Lifetime Achievement 2016 - David Gozal, MD, MBA, Lifetime Achievement 2015 - Emmanuel Mignot, MD, PhD, Lifetime Achievement 2014 - Meir H. Kryger, MD, Lifetime Achievement 2014 - William C. Orr, PhD, Clinical Research Leadership 2014 - Arthur J. Spielman, PhD, Insomnia Educator Leadership 2013 - Timothy A. Roehrs, PhD, Lifetime Achievement 2013 - Christine Acebo, PhD, Excellence in Sleep Assessment Research 2012 - Michael Thorpy, MBChB, Lifetime Achievement 2012 - Charmane Eastman, PhD, Excellence in Applied Circadian Rhythm Research 2012 - Ernest Hartmann, Excellence in Science of Sleep and Dreaming 2011 - Robert Y. Moore, MD, PhD, Lifetime Achievement 2011 - Gregory Belenky, MD, Excellence in Sleep & Performance Research 2011 - Peter J. Hauri, PhD, Excellence in Insomnia Research & Education 2011 - Lorraine L. Wearley, PhD, Sleep Health & Safety Leadership 2010 - Allan I. Pack, MBChB, PhD, Lifetime Achievement 2010 - Pietro Badia, PhD, Sleep Educator 2010 - Wallace B. Mendelson, MD, Sleep & Psychiatry 2009 - Philip R. Westbrook, MD, Lifetime Achievement 2009 - Colin Sullivan, MBBS, PhD, Sleep Innovator 2008 - Charles A. Czeisler, MD, PhD, Lifetime Achievement 2007 - Sonia Ancoli-Israel, PhD, Lifetime Achievement 2006 - James K. Walsh, PhD, Lifetime Achievement 2005 - Christian Guilleminault, MD, Lifetime Achievement 2004 - Allan Rechtschaffen, PhD, Lifetime Achievement 2003 - Mary A. Carskadon, PhD, Lifetime Achievement 2002 - Thomas Roth, PhD, Lifetime Achievement 2001 - William C. Dement, MD, PhD, Lifetime Achievement SleepTech As part of addressing one of NSF's goals – that sleep science is rapidly incorporated into products and services – NSF launched the SleepTech program to advance innovations in sleep technology. Each year the National Sleep Foundation recognizes innovative sleep products by giving out the SleepTech Awards, the world's first innovation awards targeted specifically at sleep technology. Recent winners are: 2020 - SleepTech Award Winner: Itamar Medical - WatchPAT ONE 2019 - SleepTech Award Winner: The ReST Bed 2019 - SleepTech App Award Winner: Timeshifter - The Jet Lag App 2018 - SleepTech Award Winner: Happiest Baby - SNOO Smart Sleeper Finances NSF is a 501(c)(3) charitable organization, and contributions are tax-deductible. The foundation's programs are funded by corporate and individual contributions, and through its partnerships with corporations and government entities. Its recent revenues are in the $3.5 million range. According to then-CEO Richard Gelula, "The largest single source of National Sleep Foundation funding is pharmaceutical and medical device companies." In particular, nearly $1 million (≈28%) of its $3.6 million budget at the time came from manufacturers of sleeping medications. Controversies The National Sleep Foundation is sometimes criticized on the grounds that its work is unduly influenced by funding from sleeping pill manufacturers.
The NSF has been criticized by the American Institute of Philanthropy, Dr. Sidney M. Wolfe of Public Citizen's Health Research Group, Jerry Avorn (head of the Division of Pharmacoepidemiology and Pharmacoeconomics at Harvard Medical School), and other consumer and medical ethics groups for its reliance on industry funding, and the possible influence of such funding on its work. In 2005, for instance, it released a survey purporting to find extremely high rates of insomnia, declared insomnia to be a "crisis" and an "epidemic," and announced an "Insomnia Awareness Day" and a "National Sleep Awareness Week." The poll, the declaration of a dedicated day and week, and the widely distributed press kits were paid for by manufacturers of sleeping medications, and the public relations firm assigned to contact medical reporters about the poll took the opportunity to mention the shortly approaching release of Lunesta (eszopiclone), the first sleeping medication approved in the United States for extended use. Simultaneously, the drug's manufacturer assigned 1,250 pharmaceutical sales representatives to educate physicians about Lunesta, as part of a $60 million advertising push. A Sacramento Bee report on these connections also noted that 10 members of NSF's 23-member Board of Directors had current or past financial ties to manufacturers of sleeping medications. These reports led to criticism from Public Citizen's Wolfe, who theorized that "Although they're not saying you should be on a sleeping pill, they're saying go to the doctor and that doctor will sell you a sleeping pill in a large proportion of instances." Wolfe also criticized American doctors for "selling" sleeping pills, "even if it's not what (the patient) really need(s)." An earlier 2002 "Sleep in America" poll from NSF, which characterized its results in its press release as revealing an "epidemic" of daytime sleepiness, was similarly characterized in a report by The Seattle Times as industry "astroturfing" due to sponsorship from the makers of the sleeping medications Unisom, Sonata, and Ambien. A 2016 NSF public education program highlighting "personal stories about sleep for four individuals" received grant support from Merck. A report in the Huffington Post described this effort as part of a multi-pronged "unbranded" marketing effort for Belsomra (suvorexant), Merck's then-forthcoming new sleeping drug. Some merchants and products have claimed to be "endorsed by the National Sleep Foundation" or have implied such endorsement in their literature. My Pillow made such claims in its television ads. At the time, the NSF was selling MyPillow on its own website. When asked by the Truth in Advertising consumer rights organization, an NSF spokesman declined to say whether MyPillow had made payments to the organization for its claimed “official pillow” status, but said in an email that the organization receives approaches “from many different manufacturers” and “works to select products that are a good fit for our organization.” In 2016, My Pillow agreed to stop claiming an NSF endorsement and paid a fine. References External links National Sleep Foundation website thensf.org National Sleep Foundation website drowsydriving.org National Sleep Foundation journal website sleephealthjournal.org Non-profit organizations based in Arlington, Virginia Health charities in the United States 501(c)(3) organizations Organizations established in 1990 Sleep
National Sleep Foundation
[ "Biology" ]
2,780
[ "Behavior", "Sleep" ]
53,534,121
https://en.wikipedia.org/wiki/Ophiuchus%20Superbubble
The Ophiuchus Superbubble is an astronomical structure located in the constellation Ophiuchus, centered near galactic longitude ℓ ≈ 30°. This giant superbubble was first discovered in a 2007 study of extraplanar neutral hydrogen in the disk-halo transition of the Galaxy. The top extends to galactic latitudes over 25°, a distance of about 7 kpc. The Green Bank radio telescope has measured more than 220,000 HI spectra in and around this structure. Structure The telescope's movable 110-meter antenna made it possible to image several neighboring regions of the sky, resulting in a mosaic in which an area filled with hydrogen stands out. Near this area the interstellar gas is disturbed and many ejections are evident. The total mass of HI in the system is ≈ 10^6 M☉, with an equal mass of H+. The base of the structure consists of HI "whiskers" several hundred pc wide, and its halo extends over more than 1 kpc. The "whiskers" have a vertical density structure suggesting that they are the walls of the bubble and were created by a lateral rather than an upward movement. They resemble the vertical streaks of dust seen on NGC 891. Location The superbubble is located 23,000 light-years from Earth, and the object itself is "raised" 10,000 light-years above the plane of the galaxy. According to the Kompaneets model of an expanding bubble, the age of this system is ≈ 30 Ma, and its total energy content is ~10^53 erg. It may be at the stage when expansion stops and the shell begins to experience significant instabilities. This system offers an unprecedented opportunity to study several important phenomena at close range, including the evolution of superbubbles, turbulence in the HI shell, and the magnitude of the ionizing flux over the galactic disk. Formation hypothesis The superbubble is hypothesized to have originated in a massive cluster of young stars. The brightest of these stars exploded one after another, but since bright stars have short lifespans (about 10 million years), a difference of a couple of million years essentially meant they all exploded at the same time. Such explosions most likely "pushed" matter out of the galactic plane, inflating the bubble. Interstellar matter is present in most galaxies and is mainly neutral or ionized hydrogen. Such structures are capable of influencing the distribution of chemical elements in the galaxy: heavy nuclei that are born inside stars are ejected during an explosion together with gas, which, in the form of a superbubble, transports them over considerable distances. References Ophiuchus Superbubbles
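As a rough order-of-magnitude check connecting the quoted energy content to the cluster hypothesis, assuming the canonical ~10^51 erg of kinetic energy released per supernova:

$$N_{\mathrm{SN}} \sim \frac{E_{\mathrm{total}}}{E_{\mathrm{SN}}} \approx \frac{10^{53}\ \mathrm{erg}}{10^{51}\ \mathrm{erg}} = 100$$

On the order of a hundred supernovae from a single massive young cluster would therefore suffice to inflate the bubble, consistent with the formation hypothesis above.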
Ophiuchus Superbubble
[ "Astronomy" ]
561
[ "Ophiuchus", "Constellations" ]
53,534,260
https://en.wikipedia.org/wiki/Robotic%20governance
Robotic governance provides a regulatory framework to deal with autonomous and intelligent machines. This includes research and development activities as well as the handling of these machines. The idea is related to the concepts of corporate governance, technology governance and IT governance, which provide frameworks for the management of organizations or of a global IT infrastructure. Robotic governance describes the impact of robotics, automation technology and artificial intelligence on society from a holistic, global perspective, considers implications and provides recommendations for action in a Robot Manifesto. This is realized by the Robotic Governance Foundation, an international non-profit organization. The robotic governance approach is based on German research on discourse ethics. Accordingly, the discussion should involve all stakeholders, including scientists, society, religious groups, politicians, industry and labor unions, in order to reach a consensus on how to shape the future of robotics and artificial intelligence. The compiled framework, the so-called Robot Manifesto, will provide voluntary guidelines for self-regulation in the research, development, use and sale of autonomous and intelligent systems. The concept not only appeals to the responsibility of researchers and robot manufacturers but, as with child labor and sustainability, also raises opportunity costs. The greater public awareness of and pressure around this topic become, the harder it will be for companies to conceal or justify violations. Therefore, beyond a certain point it will be cheaper for organizations to invest in sustainable and accepted technologies. History of the concept The idea of setting ethical standards for intelligent machines is not a new one and undoubtedly has its roots in science fiction literature. Even older is the discussion about the ethics of intelligent, man-made creatures in general. Some of the earliest recorded examples can be found in Ovid's Metamorphoses, in the Pygmalion myth, in Jewish golem mysticism (12th century), and in the idea of the homunculus (Latin: "little man") that arose from the alchemy of the Late Middle Ages. The fundamental philosophical question of these literary works is what will happen if humans presume to create autonomous, conscious or even godlike creatures, machines, robots or androids. While most of the older works broach the act of creation itself, asking whether it is morally appropriate and which dangers could arise, Isaac Asimov was the first to recognize the need to restrict and regulate the freedom of action of machines. He wrote the first Three Laws of Robotics. At least since the introduction in 1995 of drones that could be equipped with air-to-ground missiles for use against ground targets, such as the General Atomics MQ-1, and the resulting collateral damage, the discussion on the international regulation of remote-controlled, programmable and autonomous machines has attracted public attention. Nowadays, this discussion covers the entire range of programmable, intelligent and/or autonomous machines and drones, as well as automation technology combined with big data and artificial intelligence. Lately, well-known visionaries like Stephen Hawking, Elon Musk and Bill Gates have brought the topic into the focus of public attention and awareness.
Due to the increasing availability of small and cheap systems for public service as well as commercial and private use, the regulation of robotics in all social dimensions has gained new significance. Scientific recognition Robotic governance was first mentioned in the scientific community within a dissertation project at the Technical University of Munich, supervised by Professor Emeritus Klaus Mainzer. The topic has been the subject of several scientific workshops, symposia and conferences ever since, including Sensor Technologies & the Human Experience 2015, the Robotic Governance Panel at the We Robots 2015 Conference, a keynote at the 10th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), a full-day workshop on Autonomous Technologies and their Societal Impact as part of the 2016 IEEE International Conference on Prognostics and Health Management (PHM’16), a keynote at the 2016 IEEE International Conference on Cloud and Autonomic Computing (ICCAC), the FAS*W 2016: IEEE 1st International Workshops on Foundations and Applications of Self* Systems, the 2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (IEEE EmergiTech 2016) and the IEEE Global Humanitarian Technology Conference (GHTC 2016). Since 2015 IEEE has held its own forum on robotic governance at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE IROS): the first and second "Annual IEEE IROS Futurist Forum" brought together world-renowned experts from a wide range of specialties to discuss the future of robotics and the need for regulation in 2015 and 2016. In 2016 robotic governance was also the topic of a plenary keynote presentation at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) in Daejeon, South Korea. Several video statements and interviews on robotic governance, the responsible use of robotics, automation technology and artificial intelligence, and self-regulation in a world of "robotic natives", with internationally recognized experts from research, industry and politics, are published on the website of the Robotic Governance Foundation. Max Levchin, co-founder and former CTO of PayPal, emphasized the need for robotic governance during his Q&A session at the South by Southwest Festival (SXSW) 2016 in Austin and referred to the comments of his friend and colleague Elon Musk on this subject. Gerd Hirzinger, former head of the Institute of Robotics and Mechatronics of the German Aerospace Center, argued during his keynote speech at the IROS Futurist Forum 2015 that machines could become so intelligent that it would one day be unavoidable to restrict certain behavior. At the same event, Oussama Khatib, American roboticist and director of the Stanford robotics lab, advocated emphasizing user acceptance when producing intelligent and autonomous machines. Bernd Liepert, president of euRobotics aisbl – the most important robotics community in Europe – recommended establishing robotic governance worldwide and underlined his wish for Europe to take the lead in this discussion during his plenary keynote at IEEE IROS 2015 in Hamburg. Hiroshi Ishiguro, inventor of the Geminoid and head of the Intelligent Robotics Laboratory at the University of Osaka, argued during the RoboBusiness Conference 2016 in Odense that it is impossible to stop technical progress.
Therefore, he argued, it is necessary to accept responsibility and to think about regulation. At the same conference, Henrik I. Christensen, author of the U.S. Robotics Roadmap, underlined the importance of ethical and moral values in robotics and the suitability of robotic governance for creating a regulatory framework. See also Corporate citizenship Corporate social responsibility Ethics of artificial intelligence Roboethics Technology ethics References Robotics
Robotic governance
[ "Engineering" ]
1,376
[ "Robotics", "Automation" ]
53,534,406
https://en.wikipedia.org/wiki/Ministry%20of%20Electricity%20and%20Energy%20%28Myanmar%29
The Ministry of Electricity and Energy (abbreviated as MOEE) was a ministry of the Government of Myanmar, formed by President Htin Kyaw by merging the Ministry of Electric Power (MOEP) and the Ministry of Energy (MOE). It was reconstituted as the MOEP and the MOE in May 2022 by the State Administration Council (SAC). History In 2016, newly elected President Htin Kyaw combined the Ministry of Electric Power and the Ministry of Energy into the Ministry of Electricity and Energy. In May 2022, the SAC reconstituted the ministry as the Ministry of Electric Power and the Ministry of Energy. Ministers Aung San Suu Kyi (March 2016 – April 2016) Pe Zin Tun (April 2016 – August 2017) Win Khine (August 2017 – February 2021) Aung Than Oo (February 2021 – 2 May 2022) Departments Union Minister Office Oil and Gas Planning Department Myanmar Oil and Gas Enterprise Myanmar Petrochemical Enterprise Myanmar Petroleum Products Enterprise Department of Electric Power Planning Department of Hydropower Implementation Department of Electric Power Transmission and System Control Electricity Supply Enterprise Electric Power Generation Enterprise Yangon Electricity Supply Corporation Mandalay Electricity Supply Corporation References External links ElectricityandEnergy Myanmar Energy in Myanmar
Ministry of Electricity and Energy (Myanmar)
[ "Engineering" ]
226
[ "Energy organizations", "Energy ministries" ]
70,520,651
https://en.wikipedia.org/wiki/Fat%20suppression
Fat suppression is an MRI technique in which the fat signal from adipose tissue is suppressed to better visualize the uptake of contrast material by bodily tissues, to reduce chemical shift artifact, and to characterize certain types of lesions such as adrenal gland tumors, bone marrow infiltration, fatty tumors, and steatosis by determining the fat content of the tissues. Due to its short T1 relaxation time, fat exhibits a strong signal in magnetic resonance imaging (MRI) that is easily discernible on scans. Fat suppression can be achieved through various techniques, as outlined below: Frequency Selective Pulses (CHESS): This method leverages fat's difference in resonance frequency from water, employing frequency-selective pulses. Known as fat saturation (fat-sat) techniques, this approach facilitates effective fat suppression. Phase Contrast Techniques: Operating on the same principle as black boundary or india ink artifacts, phase contrast techniques contribute to suppressing fat signals in MRI. Inversion Recovery Sequences (STIR Technique): Utilizing fat's short T1 relaxation time, the STIR technique involves inversion recovery sequences to achieve fat suppression. Dixon Method: A chemical-shift-based approach that is primarily used to achieve uniform fat suppression (see the worked equations below). Hybrid Techniques (e.g., SPIR): Innovative approaches involve the combination of multiple fat suppression techniques, exemplified by SPIR, which integrates spectral presaturation with inversion recovery. The choice of a specific fat suppression technique should be guided by several factors, including the intended purpose—whether it is for contrast enhancement or tissue characterization. Considerations such as the quantity of fat in the tissue under examination, the magnetic field strength, and the homogeneity of the main magnetic field play crucial roles in the selection process. References Magnetic resonance imaging Nuclear magnetic resonance
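As an illustration of the Dixon method listed above, the standard two-point formulation (a textbook result, not specific to this article's references) acquires an in-phase image and an opposed-phase image and separates the water (W) and fat (F) contributions algebraically:

$$S_{\mathrm{IP}} = W + F, \qquad S_{\mathrm{OP}} = W - F \quad\Longrightarrow\quad W = \frac{S_{\mathrm{IP}} + S_{\mathrm{OP}}}{2}, \qquad F = \frac{S_{\mathrm{IP}} - S_{\mathrm{OP}}}{2}$$

The echo times are chosen using the roughly 3.5 ppm chemical shift between the water and fat resonances (about 220 Hz at 1.5 T), the same frequency difference that frequency-selective (CHESS) pulses exploit.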
Fat suppression
[ "Physics", "Chemistry" ]
342
[ "Nuclear magnetic resonance", "Magnetic resonance imaging", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
70,522,547
https://en.wikipedia.org/wiki/NGC%204359
NGC 4359 is a dwarf barred spiral galaxy seen edge-on, about 56 million light-years away in the constellation Coma Berenices. It was discovered by astronomer William Herschel on March 20, 1787. It is a member of the NGC 4274 Group, which is part of the Coma I Group or Cloud. On the sky, NGC 4359 appears to lie closest to the flocculent spiral NGC 4414, which is also a member of the NGC 4274 Group and the Coma I cloud. However, their radial velocities differ by around 500 km/s, suggesting an interaction between the two is unlikely. See also List of NGC objects (4001–5000) NGC 4414 Coma I References External links 4359 040330 Coma Berenices 17870320 Barred spiral galaxies Dwarf spiral galaxies 07483 Coma I Group
NGC 4359
[ "Astronomy" ]
178
[ "Coma Berenices", "Constellations" ]
70,523,527
https://en.wikipedia.org/wiki/International%20Publisher%20Ltd.
International Publisher Ltd. (or International Publisher LLC) is an academic paper mill company that coordinates the sale of fake authorships on research papers for publication in academic journals. The company is headquartered in Moscow (Russia) with offices in Ukraine, Kazakhstan, and Iran, and lists its chief editor as Ksenia Badziun. Its website has existed since 2018. Buyers can preselect a number of criteria for their desired article. Many papers are created specifically for the purpose of selling co-authorships; only after a sufficient number of slots have been sold does the company recruit writers to produce at least some of these papers. Others may be otherwise legitimate articles; there is evidence that the company also approaches authors published in high-quality journals to sell co-authorship slots. Slots are priced according to the prestige of the journal and the position of the slot in the list of purported collaborators. Discovery and investigation The company was exposed by the scientific misconduct tracking website Retraction Watch in 2019. In 2022, a report posted on arXiv and covered by Science detailed how International Publisher Ltd. had published hundreds of academic papers across diverse academic journals, including journals from respected publishing companies. Some of these publishers have opened investigations into the matter. In 2019, the scientific indexing company Clarivate's Web of Science group sent International Publisher Ltd. a cease-and-desist letter, which was ignored. See also Research paper mill References Scientific misconduct
International Publisher Ltd.
[ "Technology" ]
293
[ "Scientific misconduct", "Ethics of science and technology" ]
70,523,734
https://en.wikipedia.org/wiki/Common%20data%20model
A common data model (CDM) can refer to any standardised data model which allows for data and information exchange between different applications and data sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data", and can be seen as a way to "organize data from many sources that are in different formats into a standard structure". A common data model has been described as one of the components of a "strong information system". A standardised common data model has also been described as a typical component of a well-designed agile application, alongside a common communication protocol. Providing a single common data model within an organisation is one of the typical tasks of a data warehouse. Examples of common data models Border crossings X-trans.eu was a cross-border pilot project between the Free State of Bavaria (Germany) and Upper Austria with the aim of developing a faster procedure for the application and approval of cross-border large-capacity transports. The portal was based on a common data model that contained all the information required for approval. Climate data The Climate Data Store Common Data Model is a common data model set up by the Copernicus Climate Change Service for harmonising essential climate variables from different sources and data providers. General information technology Within service-oriented architecture, S-RAMP is a specification released by HP, IBM, Software AG, TIBCO, and Red Hat which defines a common data model for SOA repositories as well as an interaction protocol to facilitate the use of common tooling and sharing of data. Content Management Interoperability Services (CMIS) is an open standard for inter-operation of different content management systems over the internet, and provides a common data model for typed files and folders used with version control. The NetCDF software libraries for array-oriented scientific data implement a common data model called the NetCDF Java common data model, which consists of three layers built on top of each other to add successively richer semantics. Health Within genomic and medical data, the Observational Medical Outcomes Partnership (OMOP) research program established under the U.S. National Institutes of Health has created a common data model for claims and electronic health records which can accommodate data from different sources around the world. PCORnet, which was developed by the Patient-Centered Outcomes Research Institute, is another common data model for health data including electronic health records and patient claims. The Sentinel Common Data Model was initially started as Mini-Sentinel in 2008. It is used by the Sentinel Initiative of the U.S. Food and Drug Administration. The Generalized Data Model was first published in 2019. It was designed to be a stand-alone data model as well as to allow for further transformation into other data models (e.g., OMOP, PCORNet, Sentinel). It has a hierarchical structure to flexibly capture relationships among data elements. The JANUS clinical trial data repository also provides a common data model which is based on the SDTM standard to represent clinical data submitted to regulatory agencies, such as tabulation datasets, patient profiles, listings, etc.
Logistics SX000i is a specification developed jointly by the Aerospace and Defence Industries Association of Europe (ASD) and the American Aerospace Industries Association (AIA) to provide information, guidance and instructions to ensure compatibility and commonality. The associated SX002D specification contains a common data model. Microsoft Common Data Model The Microsoft Common Data Model is a collection of many standardised extensible data schemas with entities, attributes, semantic metadata, and relationships, which represent commonly used concepts and activities in various business areas. It is maintained by Microsoft and its partners, and is published on GitHub. Microsoft's Common Data Model is used, amongst others, in Microsoft Dataverse and with various Microsoft Power Platform and Microsoft Dynamics 365 services. Rail transport RailTopoModel is a common data model for the railway sector. Other There are many more examples of various common data models for different uses published by different sources. See also Apache OFBiz, an open source enterprise resource planning system which provides a common data model Canonical model Data Reference Model, one of five reference models of the U.S. government federal enterprise architecture Data platform Metadata Open Semantic Framework, which internally uses the RDF to convert all data to a common data model Requirements Interchange Format Generic data model References Sharing Information theory Data modeling Application software Databases
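To make the idea concrete, the following minimal Python sketch (all source, field, and class names are hypothetical) maps records from two differently structured sources into one shared schema, which is the core operation a common data model standardises:

from dataclasses import dataclass

@dataclass
class PatientRecord:
    # The shared (common) schema every source is mapped into.
    patient_id: str
    birth_year: int
    diagnosis_code: str

def from_ehr(row: dict) -> PatientRecord:
    # Source A stores a full date of birth and uses its own key names.
    return PatientRecord(row["mrn"], int(row["dob"][:4]), row["icd10"])

def from_claims(row: dict) -> PatientRecord:
    # Source B already stores the birth year, under different key names.
    return PatientRecord(row["member_id"], row["yob"], row["dx"])

records = [
    from_ehr({"mrn": "A-17", "dob": "1984-03-02", "icd10": "E11.9"}),
    from_claims({"member_id": "B-42", "yob": 1990, "dx": "I10"}),
]
# Downstream tools can now operate on one structure regardless of origin.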
Common data model
[ "Mathematics", "Technology", "Engineering" ]
897
[ "Telecommunications engineering", "Applied mathematics", "Data modeling", "Computer science", "Information theory", "Data engineering" ]
70,524,446
https://en.wikipedia.org/wiki/N-Butylmercuric%20chloride
n-Butylmercuric chloride is an organomercury compound that is used as a catalyst and as a precursor to other organomercuric compounds. Preparation n-Butylmercuric chloride is made by reacting n-butylmagnesium bromide with mercury(II) chloride. It can also be prepared by reacting 1-butene with mercury(II) acetate. References Mercury(II) compounds Organomercury compounds
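As a worked equation, the Grignard route can be sketched as a transmetalation in which the butyl group transfers from magnesium to mercury; the magnesium byproduct is written here as the mixed halide salt, though in practice the halides may scramble:

\[ \mathrm{CH_3(CH_2)_3MgBr + HgCl_2 \longrightarrow CH_3(CH_2)_3HgCl + MgBrCl} \]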
N-Butylmercuric chloride
[ "Chemistry" ]
87
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
70,526,569
https://en.wikipedia.org/wiki/Small%20mammal
Small mammals or micromammals are a subdivision of mammals based on their body mass and size. Different values have been used as the upper limit. The International Biological Programme has defined small mammals as species weighing up to 5 kg. Alternatively, the International Union for Conservation of Nature (IUCN) groups the orders of rodents, tree shrews and eulipotyphlans (insectivores) together under the term small mammals. A significant majority of mammal species falls into the category of small mammals. They are found in a great range of habitats and climate zones. Characteristics Many small mammals have a short lifespan and high fertility rate, resulting in a comparatively high variability in genetic composition. Their size leads to a reduced energy need for movement, but a high energy requirement for maintaining body temperature. This results in a high rate of food intake, drawing on a wide range of food sources. Their small size, together with frequently nocturnal or crepuscular activity, provides some protection against predators. List of species Eulipotyphlans Rodents Tree shrews Research and conservation The SSC Small Mammal Specialist Group (SMSG) of the IUCN "serves as the global authority on the world's small mammals" with regard to both research and conservation efforts. References Mammals
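As a back-of-the-envelope illustration of why small body size drives up per-gram energy needs, assume the commonly cited Kleiber scaling of whole-body metabolic rate, $B \propto M^{3/4}$, so that the mass-specific rate scales as $B/M \propto M^{-1/4}$. A 20 g mouse would then need roughly $(5000/20)^{1/4} \approx 4$ times more energy per gram of body mass than a mammal at the 5 kg upper limit, which is consistent with the high rate of food intake described above.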
Small mammal
[ "Biology" ]
257
[ "Mammals", "Animals" ]
70,526,597
https://en.wikipedia.org/wiki/Cornell%20potential
In particle physics, the Cornell potential is an effective method to account for the confinement of quarks in quantum chromodynamics (QCD). It was developed by Estia J. Eichten, Kurt Gottfried, Toichiro Kinoshita, John Kogut, Kenneth Lane and Tung-Mow Yan at Cornell University in the 1970s to explain the masses of quarkonium states and account for the relation between the mass and angular momentum of the hadron (the so-called Regge trajectories). The potential has the form $V(r) = -\frac{4}{3}\frac{\alpha_s}{r} + \sigma r + \text{const}$, where $r$ is the effective radius of the quarkonium state, $\alpha_s$ is the QCD running coupling, $\sigma$ is the QCD string tension and $\text{const}$ is a constant offset, measured in GeV. Initially, $\alpha_s$ and $\sigma$ were merely empirical parameters, but with the development of QCD they can now be calculated using perturbative QCD and lattice QCD, respectively. Short distance potential The potential consists of two parts. The first one, $-\frac{4}{3}\frac{\alpha_s}{r}$, dominates at short distances, typically for $r \lesssim 0.1$ fm. It arises from the one-gluon exchange between the quark and its antiquark, and is known as the Coulombic part of the potential, since it has the same $1/r$ form as the well-known Coulomb potential induced by the electromagnetic force, $-\alpha/r$ (where $\alpha$ is the electromagnetic coupling constant). The factor $\frac{4}{3}$ in QCD comes from the fact that quarks carry a different type of charge (color) and is associated with any gluon emission from a quark. Specifically, this factor is called the color factor or Casimir factor and is $C_F = \frac{N_c^2 - 1}{2N_c} = \frac{4}{3}$, where $N_c = 3$ is the number of color charges. The value of $\alpha_s$ depends on the radius of the studied hadron; it ranges from about 0.19 to 0.4. For a precise determination of the short-distance potential, the running of $\alpha_s$ must be accounted for, resulting in a distance-dependent $\alpha_s(r)$. Specifically, $\alpha_s$ must be calculated in the so-called potential renormalization scheme (also denoted V-scheme) and, since quantum field theory calculations are usually done in momentum space, Fourier transformed to position space. Long distance potential The second term of the potential, $\sigma r$, is the linear confinement term; it folds in the non-perturbative QCD effects that result in color confinement. $\sigma$ is interpreted as the tension of the QCD string that forms when the gluonic field lines collapse into a flux tube; its value is approximately $\sigma \approx 0.18$ GeV$^2$. $\sigma$ controls the intercepts and slopes of the linear Regge trajectories. Domains of application The Cornell potential applies best to the case of static quarks (or very heavy quarks with non-relativistic motion), although relativistic improvements to the potential using speed-dependent terms are available. Likewise, the potential has been extended to include spin-dependent terms. Calculation of the quark-quark potential A test of validity for approaches that seek to explain color confinement is that they must produce, in the limit that quark motions are non-relativistic, a potential that agrees with the Cornell potential. A significant achievement of lattice QCD is to be able to compute from first principles the static quark-antiquark potential, with results confirming the empirical Cornell potential. Other approaches to the confinement problem also result in the Cornell potential, including the dual superconductor model, the Abelian Higgs model, and the center vortex models. More recently, calculations based on the AdS/CFT correspondence have reproduced the Cornell potential using the AdS/QCD correspondence or light front holography. See also Color confinement QCD vacuum References Quantum chromodynamics Mesons Quantum mechanical potentials
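For readers who want to see the shape of the potential numerically, the following minimal Python sketch evaluates the Cornell form with illustrative parameter values (a fixed alpha_s = 0.3, within the 0.19-0.4 range quoted above, rather than the running coupling, and the constant offset omitted); hbar*c is used to convert radii from fm to natural units:

HBARC = 0.1973  # GeV*fm, the conversion constant hbar*c

def cornell_potential(r_fm, alpha_s=0.3, sigma=0.18):
    # V(r) = -(4/3) * alpha_s / r + sigma * r, with r in GeV^-1 and sigma in GeV^2.
    r = r_fm / HBARC  # convert fm to GeV^-1
    return -(4.0 / 3.0) * alpha_s / r + sigma * r

for r_fm in (0.1, 0.5, 1.0):
    print(f"V({r_fm} fm) = {cornell_potential(r_fm):+.3f} GeV")
# The Coulombic term dominates at small r (V < 0); the linear term takes over at large r.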
Cornell potential
[ "Physics" ]
736
[ "Matter", "Hadrons", "Quantum mechanics", "Quantum mechanical potentials", "Subatomic particles" ]