Dataset columns: id (int64, 39–79M); url (string, 31–227 chars); text (string, 6–334k chars); source (string, 1–150 chars); categories (list, 1–6 items); token_count (int64, 3–71.8k); subcategories (list, 0–30 items)
57,613,611
https://en.wikipedia.org/wiki/Gregory%20A.%20Voth
Gregory A. Voth (born January 22, 1959) is a theoretical chemist and Haig P. Papazian Distinguished Service Professor of Chemistry at the University of Chicago. He is also a professor of the James Franck Institute and the Institute for Biophysical Dynamics. Education He received his bachelor's degree from the University of Kansas in 1981 with highest distinction and honors and a Ph.D. in theoretical chemistry from the California Institute of Technology in 1987. His doctoral advisor was Rudolph A. Marcus. He was also an IBM Postdoctoral Fellow at the University of California, Berkeley from 1987 to 1989. At Berkeley, his postdoctoral advisors were William Hughes Miller and David Chandler. Career Professor Voth is interested in the development and application of theoretical and computational methods to study problems involving the structure and dynamics of complex condensed phase systems, including proteins, membranes, liquids, and materials. He has developed a method known as “multiscale coarse-graining” in which the resolution of the molecular-scale entities is reduced into simpler structures, but key information on their interactions is accurately retained (or renormalized) so the resulting computer simulation can accurately and efficiently predict the properties of large assemblies of complex molecules such as lipids and proteins. This method is multiscale, meaning it describes complex condensed phase and biomolecular systems from the molecular scale to the mesoscale and ultimately to the macroscopic scale. Professor Voth’s other research interests include the study of charge transport (protons and electrons) in aqueous systems and biomolecules – a fundamental process in living organisms and other systems that has been poorly understood because of its complexity. He also studies the exotic behavior of room-temperature ionic liquids and other complex materials such as nanoparticle self-assembly, polymer electrolyte membranes, and electrode-electrolyte interfaces in energy storage devices. In the earlier part of his career, Professor Voth extensively developed and applied new methods to study quantum and electron transfer dynamics in condensed phase systems; much of this work was based on the Feynman path integral description of quantum mechanics. As of 03/12/2023, he is the author or co-author of more than 600 peer-reviewed scientific articles (Google Scholar h-index = 120; total citations = 55,964) and has mentored more than 200 postdoctoral fellows and graduate students. Honors and awards Biophysical Society Innovation Award, 2021. S F Boys-A Rahman Award, Royal Society of Chemistry, 2019. ACS Joel Henry Hildebrand Award in the Theoretical and Experimental Chemistry of Liquids. ACS Division of Physical Chemistry Award in Theoretical Chemistry. Stanislaw M. Ulam Distinguished Scholar, Los Alamos National Laboratory. Elected to the International Academy of Quantum Molecular Science. Fellow of the Biophysical Society. Fellow of the American Chemical Society. University of Utah Distinguished Scholarly and Creative Research Award. John Simon Guggenheim Fellowship. Miller Visiting Professorship, University of California, Berkeley. National Science Foundation Creativity Award. Fellow of the American Association for the Advancement of Science. Fellow of the American Physical Society. IBM Corporation Faculty Research Award. Camille Dreyfus Teacher-Scholar Award. Alfred P. Sloan Foundation Research Fellow. Presidential Young Investigator Award. David and Lucile Packard Foundation Fellowship in Science and Engineering. Camille and Henry Dreyfus Distinguished New Faculty Award. Francis and Milton Clauser Doctoral Prize, California Institute of Technology. Herbert Newby McCoy Award, California Institute of Technology. Procter and Gamble Award for Outstanding Research in Physical Chemistry, ACS. References External links Voth Group home page 1959 births Living people American physical chemists University of Chicago faculty California Institute of Technology alumni University of Kansas alumni University of Utah faculty Fellows of the American Physical Society Fellows of the American Chemical Society Sloan Research Fellows Fellows of the American Academy of Arts and Sciences Theoretical chemists Scientists from Kansas People from Topeka, Kansas 20th-century American chemists 21st-century American chemists
Gregory A. Voth
[ "Chemistry" ]
781
[ "Theoretical chemists", "American theoretical chemists" ]
57,613,887
https://en.wikipedia.org/wiki/Nimbus%202
Nimbus 2 (also called Nimbus-C) was a meteorological satellite. It was the second in a series of the Nimbus program. Launch Nimbus 2 was launched on May 15, 1966, by a Thor-Agena rocket from Vandenberg Air Force Base, California, United States. The spacecraft functioned nominally until January 17, 1969. The satellite orbited the Earth once every 1 hour and 48 minutes, at an inclination of 100°. Its perigee was and its apogee was . Mission The second in a series of second-generation meteorological research and development satellites, Nimbus 2 was designed to serve as a stabilized, Earth-oriented platform for the testing of advanced meteorological sensor systems and the collecting of meteorological data. The polar-orbiting spacecraft consisted of three major elements: (1) a torus-shaped sensory ring, (2) solar paddles, and (3) the control system housing. The solar paddles and control system housing were connected to the sensory ring by a truss structure, giving the satellite the appearance of an ocean buoy. Nimbus 2 was nearly tall, in diameter at the base, and about across with solar paddles extended. The sensory ring, which formed the satellite base, housed the electronics equipment and battery modules. The lower surface of the sensory ring provided mounting space for sensors and telemetry antennas. An H-frame structure mounted within the center of the torus provided support for the larger experiments and tape recorders. Mounted on the control system housing, which was located on top of the spacecraft, were Sun sensors, horizon scanners, gas nozzles for attitude control, and a command antenna. Use of a stabilization and control system allowed the spacecraft's orientation to be controlled to within plus or minus 1° for all three axes (pitch, roll, yaw). The spacecraft carried: Advanced Vidicon Camera System (AVCS): instrument for recording and storing remote cloud cover pictures Automatic Picture Transmission (APT): instrument for providing real-time cloud cover pictures High and Medium Resolution Infrared Radiometers (HRIR/MRIR): for measuring the intensity and distribution of electromagnetic radiation emitted by and reflected from the Earth and its atmosphere Nimbus 2 and its experiments performed normally after launch until July 26, 1966, when the spacecraft tape recorder failed. Its function was taken over by the HRIR tape recorder until November 15, 1966, when it also failed. Some real-time data were collected until January 17, 1969, when the spacecraft mission was terminated owing to deterioration of the horizon scanner used for Earth reference. References Weather satellites of the United States Spacecraft launched in 1966
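As a plausibility check on the orbit quoted above (standard two-body arithmetic, not figures from the article itself), Kepler's third law recovers the approximate orbital radius from the 108-minute period:

$$a = \left(\frac{GM_\oplus T^2}{4\pi^2}\right)^{1/3} = \left(\frac{3.986\times10^{14}\ \mathrm{m^3\,s^{-2}} \times (6480\ \mathrm{s})^2}{4\pi^2}\right)^{1/3} \approx 7.5\times10^{6}\ \mathrm{m},$$

i.e. a mean altitude of roughly $7510 - 6371 \approx 1100\ \mathrm{km}$, consistent with the near-polar, roughly 1,100 km orbits typically cited for the Nimbus series.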
Nimbus 2
[ "Astronomy" ]
544
[ "Astronomy stubs", "Spacecraft stubs" ]
57,614,074
https://en.wikipedia.org/wiki/Nimbus%20B
Nimbus B was a meteorological satellite launched as part of the Nimbus program. It lifted off on May 18, 1968, from Vandenberg Air Force Base near Lompoc, California, by means of a Thor-Agena launch vehicle, together with the SECOR 10 satellite. Nimbus B never achieved orbit because a malfunction in the booster guidance system forced the destruction of the spacecraft and its payload during launch. The SNAP-19 radioisotope thermoelectric generator (RTG) was salvaged from the water, refurbished and later flown on Nimbus 3. Instruments High Data Rate Storage System (HDRSS) High and Medium-Resolution Infrared Radiometers (HRIR/MRIR) Image Dissector Camera System (IDCS) Infrared Interferometer Spectrometer (IRIS) Monitor of Ultraviolet Solar Energy (MUSE) Radioisotope Thermoelectric Generator (SNAP-19) Real-time Transmission System (RTTS) Satellite Infrared Spectrometer (SIRS) See also Television Infrared Observation Satellite References External links The Day the Nimbus Weather Satellite Exploded by the Smithsonian Magazine 1968 in spaceflight Weather satellites of the United States Satellite launch failures
Nimbus B
[ "Astronomy" ]
248
[ "Astronomy stubs", "Spacecraft stubs" ]
57,614,558
https://en.wikipedia.org/wiki/Nimbus%205
Nimbus 5 (also called Nimbus E or Nimbus V) was a meteorological satellite for the research and development of sensing technology. It was the fifth successful launch in a series of the Nimbus program. The objective of Nimbus 5 was to test and evaluate advanced sensing technology, and to provide improved photographs of cloud formations. Launch Nimbus 5 was launched on December 11, 1972, by a Delta rocket from Vandenberg Air Force Base, California, USA. The satellite orbited the Earth once every 107 minutes, at an inclination of 99°. Its perigee was and its apogee was . Instruments There were six science instruments aboard Nimbus 5. The satellite also included Sun sensors and horizon scanners for navigation. Infrared Temperature Profile Radiometer (ITPR) The ITPR was designed to obtain vertical profiles of temperature and moisture in the atmosphere. A 3-dimensional map could then be created with a resolution of . Selective Chopper Radiometer (SCR) The SCR had three objectives: to observe the global atmospheric temperature structure, to observe the distribution of water vapor, and to measure the density of ice crystals in cirrus clouds. Its sensing resolution was about . Nimbus E Microwave Spectrometer (NEMS) NEMS was used to demonstrate the use of microwave sensors for measuring tropospheric temperature profiles, water content in clouds, and surface temperature. The instrument monitored five selected frequencies continuously. The data were recorded on a magnetic tape so they could be transmitted later. Electrically Scanning Microwave Radiometer (ESMR) ESMR was used for mapping the microwave radiation from Earth's surface. This information was used to measure the water content of clouds, and to observe sea ice. It was also used to test the use of microwaves to measure soil moisture. The antenna system was deployed after launch, and controlled by an onboard computer. Surface Composition Mapping Radiometer (SCMR) The SCMR measured the thermal emission characteristics of Earth's surface and sea surface temperatures. A scanning mirror rotated ten times per second to sense sections wide. SCMR malfunctioned soon after launch. Temperature/Humidity Infrared Radiometer (THIR) THIR was used for measuring cloud top temperatures and water vapor content in the stratosphere. It could measure cloud temperatures both in the day and at night. The sensing unit was a bolometer made from germanium. References Weather satellites of the United States 1972 in spaceflight
Nimbus 5
[ "Astronomy" ]
503
[ "Astronomy stubs", "Spacecraft stubs" ]
57,615,200
https://en.wikipedia.org/wiki/Undecylprodigiosin
Undecylprodigiosin is an alkaloid produced by some Actinomycetes bacteria. It is a member of the prodiginines group of natural products and has been investigated for potential antimalarial activity. Natural sources Undecylprodigiosin is a secondary metabolite found in some Actinomycetes, for example Actinomadura madurae, Streptomyces coelicolor and Streptomyces longisporus. Production Biosynthesis The biosynthesis of undecylprodigiosin starts with the PCP apoprotein, which is transformed into the holoprotein using acetyl-CoA and a PPTase; adenylation then occurs utilizing L-proline and ATP. The resulting molecule is then oxidized by a dehydrogenase enzyme. Elongation by decarboxylative condensation with malonyl-CoA is followed by another decarboxylative condensation with L-serine using the α-oxamine synthase (OAS) domain. The compound is then cyclized, oxidized with a dehydrogenase and methylated with SAM to give the 4-methoxy-2,2′-bipyrrole-5-carboxaldehyde (MBC) intermediate, which reacts with 2-undecylpyrrole (2-UP) to give undecylprodigiosin. Laboratory The first total synthesis of undecylprodigiosin was published in 1966, confirming the chemical structure. As with the biosynthesis, the key intermediate was MBC. Uses As with other prodiginines, the compound has been investigated for its pharmaceutical potential as an anticancer, immunosuppressant, or antimalarial agent. References Streptomyces Alkaloids Pyrroles
Undecylprodigiosin
[ "Chemistry" ]
386
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
59,204,782
https://en.wikipedia.org/wiki/Peptide%20loading%20complex
The peptide-loading complex (PLC) is a short-lived, multisubunit membrane protein complex that is located in the endoplasmic reticulum (ER). It orchestrates peptide translocation and selection by major histocompatibility complex class I (MHC-I) molecules. Stable peptide-MHC-I complexes are released to the cell surface to promote the T-cell response against malignant or infected cells. In turn, T-cells recognize the presented peptides, which can be immunogenic or non-immunogenic. Overview A PLC assembly consists of seven subunits, including the transporters associated with antigen processing (TAP1 and TAP2 – jointly referred to as TAP), the oxidoreductase ERp57, the MHC-I heterodimer, and the chaperones tapasin and calreticulin. TAP transports proteasomal degradation products from the cytosol into the lumen of the ER, where they are loaded onto MHC-I molecules. The peptide-MHC-I complexes then move via a secretory pathway to the cell surface, presenting their antigenic load to cytotoxic T-cells. In general, nascent MHC-I heavy chains are chaperoned by the calnexin–calreticulin system in the ER. Together with β2-microglobulin (β2m), MHC-I heavy chains form assemblies of heterodimers that act as receptors for antigenic peptides. Empty MHC-I heterodimers are recruited by calreticulin and form the short-lived macromolecular PLC, where the chaperone tapasin provides further stabilization of the MHC-I molecules. Furthermore, ERp57 and tapasin form disulfide-linked conjugates, and tapasin is crucial for maintaining the structural stability of the PLC as well as facilitating optimal peptide loading. After final quality control, during which MHC-I heterodimers undergo peptide editing, stable peptide–MHC-I complexes are released to the cell surface for T-cell recognition. The PLC can serve a large variety of MHC-I allomorphs, thus playing a central role in the differentiation and priming of T lymphocytes, and in controlling viral infections and tumour development. Structure The structure of the human PLC has been determined using single-particle electron cryo-microscopy (cryo-EM). The PLC, measuring 150 Å by 150 Å and with a total height of 240 Å, is organized around the transporter associated with antigen processing (TAP). It includes molecules such as tapasin, calreticulin, ERp57, and major histocompatibility complex class I (MHC-I), arranged in a pseudo-symmetric pattern. TAP TAP is a heterodimeric complex, consisting of TAP1 (ABCB2) and TAP2 (ABCB3), members of the ABC transporter superfamily. The common feature of all ABC transporters is their organization into 1) two transmembrane domains (TMDs) and 2) two nucleotide-binding domains (NBDs). The two domains are coupled to each other and, upon ATP binding, conformational changes in the TMDs allow proteasomal degradation products to move across the membrane. TAP recognizes and transports the antigen peptides produced in the cytosol straight into the ER, while tapasin recognizes the kinds of peptides that have the ability to form stable complexes with MHC-I. This process is known as peptide proofreading or editing. Peptides selected through proofreading improve MHC-I stability; tapasin also contributes to the editing of immunogenic peptide epitopes. However, only recently was it proven via biochemical, biophysical, and structural studies that a key function in adaptive immunity, the catalytic mechanism of peptide proofreading, is performed by tapasin and TAPBPR (TAP-binding protein-related, a tapasin homologue). Tapasin Cresswell and co-workers first discovered tapasin (TAP-associated glycoprotein) as a 48 kDa protein in complexes isolated with TAP1 antibodies from digitonin lysates of human B lymphoblastoid cells. Tapasin binds HC/β2m, along with ER chaperones, to the peptide transporter. It is located in the ER, and its function comprises holding class I molecules, jointly with the chaperone calreticulin and ERp57, to TAP. Studies of a tapasin-deficient cell line and of mice bearing a disrupted tapasin gene revealed short-lived, unstable complexes of class I molecules. Tapasin and TAP are very important for the stabilization of the class I molecules and also for the optimization of the peptide presented to cytotoxic T cells. A PLC-independent tapasin homologue protein named TAPBPR was found that has the ability to act as a second MHC-I-specific peptide proofreader or editor, but does not possess a transmembrane domain. Tapasin and TAPBPR share similar binding interfaces on MHC-I, as shown by the X-ray structure of TAPBPR with MHC-I (heavy chain and β2-microglobulin). The use of a photo-cleavable high-affinity peptide allowed researchers to form stable (peptide-bound) MHC-I molecules and afterwards to form a stable TAPBPR–MHC-I complex upon UV cleavage of the photosensitive peptide. ERp57 ERp57 is an enzyme of the thiol oxidoreductase family located in the ER. It is attached to substrates in an indirect fashion through association with the molecular chaperone calreticulin of the peptide-loading complex. In the early stages of the generation of MHC-I molecules, ERp57 is associated with free MHC-I heavy chains. Its function thus involves the formation of disulfide bonds in heavy chains, the oxidative folding of the heavy chain, and finally the loading of peptides onto MHC-I molecules. MHC-I MHC-I heavy chains are chaperoned by the calnexin-calreticulin complex in the ER. In addition to this, β2-microglobulin (β2m) is attached to the heavy chains, and as heterodimers they act as receptors for antigenic peptides. When MHC-I chains are empty, they are recruited by calreticulin and form a transient PLC. Tapasin regularly plays a role in the stabilization of MHC-I. Only after MHC-I heterodimers undergo peptide proofreading or editing are stable pMHC-I (peptide-MHC-I) complexes released to the cell surface for recognition and destruction of virus-infected or malignant cells. In general, each individual organism owns a collection of six MHC-I molecules (three from each parent). Thus, in autoimmune emergencies, compatible donors are relatives who carry a collection of MHC-I molecules similar to that of the recipient. Calreticulin Calreticulin – especially its lectin-like domain – interacts with MHC-I. The P domain faces the MHC-I peptide-binding site towards ERp57. This orientation makes it possible for tapasin to attach to and secure MHC-I. This translocation of TAP facilitates its opening out into an ER luminal cavity, edged by standard membrane entry points such as those for tapasin and MHC-I. These two entry points facilitate the recruitment of MHC-I, optimal peptide loading, and the eventual release of MHC-I to the cell surface for T-cell recognition. References Peptides Immune system Protein targeting Transmembrane proteins
Peptide loading complex
[ "Chemistry", "Biology" ]
1,681
[ "Biomolecules by chemical classification", "Immune system", "Protein targeting", "Organ systems", "Cellular processes", "Molecular biology", "Peptides" ]
59,205,557
https://en.wikipedia.org/wiki/Messaging%20Layer%20Security
Messaging Layer Security (MLS) is a security layer for end-to-end encryption of messages. It is maintained by the MLS working group of the Internet Engineering Task Force, and is designed to provide an efficient and practical security mechanism for groups as large as 50,000 members and for those who access chat systems from multiple devices. Security properties Security properties of MLS include message confidentiality, message integrity and authentication, membership authentication, asynchronicity, forward secrecy, post-compromise security, and scalability. History The idea was born in 2016 and first discussed in an unofficial meeting during IETF 96 in Berlin with attendees from Wire, Mozilla and Cisco. Initial ideas were based on pairwise encryption for secure 1:1 and group communication. In 2017, an academic paper introducing Asynchronous Ratcheting Trees was published by the University of Oxford and Facebook, setting the focus on more efficient encryption schemes. The first BoF (birds of a feather) session took place in February 2018 at IETF 101 in London. The founding members are Mozilla, Facebook, Wire, Google, Twitter, University of Oxford, and INRIA. On March 29, 2023, the IETF approved publication of Messaging Layer Security (MLS) as a new standard; it was officially published as RFC 9420 on July 19, 2023. At that time, Google announced it intended to add MLS to the end-to-end encryption used by Google Messages over RCS. Matrix is one of the protocols that has announced plans to migrate to MLS. Research on adding post-quantum cryptography (PQC) to MLS is ongoing, but MLS does not currently support PQC. Implementations OpenMLS: language: Rust, license: MIT MLS++: language: C++, license: BSD-2 mls-rs: language: Rust, license: MIT, Apache 2.0 MLS-TS: language: TypeScript, license: Apache 2.0 References External links RFC 9420 The Messaging Layer Security (MLS) Protocol Cryptography Internet privacy Secure communication
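MLS owes its scalability to the ratchet tree (TreeKEM) construction in RFC 9420: when a member refreshes its key material, only the nodes on its path to the tree's root change, so the work per update grows logarithmically in the group size. The toy sketch below is illustrative only; it models none of MLS's actual HPKE encryption or key schedule, and all names and the hash-based derivation are invented for the example. It simply shows the logarithmic path length for a 50,000-member group:

```python
import hashlib
import math

def path_to_root(leaf: int, n_leaves: int) -> list[int]:
    """Indices of the ancestors of `leaf` in a complete binary tree
    with n_leaves leaves (a toy stand-in for the MLS ratchet tree)."""
    depth = math.ceil(math.log2(n_leaves))
    path, node = [], leaf
    for _ in range(depth):
        node //= 2          # climb one level towards the root
        path.append(node)
    return path

def derive_root_secret(leaf_secret: bytes, path: list[int]) -> bytes:
    """Hash the fresh leaf secret up the tree; each hop stands in for
    one TreeKEM path-secret derivation."""
    secret = leaf_secret
    for node in path:
        secret = hashlib.sha256(secret + node.to_bytes(4, "big")).digest()
    return secret

n = 50_000                              # group size from the MLS design goal
path = path_to_root(leaf=12_345, n_leaves=n)
print(len(path))                        # 16: cost grows as log2(n), not n
print(derive_root_secret(b"fresh-leaf-secret", path).hex()[:16])
```

A naive pairwise design would need on the order of n fresh ciphertexts per member update; the tree brings this down to the 16 hops printed above, which is what makes the 50,000-member design goal practical.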
Messaging Layer Security
[ "Mathematics", "Engineering" ]
409
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
59,207,188
https://en.wikipedia.org/wiki/Molecular%20layer%20deposition
Molecular layer deposition (MLD) is a vapour phase thin film deposition technique based on self-limiting surface reactions carried out in a sequential manner. Essentially, MLD resembles the well established technique of atomic layer deposition (ALD) but, whereas ALD is limited to exclusively inorganic coatings, the precursor chemistry in MLD can use small, bifunctional organic molecules as well. This enables, as well as the growth of organic layers in a process similar to polymerization, the linking of both types of building blocks together in a controlled way to build up organic-inorganic hybrid materials. Even though MLD is a known technique in the thin film deposition sector, due to its relative youth it is not as explored as its inorganic counterpart, ALD, and broad development across the sector is expected in the upcoming years. History Molecular layer deposition is a sister technique of atomic layer deposition. While the history of atomic layer deposition dates back to the 1970s, thanks to the independent work of Valentin Borisovich Aleskovskii and Tuomo Suntola, the first MLD experiments with organic molecules were not published until 1991, when an article from Tetsuzo Yoshimura and co-workers appeared regarding the synthesis of polyimides using amines and anhydrides as reactants. After some work on organic compounds along the 1990s, the first papers related to hybrid materials emerged, after combining both ALD and MLD techniques. Since then, the number of articles submitted per year on molecular layer deposition has increased steadily, and a more diverse range of deposited layers has been observed, including polyamides, polyimines, polyurea, polythiourea and some copolymers, with special interest in the deposition of hybrid films. Reaction mechanism In similar fashion to an atomic layer deposition process, during an MLD process the reactants are pulsed in a sequential, cyclical manner, and all gas-solid reactions are self-limiting on the sample substrate. Each of these cycles is called an MLD cycle, and layer growth is measured as growth per cycle (GPC), usually expressed in nm/cycle or Å/cycle. In a model two-precursor experiment, an MLD cycle proceeds as follows: First, precursor 1 is pulsed into the reactor, where it reacts with and chemisorbs to the surface species on the sample surface. Once all adsorption sites have been covered and saturation has been reached, no more precursor will attach, and excess precursor molecules and generated byproducts are withdrawn from the reactor, either by purging with inert gas or by pumping the reactor chamber down. Only when the chamber has been properly purged with inert gas/pumped down to base pressure (~10^-6 mbar range) and all unwanted molecules from the previous step have been removed, can precursor 2 be introduced. Otherwise, the process runs the risk of CVD-type growth, where the two precursors react in the gaseous phase before attaching to the sample surface, which would result in a coating with different characteristics. Next, precursor 2 is pulsed, which reacts with the precursor 1 molecules anchored to the surface. This surface reaction is again self-limiting and, followed again by purging/pumping the reactor to base pressure, leaves behind a layer terminated with surface groups that can again react with precursor 1 in the next cycle.
In the ideal case, the repetition of the MLD cycle will build up an organic/inorganic film one molecular layer at a time, enabling highly conformal coatings with precise thickness control and film purity. If ALD and MLD are combined, more precursors in a wider range can be used, both inorganic and organic. In addition, other reactions can be included in the ALD/MLD cycles as well, such as plasma or radical exposures. This way, an experiment can be freely customised according to the research needs by tuning the number of ALD and MLD cycles and the steps contained within the cycles. Process chemistry and surface reactions Precursor chemistry plays a key role in MLD. The chemical properties of the precursor molecules drive the composition, structure and stability of the deposited hybrid material. To reach the saturation stage in a short time and ensure a reasonable deposition rate, precursors must chemisorb on the surface, react rapidly with the surface active groups and react with each other. The desired MLD reactions should have a large negative ∆G value. Organic compounds are employed as precursors for MLD. For their effective use, the precursor should have sufficient vapor pressure and thermal stability to be transported in the gas phase to the reaction zone without decomposing. Volatility is influenced by the molecular weight and intermolecular interactions. One of the challenges in MLD is to find an organic precursor that has sufficient vapor pressure, reactivity and thermal stability. Most organic precursors have low volatility, and heating is necessary to ensure a sufficient supply of vapor reaching the substrate. The backbone of the organic precursors can be flexible (i.e. aliphatic) or rigid (i.e. aromatic), employed with the functional groups. The organic precursors are usually homo- or heterobifunctional molecules with -OH, -COOH, -NH2, -CONH2, -CHO, -COCl, -SH, -CNO, -CN, alkene, etc. functional groups. The bifunctional nature of the precursors is essential for continuous film growth, as one group is expected to react with the surface while the other remains accessible to react with the next pulse of the co-reactant. The attached functional groups play a vital role in the reactivity and binding modes of the precursor, and they should be able to react with the functional groups present at the surface. A flexible backbone may hinder the growth of a continuous and dense film by back-coordination, blocking the reactive sites and thus lowering the film growth rate. Thus, finding an MLD precursor that fulfils all the above-mentioned requirements is not a straightforward process. Surface groups play a crucial role as reaction intermediates. The substrate is usually hydroxylated or hydrogen terminated, and hydroxyls serve as reactive linkers for condensation reactions with metals. The inorganic precursor reacts with surface reactive groups via the corresponding linking chemistry, which leads to the formation of new O-metal bonds. The metal precursor step changes the surface termination, leaving the surface with new reactive sites ready to react with the organic precursor. The organic precursor reacts at the resulting surface by bonding covalently with the metal sites, releasing metal ligands, and leaves another reactive molecular layer ready for the next pulse. Byproducts are released after each adsorption step.
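The pulse–purge sequencing described above maps naturally onto a simple recipe loop. The sketch below is a minimal, hypothetical illustration: the step timings, precursor labels and the GPC value are invented placeholders, not parameters of any real process:

```python
# Toy MLD recipe runner: two self-limiting half-reactions per cycle,
# each followed by an inert-gas purge; thickness = cycles * GPC.

RECIPE = [                       # (step, duration in seconds) -- illustrative values
    ("pulse precursor 1", 2.0),  # e.g. a metal precursor
    ("purge N2",          10.0),
    ("pulse precursor 2", 3.0),  # e.g. a bifunctional organic
    ("purge N2",          15.0),
]
GPC_NM = 0.25                    # assumed growth per cycle, nm/cycle

def run(cycles: int) -> float:
    """Execute the recipe `cycles` times and return the film thickness (nm)."""
    for _ in range(cycles):
        for step, duration in RECIPE:
            pass                 # a real tool would actuate valves here for `duration` s
    return cycles * GPC_NM       # linear growth once saturation is reached

print(f"{run(200):.1f} nm in 200 cycles")                 # 50.0 nm
print(f"cycle time: {sum(t for _, t in RECIPE):.0f} s")   # 30 s per cycle
```

Note how the purges dominate the cycle time: at 30 s per cycle, the 200-cycle, 50 nm film above already takes 100 minutes, which previews the throughput limitation discussed later in the article.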
Process considerations When performing an MLD process, as a variant of ALD, certain aspects need to be taken into account in order to obtain the desired layer with adequate purity and growth rate: Saturation Before starting an experiment, the researcher must know whether the process designed will yield saturated or unsaturated conditions. If this information is unknown, it is a priority to determine it in order to have accurate results. If the precursor pulsing times allowed are not long enough, the surface reactive sites of the sample will not have sufficient time to react with the gaseous molecules and form a monolayer, which will translate into a lower growth per cycle (GPC). To solve this issue, a saturation experiment can be performed, where the film growth is monitored in-situ at different precursor pulsing times, and the resulting GPCs are then plotted against pulsing time to find the saturation conditions. Additionally, too short purging times will result in precursor molecules remaining in the reactor chamber, which will react in the gaseous phase with the new precursor molecules introduced during the next step, producing an undesired CVD-grown layer instead. MLD window Film growth usually depends on the temperature of deposition, within what is called the MLD window, a temperature range in which, ideally, film growth will remain constant. When working outside of the MLD window, a number of problems can occur: When working at lower temperatures: limited growth, due to insufficient reactivity; or condensation, which will appear as a higher GPC than expected. When working at higher temperatures: precursor decomposition, which originates non-saturating, uncontrolled growth; or desorption, which will lower deposition rates. In addition, even when working within the MLD window, GPCs can sometimes still vary with temperature, due to the effect of other temperature-dependent factors, such as film diffusion, the number of reactive sites or the reaction mechanism. Non-idealities Non-monolayer growth When carrying out an MLD process, the ideal case of one monolayer per cycle is not usually realised. In practice, many parameters affect the actual growth rate of the film, which in turn produce non-idealities like sub-monolayer growth (deposition of less than a full layer per cycle), island growth and coalescence of islands. Substrate effects During an MLD process, film growth will usually reach a constant value (GPC). However, during the first cycles, incoming precursor molecules will not interact with a surface of the grown material but rather with the bare substrate, and thus will undergo different chemical reactions with different reaction rates. As a consequence of this, growth rates can experience a substrate enhancement (faster substrate-film reaction than film-film reactions) and therefore higher GPCs in the first cycles; or a substrate inhibition (slower substrate-film reaction than film-film reactions), accompanied by a GPC decrease at the beginning. In either case, the steady-state growth rates of some depositions can end up very similar. Lower than anticipated growth In MLD, experiments often yield lower than anticipated growth rates. This can be traced to several factors, such as: Molecule tilting: organic molecules with long chains are prone to not remaining completely perpendicular to the surface, lowering the number of surface sites.
Bidentate ligands: when a reacting molecule has two functional groups, it may bend and react with two surface sites instead of remaining upright on the surface. This has been shown, for instance, for titanicones grown with ethylene glycol and glycerol: because glycerol has an additional hydroxyl group compared to ethylene glycol, it can still provide a reactive hydroxyl group in the case of a double reaction of the terminal hydroxyl groups with the surface. Steric hindrance: organic precursors are often bulky, and can cover several surface groups when attached to the surface. Long pulsing times: organic precursors can have very low vapour pressures, and very long pulsing times may be necessary in order to achieve saturation. In addition, long purging times are usually needed to remove all unreacted molecules from the chamber afterward. Low temperatures: to increase the precursor vapor pressure, one might think of increasing its temperature. Nevertheless, organic precursors are usually very thermally fragile, and a temperature increase may induce decomposition. Gas-phase: many organic reactions are normally carried out in the liquid phase, and are therefore dependent on acid-base interactions or solvation effects. These effects are not present in the gaseous phase and, as a consequence, many processes will yield lower reaction rates or will not be possible at all. This phenomenon can be avoided as much as possible by using organic precursors with stiff backbones or with more than two functional groups, using a three-step reaction sequence, or using precursors in which ring-opening reactions occur. Physical state of precursors Liquid precursors High volatility and ease of handling make liquid precursors the preferred choice for ALD/MLD. Generally, liquid precursors have high enough vapor pressures at room temperature and hence require limited to no heating. They are also not prone to common problems with solid precursors like caking, particle-size change and channeling, and they provide consistent and stable vapor delivery. Hence, some solid precursors with low melting points are generally used in their liquid states. A carrier gas is usually employed to carry the precursor vapor from its source to the reactor. The precursor vapors can be directly entrained into this carrier gas with the help of solenoid and needle valves. On the other hand, the carrier gas may be flown over the head space of a container containing the precursor, or bubbled through the precursor. For the latter, dip-tube bubblers are very commonly used. The setup comprises a hollow tube (inlet) opening almost at the bottom of a sealed ampoule filled with precursor, and an outlet at the top of the ampoule. An inert carrier gas like nitrogen or argon is bubbled through the liquid via the tube and led to the reactor downstream via the outlet. Owing to the relatively fast evaporation kinetics of liquids, the outgoing carrier gas is nearly saturated with precursor vapor. The vapor supply to the reactor can be regulated by adjusting the carrier gas flow and the temperature of the precursor and, if needed, can be diluted further down the line. It must be ensured that the connections downstream from the bubbler are kept at high enough temperatures so as to avoid precursor condensation. The setup can also be used in spatial reactors, which demand an extremely high, stable and constant supply of precursor vapor. In conventional reactors, hold cells can also be used as a temporary reservoir of precursor vapor.
In such a setup, the cell is initially evacuated. It is then opened to a precursor source and allowed to fill with precursor vapor. The cell is then cut off from the precursor source. Depending upon the reactor pressure, the cell may then be pressurized with an inert gas. Finally, the cell is opened to the reactor and the precursor is delivered. This cycle of filling and emptying the hold (storage) cell can be synced with an ALD cycle. The setup is not suitable for spatial reactors, which demand a continuous supply of vapor. Solid precursors Solid precursors are not as common as liquid ones but are still used. A very common example of a solid precursor having potential applications in ALD for the semiconductor industry is trimethylindium (TMIn). In MLD, some solid co-reactants like p-aminophenol, hydroquinone and p-phenylenediamine can overcome the problem of double reactions faced by liquid reactants like ethylene glycol. Their rigid aromatic backbone can be cited as one of the reasons for this. Growth rates obtained from such precursors are usually higher than from precursors with flexible backbones. However, most solid precursors have relatively low vapor pressures and slow evaporation kinetics. For temporal setups, the precursor is generally loaded into a heated boat and the overhead vapors are swept to the reactor by a carrier gas. However, slow evaporation kinetics make it difficult to deliver equilibrium vapor pressures. In order to ensure maximum saturation of the carrier gas with the precursor vapor, the contact between the carrier gas and the precursor needs to be sufficiently long. A simple dip-tube bubbler, commonly used for liquids, can be used for this purpose. However, the consistency of vapor delivery from such a setup is vulnerable to evaporative/sublimative cooling of the precursor, precursor caking, carrier gas channeling, changes in precursor morphology and particle-size change. Also, blowing high flows of carrier gas through a solid precursor can lead to small particles being carried away to the reactor or a downstream filter, thereby clogging it. In order to avoid these problems, the precursor may first be dissolved in a non-volatile inert liquid or suspended in it, and the solution/suspension can then be used in a bubbler setup. Apart from this, some special vapor delivery systems have also been designed for solid precursors to ensure stable and consistent delivery of precursor vapor for longer durations and higher carrier flows. Gaseous precursors ALD/MLD are both gas phase processes. Hence, precursors are required to be introduced into the reaction zones in their gaseous form. A precursor already existing in a gaseous physical state makes its transport to the reactor very straightforward and hassle-free: for example, there is no need to heat the precursor, which reduces the risk of condensation. However, precursors are seldom available in the gaseous state. On the other hand, some ALD co-reactants are available in gaseous form. Examples include H2S, used for sulphide films; NH3, used for nitride films; and O2 plasmas and O3, used to produce oxides. The most common and straightforward way of regulating the supply of these co-reactants to the reactor is a mass flow controller attached between the source and the reactor. They can also be diluted with an inert gas to control their partial pressure. Film characterisation Several characterisation techniques have evolved over time as the demand for ALD/MLD films for different applications has increased.
This includes lab-based characterisation and efficient synchrotron-based X-ray techniques. Lab-based characterisation Since they both follow a similar protocol, almost all characterisation applicable to ALD generally applies to MLD as well. Many tools have been employed to characterise MLD film properties such as thickness, surface and interface roughness, composition, and morphology. Thickness and roughness (surface and interface) of a grown MLD film are of utmost importance and are usually characterised ex-situ by X-ray reflectivity (XRR). In-situ techniques offer easier and more efficient characterisation than their ex-situ counterparts, among which spectroscopic ellipsometry (SE) and quartz crystal microbalance (QCM) have become very popular for measuring thin films from a few angstroms to a few micrometers with exceptional thickness control. X-ray photoelectron spectroscopy (XPS) and X-ray diffractometry (XRD) are widely used to gain insights into film composition and crystallinity, respectively, whereas atomic force microscopy (AFM) and scanning electron microscopy (SEM) are frequently utilised to observe surface roughness and morphology. As MLD mostly deals with hybrid materials, comprising both organic and inorganic components, Fourier transform infrared spectroscopy (FTIR) is an important tool for identifying the functional groups added or removed during the MLD cycles, and a powerful one for elucidating the underlying chemistry and surface reactions of each sub-cycle of an MLD process. Synchrotron-based characterisation A synchrotron is an immensely powerful source of X-rays that reaches energy levels which cannot be achieved in a lab-based environment. It produces synchrotron radiation, the electromagnetic radiation emitted when charged particles undergo radial acceleration, whose high power levels offer a deeper understanding of processes and lead to cutting-edge research outputs. Synchrotron-based characterisations also offer potential opportunities for understanding the basic chemistry and developing fundamental knowledge about MLD processes and their potential applications. The combination of in-situ X-ray fluorescence (XRF) and grazing-incidence small-angle X-ray scattering (GISAXS) has been demonstrated as a successful methodology for studying nucleation and growth during ALD processes and, although this combination has not yet been investigated in detail for MLD processes, it holds great potential to improve the understanding of the initial nucleation and internal structure of the hybrid materials developed by MLD or by vapour phase infiltration (VPI). Potential applications The main application for molecular-scale-engineered hybrid materials relies on their synergetic properties, which surpass the individual performance of their inorganic and organic components. The main fields of application of MLD-deposited materials are: Packaging / encapsulation: depositing ultrathin, pinhole-free and flexible coatings with improved mechanical properties (flexibility, stretchability, reduced brittleness). One example is gas barriers on organic light-emitting diodes (OLEDs). Electronics: tailoring materials with special mechanical and dielectric properties, such as advanced integrated circuits that require particular insulators, or flexible thin film transistors with high-k gate dielectrics; also, the recovery of energy wasted as heat as electric power with certain thermoelectric devices.
Biomedical applications: to enhance cell growth and adhesion, or the opposite, generating materials with anti-bacterial properties. These can be used in research areas like sensing, diagnostics or medicine delivery. Combining inorganic and organic building blocks on a molecular scale has proved to be challenging, due to the different preparative conditions needed for forming inorganic and organic networks. Current routes are often based on solution chemistry, e.g. sol-gel synthesis combined with spin-coating, dipping or spraying, to which MLD is an alternative. MLD usage for dielectric materials. Low-k The dielectric constant (k) of a medium is defined as the ratio of the capacitor capacitances with and without the medium. Nowadays, delay, crosstalk and power dissipation caused by the resistance of the metal interconnection and the dielectric layer of nanoscale devices have become the main factors that limit the performance of a device and, as electronic devices are scaled down further, interconnect resistance-capacitance (RC) delay may dominate the overall device speed. To solve this, current work is focused on minimising the dielectric constant of materials by combining inorganic and organic materials, whose reduced capacitance allows for shrinkage of the spacing between metal lines and, with it, the ability to decrease the number of metal layers in a device. In these kinds of materials, the inorganic part must be hard and resistant and, for that purpose, metal oxides and fluorides are commonly used. However, since these materials are more brittle, organic polymers are also added, providing the hybrid material with a low dielectric constant, good gap-filling ability, high flatness, low residual stress and low thermal conductivity. In current research, great efforts are being made to prepare low-k materials by MLD with a k value of less than 3. High-k Novel organic thin-film transistors require a high-performance dielectric layer, which should be thin and possess a high k value. MLD makes it possible to tune the k value and dielectric strength by altering the amount and the ratio of the organic and inorganic components. Moreover, the use of MLD achieves better mechanical properties in terms of flexibility. Various hybrid dielectrics have already been developed: zircone hybrids from zirconium tert-butoxide (ZTB) and ethylene glycol (EG); Al2O3-based hybrids such as self-assembled MLD-deposited octenyltrichlorosilane (OTS) layers and Al2O3 linkers. Additionally, a dielectric Ti-based hybrid from TiCl4 and fumaric acid proved its applicability in charge memory capacitors. MLD for porous materials MLD has high potential for the deposition of porous hybrid organic-inorganic and purely organic films, such as metal-organic frameworks (MOFs) and covalent organic frameworks (COFs). Thanks to their defined pore structure and chemical tunability, thin films of these novel materials are expected to be incorporated in the next generation of gas sensors and low-k dielectrics. Conventionally, thin films of MOFs and COFs are grown via solvent-based routes, which are detrimental in a cleanroom environment and can cause corrosion of the pre-existing circuitry. As a cleanroom-compatible technique, MLD presents an attractive alternative, which has not been fully realized yet. To date, there are no reports of direct MLD of MOFs and COFs. Scientists are actively developing other solvent-free, all-gas-phase methods towards a true MLD process.
One of the early examples of an MLD-like process is the so-called "MOF-CVD". It was first realized for ZIF-8 utilizing a two-step process: ALD of ZnO followed by exposure to 2-methylimidazole linker vapor. It was later extended to several other MOFs. MOF-CVD is a single-chamber deposition method, and the reactions involved exhibit a self-limiting nature, bearing a strong resemblance to a typical MLD process. An attempt to perform direct MLD of a MOF by sequential reactions of a metal precursor and an organic linker commonly results in a dense and amorphous film. Some of these materials can serve as a MOF precursor after a specific gas-phase post-treatment. This two-step process presents an alternative to MOF-CVD. It has been successfully realized for a few prototypical MOFs: IRMOF-8, MOF-5 and UiO-66. Though the post-treatment step is necessary for MOF crystallization, it often requires harsh conditions (high temperature, corrosive vapors) that lead to rough and non-uniform films. A deposition with zero to minimal post-treatment is highly desirable for industrial applications. MLD for conductive materials Conductive and flexible films are crucial for numerous emerging applications, such as displays, wearable devices, photovoltaics, personal medical devices, etc. For example, a zincone hybrid is closely related to a ZnO film and, therefore, may combine the conductivity of ZnO with the flexibility of an organic layer. Zincones can be deposited from diethylzinc (DEZ), hydroquinone (HQ) and water to generate a molecular chain in the form of (−Zn-O-phenylene-O−)n, which is an electrical conductor. Measurements of a pure ZnO film showed a conductivity of ~14 S/m, while the MLD zincone showed ~170 S/m, demonstrating a considerable enhancement of the conductivity in the hybrid alloy of more than one order of magnitude. MLD for energy storage MLD coatings for battery electrodes One of the main applications of MLD in the batteries field is to coat battery electrodes with hybrid (organic-inorganic) coatings. The main reason is that these coatings can potentially protect the electrodes from the main sources of degradation without breaking: being more flexible than purely inorganic materials, they are able to cope with the volume expansion occurring in battery electrodes upon charge and discharge. MLD coatings on anodes: The implementation of silicon anodes in batteries is extremely interesting due to their high theoretical capacity (4200 mAh/g). Nevertheless, the huge volume change upon lithium alloying and dealloying is a big issue, as it leads to the degradation of the silicon anodes. MLD thin film coatings, such as alucones (AL-GL, AL-HQ), can be used on silicon as a buffering matrix due to their high flexibility and toughness, relieving the volume expansion of the Si anode and leading to a significant improvement in cycling performance. MLD coatings on cathodes: Li-sulfur batteries are of great interest due to their high energy density, which makes them promising for applications such as electric vehicles (EVs) and hybrid electric vehicles (HEVs). However, their poor cycle life, due to the dissolution of polysulfides from the cathode, is detrimental to battery performance. This, together with the large volume expansion, is among the main factors that lead to poor electrochemical performance. Alucone coatings (AL-EG) on sulfur cathodes have been successfully used to address these issues.
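As an aside, the 4200 mAh/g figure quoted above for silicon can be recovered from Faraday's law, assuming full lithiation to Li22Si5 (about 4.4 Li per Si; the stoichiometry is supplied by this assumption, not by the article):

$$Q = \frac{x\,F}{3.6\,M_{\mathrm{Si}}} = \frac{4.4 \times 96485\ \mathrm{C\,mol^{-1}}}{3.6 \times 28.09\ \mathrm{g\,mol^{-1}}} \approx 4200\ \mathrm{mAh\,g^{-1}},$$

where the factor 3.6 converts coulombs per gram into mAh per gram.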
MLD for thermoelectric materials Atomic/molecular layer deposition (ALD/MLD), as a thin film deposition technology with high precision and control, creates the opportunity to produce high-quality hybrid inorganic-organic superlattice structures. Adding organic barrier layers inside the inorganic lattice of thermoelectric materials improves the thermoelectric efficiency. This phenomenon is the result of a quenching effect that the organic barrier layers have on phonons. The electrons, which are mainly responsible for electrical transport through the lattice, can pass through the organic layers mostly intact, while the phonons, which are responsible for thermal transport, are suppressed to some degree. Consequently, the resulting films have better thermoelectric efficiency. Practical outlook It is believed that the application of barrier layers, along with other methods for increasing thermoelectric efficiency, can help to produce thermoelectric modules that are non-toxic, flexible, cheap, and stable. One such case is thermoelectric oxides of earth-abundant elements. These oxides, in comparison to other thermoelectric materials, have lower thermoelectric performance due to their higher thermal conductivity. Therefore, adding barrier layers by means of ALD/MLD is a good method to overcome this negative characteristic of oxides. MLD for biomedical applications Bioactive and biocompatible surfaces MLD can also be applied to the design of bioactive and biocompatible surfaces for targeted cell and tissue responses. Bioactive materials include materials for regenerative medicine, tissue engineering (tissue scaffolds), biosensors, etc. The important factors that can affect the cell-surface interaction, as well as the immune response of the system, are surface chemistry (e.g. functional groups, surface charge and wettability) and surface topography. Understanding these properties is crucial in order to control the attachment and proliferation of cells and the resultant bioactivity of the surfaces. Furthermore, the choice of organic building blocks and the type of biomolecules (e.g. proteins, peptides or polysaccharides) during the formation of bioactive surfaces is a key factor for the cellular response of the surface. MLD allows for the building of bioactive, precise structures by combining such organic molecules with inorganic biocompatible elements like titanium. The use of MLD for biomedical applications is not widely studied and is a promising field of research. This method enables surface modification and thus can functionalize a surface. A study published in 2017 used MLD to create bioactive scaffolds by combining titanium clusters with amino acids such as glycine, L-aspartic acid and L-arginine as organic linkers, to enhance rat conjunctival goblet cell proliferation. This novel group of organic-inorganic hybrid materials was called titaminates. Also, bioactive hybrid materials that contain titanium and primary nucleobases such as thymine, uracil and adenine show high (>85%) cell viability and potential application in the field of tissue engineering. Antimicrobial surfaces Hospital-acquired infections caused by pathogenic microorganisms such as bacteria, viruses, parasites or fungi are a major problem in modern healthcare. A large number of these microbes have developed the ability to stop popular antimicrobial agents (such as antibiotics and antivirals) from working against them.
To overcome the increasing problem of antimicrobial resistance, it has become necessary to develop alternative and effective antimicrobial technologies to which pathogens will not be able to develop resistance. One possible approach is to cover the surfaces of medical devices with antimicrobial agents, e.g. photosensitive organic molecules. In the method called antimicrobial photodynamic inactivation (aPDI), photosensitive organic molecules utilise light energy to form highly reactive oxygen species that oxidize biomolecules (like proteins, lipids and nucleic acids), leading to pathogen death. Furthermore, aPDI can locally treat the infected area, which is an advantage for small medical devices like dental implants. MLD is a suitable technique for combining such photosensitive organic molecules, like aromatic acids, with biocompatible metal clusters (e.g. zirconium or titanium) to create light-activated antimicrobial coatings with controlled thickness and accuracy. Recent studies show that MLD-fabricated surfaces based on 2,6-naphthalenedicarboxylic acid and Zr-O clusters were successfully used against Enterococcus faecalis in the presence of UV-A irradiation. Advantages and limitations Advantages The main advantage of molecular layer deposition relates to its slow, cyclical approach. While other techniques may yield thicker films in shorter times, molecular layer deposition is known for its thickness control at angstrom-level precision. In addition, its cyclical approach yields films with excellent conformality, making it suitable for the coating of surfaces with complex shapes. The growth of multilayers consisting of different materials is also possible with MLD, and the ratio of organic/inorganic hybrid films can easily be controlled and tailored to the research needs. Limitations As with the advantages, the main disadvantage of molecular layer deposition is also related to its slow, cyclical approach. Since both precursors are pulsed sequentially during each cycle, and saturation needs to be achieved each time, the time required to obtain a sufficiently thick film can easily be on the order of hours, if not days. In addition, before depositing the desired films it is always necessary to test and optimise all parameters for the process to yield successful results. Another issue related to hybrid films deposited via MLD is their stability. Hybrid organic/inorganic films can degrade or shrink in H2O. However, this can be used to facilitate the chemical transformation of the films. Modifying the MLD surface chemistries can provide a solution to increase the stability and mechanical strength of hybrid films. In terms of cost, regular molecular layer deposition equipment can cost between $200,000 and $800,000. What's more, the cost of the precursors used needs to be taken into consideration. Similar to the atomic layer deposition case, there are some rather strict chemical limitations for precursors to be suitable for molecular layer deposition.
MLD precursors must have Sufficient volatility Aggressive and complete reactions Thermal stability No etching of the film or substrate material Sufficient purity In addition, it is advisable to find precursors with the following characteristics: Gases or highly volatile liquids High GPC Unreactive, volatile byproducts Inexpensive Easy to synthesise and handle Non-toxic Environmentally friendly References External links ALD/MLD process animation ALD/MLD process design and optimisation Thin film deposition Semiconductor device fabrication Chemical processes
Molecular layer deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
7,150
[ "Thin film deposition", "Microtechnology", "Coatings", "Thin films", "Chemical processes", "Semiconductor device fabrication", "nan", "Chemical process engineering", "Planes (geometry)", "Solid state engineering" ]
59,208,283
https://en.wikipedia.org/wiki/M%20equilibrium
M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization ("no mistakes") and perfect beliefs ("no surprises"). The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis. Background A large body of work in experimental game theory has documented systematic departures from Nash equilibrium, the cornerstone of classic game theory. The lack of empirical support for Nash equilibrium led Nash himself to return to doing research in pure mathematics. Selten, who shared the 1994 Nobel Prize with Nash, likewise concluded that "game theory is for proving theorems, not for playing games". M equilibrium is motivated by the desire for an empirically relevant game theory. M equilibrium accomplishes this by replacing the two main assumptions underlying classical game theory, perfect maximization and rational expectations, with the weaker notions of ordinal monotonicity – players' choice probabilities are ranked the same as the expected payoffs based on their beliefs – and ordinal consistency – players' beliefs yield the same ranking of expected payoffs as their choices. M equilibria do not follow from the fixed points that arise by imposing rational expectations and that have long dominated economics. Instead, the mathematical machinery used to characterize M equilibria is semi-algebraic geometry. Interestingly, some of this machinery was developed by Nash himself. The characterization of M equilibria as semi-algebraic sets allows for mathematically precise and empirically testable predictions. Definition M equilibrium is based on the following two conditions: Ordinal monotonicity: choice probabilities are ranked the same as the expected payoffs based on players' beliefs. This replaces the assumption of "perfect maximization". Ordinal consistency: players' beliefs yield the same ranking of expected payoffs as their choices. This replaces the rational expectations or perfect-beliefs assumption. Let $\sigma$ and $\mu$ denote the concatenations of players' choice and belief profiles respectively, and let $R$ and $\pi$ denote the concatenations of players' rank correspondences and profit functions. We write $\pi(\mu)$ for the profile of expected payoffs based on players' beliefs and $\pi(\sigma)$ for the profile of expected payoffs when beliefs are correct, i.e. for $\mu = \sigma$. The set of possible choice profiles is $\Sigma$ and the set of possible belief profiles is $M$. Definition: We say $(M^{\sigma}, M^{\mu})$ form an M equilibrium if they are the closures of the largest non-empty sets $S^{\sigma} \subseteq \Sigma$ and $S^{\mu} \subseteq M$ that satisfy: for all $(\sigma, \mu) \in S^{\sigma} \times S^{\mu}$, $\sigma \in R(\pi(\mu))$ and $R(\pi(\mu)) = R(\pi(\sigma))$. Properties It can be shown that, generically, M equilibria satisfy the following properties: M equilibria have positive measure in $\Sigma \times M$. M equilibria are "colorable" by a unique rank vector. Nash equilibria arise as boundary points of some M equilibrium. The number of M equilibria can generically be even or odd, and may be less than, equal to, or greater than the number of Nash equilibria. Also, any M equilibrium may contain zero, one, or multiple Nash equilibria. Importantly, the measure of any M equilibrium choice set is bounded and decreases exponentially with the number of players and the number of possible choices. Meta Theory Surprisingly, M equilibrium "minimally envelops" various parametric models based on fixed points, including Quantal Response Equilibrium.
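Because the definition above involves only ordinal comparisons, candidate choice and belief profiles can be checked numerically. The sketch below does this for a two-player game in Python; the payoff matrices, profile values, and function names are illustrative assumptions rather than part of the original formulation, and ties in rankings are not handled.

```python
import numpy as np

# Illustrative 2x2 payoff matrices (assumed values, not from the source):
# A[i, j] is the row player's payoff, B[i, j] the column player's.
A = np.array([[3.0, 0.0], [1.0, 2.0]])
B = np.array([[2.0, 1.0], [0.0, 3.0]])

def ranking(v):
    """Ordinal ranking of a vector's entries (0 = highest); ties not handled."""
    return np.argsort(np.argsort(-np.asarray(v)))

def check_m_conditions(sigma1, sigma2, mu1, mu2):
    """Check ordinal monotonicity and consistency for one candidate profile.

    sigma1, sigma2: players' mixed choices (row / column distributions).
    mu1: player 1's belief about player 2's play; mu2: player 2's belief
    about player 1's play.
    """
    # Expected payoffs given each player's beliefs about the opponent.
    pi1_beliefs = A @ mu1
    pi2_beliefs = B.T @ mu2
    # Expected payoffs when beliefs are correct (beliefs = actual choices).
    pi1_actual = A @ sigma2
    pi2_actual = B.T @ sigma1

    monotone = (ranking(sigma1) == ranking(pi1_beliefs)).all() and \
               (ranking(sigma2) == ranking(pi2_beliefs)).all()
    consistent = (ranking(pi1_beliefs) == ranking(pi1_actual)).all() and \
                 (ranking(pi2_beliefs) == ranking(pi2_actual)).all()
    return monotone, consistent

# Noisy best responses with beliefs ordinally matching actual play:
# this candidate profile satisfies both conditions, printing (True, True).
print(check_m_conditions(np.array([0.8, 0.2]), np.array([0.6, 0.4]),
                         np.array([0.6, 0.4]), np.array([0.85, 0.15])))
```

A full computation of an M equilibrium set would scan many such profiles and take closures of the regions where both checks pass; the sketch only verifies a single candidate.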
Unlike QRE, however, M equilibrium is parameter-free, easy to compute, and does not impose the rational-expectations condition of homogeneous and correct beliefs. Behavioral stability The interior of a colored M equilibrium set consists of choices and beliefs that are behaviorally stable. A profile is behaviorally stable when small perturbations in the game do not destroy its equilibrium nature. So an M equilibrium is behaviorally stable when it remains an M equilibrium even after the game is perturbed. Behavioral stability is a strengthening of the concept of strategic stability. See also Bounded rationality Behavioral game theory References Game theory equilibrium concepts
M equilibrium
[ "Mathematics" ]
810
[ "Game theory", "Game theory equilibrium concepts" ]
59,208,417
https://en.wikipedia.org/wiki/Cynosura%20%28nymph%29
In Greek mythology, Cynosura is the name of an Idaean Oread nymph from the island of Crete who brought up the young Zeus during his early years, when he hid from his father Cronus, and who ended up among the stars. Mythology Along with her fellow nymph Helice, Cynosura put the infant Zeus in a cave and nurtured him on Ida, in Crete, while the Dictaean Curetes deceived Cronus so he would not devour his son. One day, Cronus happened to visit Crete, so Zeus hid the nymphs by transforming them both into bears, while he changed his own shape into that of a dragon, in order to go undetected by Cronus. Eventually, after he became king of the gods, he honoured his two nurses by placing them both in the sky as constellations, and Cynosura became Ursa Minor, "Cynosura" being a common name for that constellation in Ancient Greece. The most common origin myth for the two bear constellations, however, was that of Callisto, a follower of Artemis, and her son Arcas. The origin of the word "Cynosura" ("dog's tail") is unknown, as it does not connect to the theme of the constellation, and no other constellation fitting the description exists. It has been argued that the derivation from the word for dog is false. See also Orion Rhea Melissa References Bibliography Gaius Julius Hyginus, Astronomica from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Maurus Servius Honoratus, In Vergilii carmina comentarii. Servii Grammatici qui feruntur in Vergilii carmina commentarii; recensuerunt Georgius Thilo et Hermannus Hagen. Georgius Thilo. Leipzig. B. G. Teubner. 1881. Online version at the Perseus Digital Library. Smith, William, A Dictionary of Greek and Roman Biography and Mythology, London. John Murray: printed by Spottiswoode and Co., New-Street Square and Parliament Street, 1873. External links CYNOSURA from The Theoi Project Oreads Deeds of Zeus Metamorphoses into animals in Greek mythology Mythological Cretans Ursa Minor
Cynosura (nymph)
[ "Astronomy" ]
488
[ "Ursa Minor", "Constellations" ]
59,209,120
https://en.wikipedia.org/wiki/Janssen%20revolver
The Janssen revolver was invented by the French astronomer Pierre Jules César Janssen in 1874. It was the instrument that originated chronophotography, a branch of photography based on capturing movement from a sequence of images. To create the apparatus, Pierre Janssen was inspired by the revolving cylinder of Samuel Colt's revolver. Usage The revolver used two discs and a sensitive plate: the first disc, which acted as the shutter, had twelve holes, and the second, fixed disc had only one window, placed over the plate. The shutter disc made a full turn every eighteen seconds, so that each time one of its windows passed in front of the window of the fixed disc, the corresponding portion of the sensitive plate was uncovered, creating an image. So that the images would not overlap, the sensitive plate rotated at a quarter of the shutter disc's speed. The shutter speed was one and a half seconds. A mirror on the outside of the apparatus reflected the movement of the object towards the lens located in the barrel of this photographic revolver. When the revolver was in operation, it was capable of taking forty-eight images in seventy-two seconds. History In the mid-nineteenth century, one of the scientific challenges of the moment was to determine as accurately as possible the distance between the Earth and the Sun, the so-called astronomical unit, which indicates the size of the Solar System. At that time, the only way to determine it was through the astronomical phenomenon of a Venus transit: the passage of Venus in front of the Sun, which required simultaneous observations from different terrestrial latitudes and measurement of the total duration of the event. With these data, and applying Kepler's laws, which describe the behavior of planetary orbits, the distances to the rest of the planets of the Solar System could be obtained. The method had two drawbacks: the rarity of the phenomenon and the technical problems of timing the start and end of the transit. The Venus transit of 1874 was a unique opportunity, which was why more than sixty co-ordinated expeditions from up to ten different countries were dispatched to locations in China, Vietnam, New Caledonia, some Pacific islands and Japan. The distortion caused by the terrestrial atmosphere, the diffraction of the telescopes, the subjectivity of the observer and the "black drop effect" (an optical effect that distorts the silhouette of Venus just at the instant it enters and leaves the solar disk) meant the attempt faced huge technical challenges, which had previously been insurmountable. Janssen's invention of the photographic revolver was designed in an attempt to overcome these difficulties. Application Janssen tested the device with the support of the French government in Nagasaki (Japan). As the exact moment at which the transit of Venus would take place was impossible to predict, he added a clock mechanism so the device would capture a sequence of images. The revolver recorded 48 photographs in 72 seconds on a daguerreotype plate, a material that had fallen out of use but was ideal for the bright sunlight of the occasion, since it could capture light over a long exposure and yield clearer results. The British expeditions photographed the transit from different geographic points using apparatuses inspired by Janssen's revolver. Unfortunately, the quality of the resulting images of the two expeditions was not sufficient to calculate the astronomical unit accurately, and observations by eye proved more reliable.
Even so, Janssen introduced his revolver to the Société Française de Photographie in 1875 and to the Académie des Sciences in 1876, to which he suggested the possibility of using his apparatus for the study of animal movement, especially that of birds, because of the rapidity of the movement of their wings. Legacy In 1882, the physiologist Étienne-Jules Marey concluded that a galloping horse has all four legs in the air at a certain moment. Four years previously, Eadweard Muybridge had been the first to record the movement of living beings, in The Horse in Motion, with 12 cameras in series that allowed him to replay and even project those photographs in sequence. The action was not reconstructed from the point of view of a single observer, but from cameras that accompanied the subject, as in a tracking shot, so that in each photograph the action was seen from a different viewpoint. Marey, building on Janssen's invention, managed to solve these problems with his 1882 photographic gun, which captured 12 small photographs at regular intervals on a circular plate. This improvement allowed the images to be captured on a glass plate, dispensing with the impractical daguerreotype and reducing the exposure time. It was, therefore, the first movie camera, although it still differed in conception from later movie cameras: on the one hand, the images obtained were intended for the decomposition of movement for its study, not for projection; on the other hand, being obtained on a glass disk, the duration of the action that could be recorded was necessarily very short. Both inventions were a first step in the development of the first film cameras, but they cannot be considered as such, because their main objective was not the projection of films but the study of movement through its decomposition. References 1874 introductions 1870s in film History of film History of astronomy
Janssen revolver
[ "Astronomy" ]
1,068
[ "History of astronomy" ]
59,210,075
https://en.wikipedia.org/wiki/Plumbylene
Plumbylenes (or plumbylidenes) are divalent organolead(II) analogues of carbenes, with the general chemical formula R2Pb, where R denotes a substituent. Plumbylenes possess 6 electrons in their valence shell, and are considered open-shell species. The first plumbylene reported was the dialkylplumbylene [(Me3Si)2CH]2Pb, which was synthesized by Michael F. Lappert et al. in 1973. Plumbylenes may be further classified into carbon-substituted plumbylenes, plumbylenes stabilized by a group 15 or 16 element, and monohalogenated plumbylenes (RPbX). Synthesis Plumbylenes can generally be synthesized via the transmetallation of PbX2 (where X denotes halogen) with an organolithium (RLi) or Grignard reagent (RMgX). The first reported plumbylene, [((CH3)3Si)2CH]2Pb, was synthesized by Michael F. Lappert et al. by transmetallation of PbCl2 with [((CH3)3Si)2CH]Li. The addition of equimolar RLi to PbX2 produces the monohalogenated plumbylene (RPbX); addition of 2 equivalents leads to the disubstituted plumbylene (R2Pb). Adding to RPbX an organolithium or Grignard reagent with a different organic substituent (i.e. R'Li/R'MgX) leads to the synthesis of heteroleptic plumbylenes (RR'Pb). Dialkyl-, diaryl-, diamido-, dithioplumbylenes, and monohalogenated plumbylenes have been successfully synthesized this way. Transmetallation with [((CH3)3Si)2N]2Pb as the Pb(II) precursor has also been used to synthesize diarylplumbylenes, disilylplumbylenes, and saturated N-heterocyclic plumbylenes. Alternatively, plumbylenes may be synthesized from the reductive dehalogenation of tetravalent organolead compounds (R2PbX2). Structure and bonding The key aspects of bonding and reactivity in plumbylenes are dictated by the inert pair effect, whereby the combination of a widening s–p orbital energy gap down the group 14 elements and a strong relativistic contraction of the 6s orbital leads to a limited degree of sp hybridization, with the 6s orbital lying deep in energy and remaining inert. Consequently, plumbylenes exclusively have a singlet spin state due to the large singlet–triplet energy gap, and tend to exist in an equilibrium between monomeric and dimeric forms in solution. This is in contrast to carbenes, which often have a triplet ground state and readily dimerize to form alkenes. In dimethyllead, (CH3)2Pb, the Pb–C bond length is 2.267 Å and the C–Pb–C bond angle is 93.02°; the singlet–triplet gap is 36.99 kcal mol−1. Diphenyllead, (C6H5)2Pb, was computed with GAMESS at the B3PW91 level of theory using the basis sets 6-311+G(2df,p) for C and H and def2-svp for Pb with the ECP60MDF pseudopotential, in an adapted procedure (which uses the cc-pVTZ basis set for Pb instead). The molecular orbitals (MOs) (visualized using Chimera) and natural bond orbitals (NBOs) (visualized using Multiwfn) generated in this way are qualitatively identical to those in the literature. As expected, the HOMO is 6s-dominated, and the LUMO is 6p-dominated. The NBOs are the 6s lone pair and the vacant 6p orbital respectively. The Pb–C bond distance was found to be 2.303 Å and the C–Pb–C angle 105.7°. Notwithstanding the different levels of theory, the larger bond angle for (C6H5)2Pb compared to (CH3)2Pb can be rationalized by the greater repulsion between the sterically bulkier phenyl groups relative to methyl groups. Atoms-in-molecules (AIM) topology analysis revealed critical points in (C6H5)2Pb consistent with the literature. Plumbylenes occur as reactive intermediates in the formation of tetravalent plumbanes (R4Pb).
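As a rough illustration of the kind of electronic-structure calculation described above, the sketch below uses PySCF rather than the GAMESS workflow cited in the literature, running a single-point DFT calculation on the model plumbylene PbH2 and reporting the 6s-dominated HOMO and 6p-dominated LUMO energies. The geometry, the uniform def2-SVP basis, and the effective core potential are assumptions chosen for a self-contained example, not the study's actual inputs.

```python
# Minimal sketch of a plumbylene DFT calculation with PySCF (not the
# original GAMESS procedure); geometry, basis, and ECP are assumed.
import numpy as np
from pyscf import gto, dft

mol = gto.M(
    atom="""
        Pb  0.000  0.000  0.000
        H   1.319  0.000  1.297
        H  -1.319  0.000  1.297
    """,  # rough singlet PbH2 geometry (~1.85 A bonds, ~91 deg angle; assumed)
    basis={"Pb": "def2-svp", "H": "def2-svp"},
    ecp={"Pb": "def2-svp"},  # scalar-relativistic effective core potential for Pb
    spin=0,                   # closed-shell singlet, as expected for plumbylenes
)

mf = dft.RKS(mol)
mf.xc = "b3pw91"  # hybrid functional name as resolved by PySCF/libxc
mf.kernel()

# Identify the HOMO (lone-pair-like) and LUMO (vacant 6p-like) energies.
occupied = np.nonzero(mf.mo_occ)[0]
homo = int(occupied.max())
lumo = homo + 1
print(f"HOMO energy: {mf.mo_energy[homo]:.4f} Ha")
print(f"LUMO energy: {mf.mo_energy[lumo]:.4f} Ha")
```

A production calculation would additionally optimize the geometry and use element-specific basis sets closer to those quoted above; the sketch only shows the overall workflow.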
Although the inert pair effect suggests that the divalent state should be thermodynamically more stable than the tetravalent state, in the absence of stabilizing substituents plumbylenes are sensitive to heat and light, and tend to undergo polymerization and disproportionation, forming elemental lead in the process. Plumbylenes can be stabilized as monomers by the use of sterically bulky ligands (kinetic stabilization) or heteroatom-containing substituents that can donate electron density into the vacant 6p orbital (thermodynamic stabilization). Dimerization Plumbylenes are able to undergo dimerization in two ways: either through the formation of a Pb=Pb double bond to form a formal diplumbene, or through bridging halide interactions. Unhalogenated plumbylenes tend to exist in an equilibrium between the monomeric and dimeric form in solution, and, due to the low dimerization energy, as either monomers or dimers in the solid state, depending on the steric bulk of substituents. However, increasing the steric bulk of lead-bound substituents can prevent the close association of plumbylene molecules and allow the plumbylene to exist exclusively as monomers in solution or even in the solid state. The driving force for dimerization in general arises from the Lewis amphoteric nature of plumbylenes, which possess a Lewis acidic vacant 6p orbital and a weakly Lewis basic 6s lone pair, which can act as electron acceptor and donor orbitals respectively. These diplumbenes possess a trans-bent structure similar to that of their lighter, non-carbon congeners (disilenes, digermenes, distannenes). The observed Pb–Pb bond lengths in diplumbenes (2.90–3.53 Å) have been found to typically be longer than those in tetravalent diplumbanes R3PbPbR3 (2.84–2.97 Å). This, together with the low computed dimerization energy (the energy released from the formation of dimers from monomers) of 24 kJ mol−1 for Pb2H4, indicates weak multiple bonding. This counterintuitive result arises because the pair of 6s–6p donor-acceptor interactions representing the Pb=Pb double bond in diplumbenes is less energetically favourable than the overlap of spn orbitals (with a higher degree of hybridization than in diplumbenes) in the Pb–Pb single bond in diplumbanes. In monohalogenated plumbylenes, the halogen atom on one plumbylene is able to donate a lone pair into the vacant 6p orbital of the lead atom on a separate plumbylene in a bridging mode. Monohalogenated plumbylenes have been found to generally exist as monomers in solution and dimers in the solid state, but, again, sufficiently bulky substituents on lead can sterically block this dimerization mode. Due to the decreasing dimerization energy down group 14, while monohalogenated stannylenes and plumbylenes dimerize via the halogen-bridging mode, monohalogenated silylenes and germylenes tend to dimerize via the above-mentioned multiply-bonded mode instead. In a recent study, an N-heterocyclic plumbylene was shown to undergo dimerization leading to C–H activation, existing in solution in an equilibrium between the monomer and a dimer resulting from cleavage of an aryl C–H bond and formation of Pb–C and N–H bonds. DFT studies proposed that the reaction occurred via electrophilic substitution at the arene of one plumbylene by the lead atom of another, and involves concerted Pb–C and N–H bond formation instead of insertion of Pb into the C–H bond.
Stabilizing intramolecular interactions with substituents bearing lone pairs Plumbylenes may be stabilized by electron donation into the vacant orbital of the lead atom. The two common intramolecular modes are resonance from a lone pair on the atom directly attached to the lead, or coordination from a Lewis base elsewhere in the molecule. For example, group 15 or 16 elements directly adjacent to Pb donate a lone pair in a manner similar to their stabilizing effect on Fischer carbenes. Common examples of more remote electron donors include nitrogen atoms that can form a six-membered ring by bonding to the lead. Even a fluorine atom on a remote trifluoromethyl group has been observed coordinating to lead in [2,4,6-(CF3)3C6H2]2Pb. Agostic interactions Agostic interactions have also been shown to stabilize plumbylenes. DFT computations on the compounds [(R(CH3)2Si){(CH3)2P(BH3)}CH]2Pb (R = Me or Ph) found that agostic interactions between bonding B–H orbitals and the vacant 6p orbital lowered the energy of the molecule by ca. 38 kcal mol−1; this was supported by X-ray crystal structures showing the favourable positioning of said B–H bonds in proximity to Pb. Reactivity As previously mentioned, unstabilized plumbylenes are prone to polymerization and disproportionation, and plumbylenes without bulky substituents tend to dimerize in one of two modes. Below, the reactions of stabilized plumbylenes (at least at the temperatures at which they were studied) are listed. Lewis acid-base adduct formation Plumbylenes are Lewis acidic via the vacant 6p orbital and tend to form adducts with Lewis bases, such as trimethylamine N-oxide (Me3NO), 1-azidoadamantane (AdN3), and mesityl azide (MesN3). In contrast, the reaction between stannylenes and Me3NO produces the corresponding distannoxane (from oxidation of Sn(II) to Sn(IV)) instead of the Lewis adduct, which can be attributed to tin lying one period above Pb, experiencing the inert pair effect to a lesser degree and hence having a higher susceptibility to oxidation. In the case of AdN3, the terminal N of the azidoadamantane binds to the plumbylene via a bridging mode between the Lewis acidic Pb and the Lewis basic P atom; in the case of MesN3, the azide evolves N2 to form a nitrene, which then inserts into a C–H bond of an arene substituent and coordinates to Pb as a Lewis base. Insertion Similar to carbenes and other group 14 congeners, plumbylenes have been shown to undergo insertion reactions, specifically into C–X (X = Br, I) and group 16 E–E (E = S, Se) bonds. Insertions into lead-substituent bonds can also occur. In reported examples, insertion is accompanied by intramolecular rearrangement that places more electron-donating heteroatoms next to the electron-deficient lead. Transmetallation Plumbylenes are known to undergo nucleophilic substitution with organometallic reagents to form transmetallated products. In an unusual example, the use of TlPF6, bearing the weakly coordinating anion PF6−, led upon work-up to the formation of crystals of an oligonuclear lead compound with a chain structure, highlighting the interesting reactivity of plumbylenes. In addition, plumbylenes can undergo metathesis with group 13 E(CH3)3 (E = Al, Ga) compounds. Plumbylenes bearing different substituents can also undergo transmetallation and exchange substituents, with the driving force being the relief of steric strain and the low Pb–C bond dissociation energy.
Applications Plumbylenes can be used as concurrent σ-donor-σ-acceptor ligands in metal complexes, functioning as σ-donors via their filled 6s orbital and σ-acceptors via their empty 6p orbital. Room-temperature-stable plumbylenes have also been suggested as precursors in chemical vapour deposition (CVD) and atomic layer deposition (ALD) of lead-containing materials. Dithioplumbylenes and dialkoxyplumbylenes may be useful as precursors for preparing the semiconductor material lead sulphide and the piezoelectric material PZT, respectively. References Organolead compounds
Plumbylene
[ "Chemistry" ]
2,896
[ "Functional groups", "Octet-deficient functional groups" ]
59,210,881
https://en.wikipedia.org/wiki/Anne%20Aaron
Anne Aaron is a Filipina engineer and the director of video algorithms at Netflix. Her responsibilities include "hiring and managing software engineers and research scientists, strategic decision-making on software architecture and research, project management, and cross-team coordination". Education Aaron attended the Philippine Science High School and the Ateneo de Manila University, where she received a Bachelor of Science degree in physics in 1998, and another in computer engineering the year after. She then entered Stanford University, where she received a PhD in electrical engineering. During these years, Aaron received the AT&T Asia Pacific Leadership Award and the C.V. Starr Southeast Asian Fellowship. Career After Stanford, Aaron worked at video streaming companies such as Modulus Video and Dyyno, followed by a stint at Cisco Systems, where she was the engineering lead for video encoding for its FlipShare Video desktop software. She has been working at Netflix since 2011. Awards Aaron was recognized as one of the 43 most powerful female engineers of 2017 by Business Insider. In 2018 she was featured among "America's Top 50 Women In Tech" by Forbes. External links References Living people People in information technology Netflix people Stanford University School of Engineering alumni Ateneo de Manila University alumni Filipino electrical engineers 21st-century women engineers Filipino women engineers 21st-century Filipino engineers Filipino emigrants to the United States Year of birth missing (living people)
Anne Aaron
[ "Technology" ]
275
[ "People in information technology", "Information technology" ]
59,211,466
https://en.wikipedia.org/wiki/Convolutional%20layer
In artificial neural networks, a convolutional layer is a type of network layer that applies a convolution operation to the input. Convolutional layers are some of the primary building blocks of convolutional neural networks (CNNs), a class of neural network most commonly applied to images, video, audio, and other data that have the property of uniform translational symmetry. The convolution operation in a convolutional layer involves sliding a small window (called a kernel or filter) across the input data and computing the dot product between the values in the kernel and the input at each position. This process creates a feature map that represents detected features in the input. Concepts Kernel Kernels, also known as filters, are small matrices of weights that are learned during the training process. Each kernel is responsible for detecting a specific feature in the input data. The size of the kernel is a hyperparameter that affects the network's behavior. Convolution For a 2D input $X$ and a 2D kernel $K$, the 2D convolution operation can be expressed as $Y_{i,j} = \sum_{m=0}^{k_h - 1} \sum_{n=0}^{k_w - 1} X_{i+m,\, j+n}\, K_{m,n}$, where $k_h$ and $k_w$ are the height and width of the kernel, respectively. This generalizes immediately to nD convolutions. Commonly used convolutions are 1D (for audio and text), 2D (for images), and 3D (for spatial objects, and videos). Stride Stride determines how the kernel moves across the input data. A stride of 1 means the kernel shifts by one pixel at a time, while a larger stride (e.g., 2 or 3) results in less overlap between convolutions and produces smaller output feature maps. Padding Padding involves adding extra pixels around the edges of the input data. It serves two main purposes: Preserving spatial dimensions: without padding, each convolution reduces the size of the feature map. Handling border pixels: padding ensures that border pixels are given equal importance in the convolution process. Common padding strategies include: No padding/valid padding, which typically causes the output to shrink. Same padding: any method that ensures the output size is the same as the input size. Full padding: any method that ensures each input entry is convolved over the same number of times. Common padding algorithms include: Zero padding: add zero entries to the borders of the input. Mirror/reflect/symmetric padding: reflect the input array at the border. Circular padding: cycle the input array back to the opposite border, like a torus. The exact output sizes produced by different combinations of kernel size, stride, and padding are somewhat involved; we refer to Dumoulin and Visin (2018) for details. Variants Standard The basic form of convolution as described above, where each kernel is applied to the entire input volume. Depthwise separable Depthwise separable convolution separates the standard convolution into two steps: depthwise convolution and pointwise convolution. It decomposes a single standard convolution into a depthwise convolution that filters each input channel independently and a pointwise (1×1) convolution that combines the outputs of the depthwise convolution. This factorization significantly reduces computational cost. It was first developed by Laurent Sifre during an internship at Google Brain in 2013 as an architectural variation on AlexNet to improve convergence speed and model size. Dilated Dilated convolution, or atrous convolution, introduces gaps between kernel elements, allowing the network to capture a larger receptive field without increasing the kernel size.
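The basic mechanics described above (kernel, stride, zero padding) can be made concrete with a short sketch. The NumPy implementation below is a naive loop for illustration, not the optimized routines real frameworks use; the function and variable names are our own.

```python
import numpy as np

def conv2d(x, kernel, stride=1, padding=0):
    """Naive 2D convolution (technically cross-correlation, as in most CNNs).

    x: 2D input array; kernel: 2D weight array;
    stride: step size of the sliding window; padding: zero-padding width.
    """
    if padding > 0:
        x = np.pad(x, padding, mode="constant")  # zero padding
    kh, kw = kernel.shape
    out_h = (x.shape[0] - kh) // stride + 1
    out_w = (x.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride : i * stride + kh,
                       j * stride : j * stride + kw]
            out[i, j] = np.sum(window * kernel)  # dot product of window and kernel
    return out

x = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input
k = np.array([[1.0, 0.0], [0.0, -1.0]])        # toy 2x2 kernel
print(conv2d(x, k).shape)                # (4, 4): "valid" padding shrinks output
print(conv2d(x, k, padding=1).shape)     # (6, 6): padding enlarges the output
print(conv2d(x, k, stride=2).shape)      # (2, 2): stride 2 downsamples
```

The three printed shapes illustrate the valid, padded, and strided cases discussed above.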
Transposed Transposed convolution, also known as deconvolution, fractionally strided convolution, or upsampling convolution, is a convolution where the output tensor is larger than its input tensor. It is often used in encoder-decoder architectures for upsampling, and appears in image generation, semantic segmentation, and super-resolution tasks. History The concept of convolution in neural networks was inspired by the visual cortex in biological brains. Early work by Hubel and Wiesel in the 1960s on the cat's visual system laid the groundwork for artificial convolution networks. An early convolutional neural network was developed by Kunihiko Fukushima in 1969. It had mostly hand-designed kernels inspired by convolutions in mammalian vision. In 1979 he improved it to the Neocognitron, which learns all convolutional kernels by unsupervised learning (in his terminology, "self-organized by 'learning without a teacher'"). In 1998, Yann LeCun et al. introduced LeNet-5, an early influential CNN architecture for handwritten digit recognition, trained on the MNIST dataset. Olshausen and Field (1996) discovered that simple cells in the mammalian primary visual cortex implement localized, oriented, bandpass receptive fields, which could be recreated by fitting sparse linear codes for natural scenes. This was later found to also occur in the lowest-level kernels of trained CNNs. The field saw a resurgence in the 2010s with the development of deeper architectures and the availability of large datasets and powerful GPUs. AlexNet, developed by Alex Krizhevsky et al. in 2012, was a catalytic event in modern deep learning. See also Convolutional neural network Pooling layer Feature learning Deep learning Computer vision References Artificial neural networks Computer vision Deep learning
Convolutional layer
[ "Engineering" ]
1,178
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
59,211,511
https://en.wikipedia.org/wiki/Terminal%20investment%20hypothesis
The terminal investment hypothesis is the idea in life history theory that as an organism's residual reproductive value (the total reproductive value minus the reproductive value of the current breeding attempt) decreases, its reproductive effort will increase. Thus, as an organism's prospects for survival decrease (through age or an immune challenge, for example), it will invest more in reproduction. This hypothesis is generally supported in animals, although results contrary to it do exist. Definition The terminal investment hypothesis posits that as residual reproductive value (measured as the total reproductive value minus the reproductive value of the current breeding attempt) decreases, reproductive effort increases. This is based on the cost of reproduction hypothesis, which says that an increase in resources dedicated to current reproduction decreases the potential for future reproduction. But, as the residual reproductive value decreases, the importance of this trade-off decreases, leading to increased investment in the current reproductive attempt. The terminal investment hypothesis can be illustrated by a threshold condition relating the following quantities: $V$, the total reproductive value; $b$, the reproductive value of the current breeding attempt; $p$, the proportionate increase in $b$ resulting from a positive decision (where a yes–no decision must be made regarding whether or not to increase reproductive effort); $c$, the cost of a positive decision at which there is no selective pressure for either a positive or a negative decision (also known as the "barely-justified cost"); and $q$, the proportionate loss in $b$ resulting from a negative decision. The barely-justified cost is inversely proportional to the residual reproductive value $V - b$. When the level of reproductive investment has not reached the point where this threshold condition holds, more positive decisions about reproductive effort will be made. Thus, as the residual reproductive value decreases, more positive decisions need to be made before the condition is met. In animals In animals, most tests of the terminal investment hypothesis are correlations of age and reproductive effort, immune challenges at all age stages, and immune challenges at older ages versus younger ages. The last type of test is considered to be a more reliable measure of senescence's effect on reproductive effort, as younger individuals should reduce reproductive effort to lower their chance of death because of their high future reproductive prospects, while older animals should increase effort because of their low future prospects. Overall, the terminal investment hypothesis is generally supported in a variety of animals. In birds A study on blue tits published in 2000 found that individuals injected with a human diphtheria–tetanus vaccine fed their nestlings less than those injected with a control solution. In a study published in 2004, house sparrows that were injected with a Newcastle disease vaccine were more likely to lay a replacement clutch after their first clutch had been artificially removed than those that were injected with a control solution. In a study published in 2006, old blue-footed boobies injected with lipopolysaccharides (to challenge the immune system) before laying fledged more young than normal, whereas young individuals fledged fewer than normal.
An increase in maternal effort in immune-challenged birds may be mediated by the hormone corticosterone; a study published in 2015 found that house wrens injected with lipopolysaccharides increased foraging, and that measurements of corticosterone from eggs laid after injection showed a positive correlation of this hormone with maternal foraging rates. In insects A study published in 2009 supported the cost of reproduction and terminal investment hypotheses in the burying beetle. It found that beetles manipulated to overproduce young (by replacing their mouse carcass with a fresh one) had shorter lifespans than those that bred on a single carcass. In turn, non-breeding beetles had a significantly longer lifespan than those that bred. This supports the cost of reproduction hypothesis. Another experiment from the same study found that beetles that first bred at 65 days had a larger brood size before dispersal (before the larvae start to pupate in the soil) than those that initially bred at 28 days. This supports the terminal investment hypothesis, and controls for the effect of an increased average brood size in older animals due to the differential survival of higher-quality individuals. In flatworms A study published in 2004 on the flatworm Diplostomum spathaceum found that as its intermediate host, a snail, aged, production of cercariae (which are passed on to the final host, a fish) decreased. This is in line with the bet hedging hypothesis, which, in this case, says that the flatworm should attempt to keep its host alive longer so that more young can be produced; it does not support the terminal investment hypothesis. In mammals A study published in 2002 found results contrary to the terminal investment hypothesis in reindeer. Calf weight peaked at the mother's seventh year of age, and declined thereafter. However, this would only be opposed to the hypothesis if reproductive costs did not increase with age. An alternative hypothesis, the senescence hypothesis, positing that reproductive output declines with age-related loss of function, was supported by the study. These two hypotheses are not necessarily mutually exclusive; a study on rhesus macaques published in 2010 strongly supported the senescence hypothesis and weakly supported the terminal investment hypothesis. It found that older mothers were lighter, less active, and had lighter infants with reduced survival rates compared to younger mothers (supporting the senescence hypothesis), but that older individuals spent more time in contact with their young (supporting the terminal investment hypothesis). Additionally, a study published in 1982 on red deer on the island of Rhum found that while older mothers produced fewer offspring (and lighter offspring, when they did breed) than expected for a given body weight, they had longer suckling bouts (which had previously been correlated with milk yield, calf body condition in early winter, and calf survival to spring) compared to younger mothers. In reptiles A study on spotted turtles published in 2008 found that individuals in very poor condition sometimes did not breed. This is consistent with the bet hedging hypothesis, and indicates decision making on a large temporal scale (as spotted turtles may live for 65 to 110 years). However, individuals in poor condition generally produced a relatively large number of small eggs, consistent with the terminal investment hypothesis.
In plants Although the terminal investment hypothesis has been relatively widely studied in animals, there have been few studies of the hypothesis' application to plants. One study on members of the long-lived oak genus Quercus found that trees declined in condition towards the end of their lifespan, and did not invest an increasing proportion of their decreasing resources in reproduction. References Game theory Behavioral ecology
Terminal investment hypothesis
[ "Mathematics", "Biology" ]
1,335
[ "Behavior", "Evolutionary game theory", "Behavioral ecology", "Behavioural sciences", "Game theory", "Ethology" ]
59,215,441
https://en.wikipedia.org/wiki/Journal%20of%20Petroleum%20Geology
The Journal of Petroleum Geology is a quarterly peer-reviewed scientific journal covering the geology of petroleum and natural gas. It was established in 1978 and is published by Wiley-Blackwell on behalf of Scientific Press Ltd. The editor-in-chief is Christopher Tiratsoo (Scientific Press Ltd.). According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.872, ranking it 99th out of 190 journals in the category "Geosciences, Multidisciplinary". References External links Petroleum geology Geology journals Academic journals established in 1978 Quarterly journals Wiley-Blackwell academic journals English-language journals
Journal of Petroleum Geology
[ "Chemistry" ]
129
[ "Petroleum", "Petroleum geology" ]
59,216,757
https://en.wikipedia.org/wiki/Earth%27s%20circumference
Earth's circumference is the distance around Earth. Measured around the equator, it is about 40,075 km (24,901 mi). Measured passing through the poles, the circumference is about 40,008 km (24,860 mi). Treating the Earth as a sphere, its circumference would be its single most important measurement. The first known scientific measurement and calculation was done by Eratosthenes, by comparing altitudes of the mid-day sun at two places a known north–south distance apart. He achieved a great degree of precision in his computation. The Earth's shape deviates from spherical by flattening, but by only about 0.3%. Measurement of Earth's circumference has been important to navigation since ancient times. In modern times, Earth's circumference has been used to define fundamental units of measurement of length: the nautical mile in the seventeenth century and the metre in the eighteenth. Earth's polar circumference is very near to 21,600 nautical miles because the nautical mile was intended to express one minute of latitude (see meridian arc), which is 21,600 partitions of the polar circumference (that is, 60 minutes × 360 degrees). The polar circumference is also close to 40,000 kilometres because the metre was originally defined to be one ten-millionth (i.e., a kilometre is one ten-thousandth) of the arc from pole to equator (the quarter meridian). The accuracy of measuring the circumference has improved since then, but the physical length of each unit of measure has remained close to what it was determined to be at the time, so the Earth's circumference is no longer a round number in metres or nautical miles. History Eratosthenes The measure of Earth's circumference is the most famous among the results obtained by Eratosthenes, who estimated that the meridian has a length of 252,000 stadia, with an error on the real value between −2.4% and +0.8% (assuming a value for the stadion between 155 and 160 metres; the exact value of the stadion remains a subject of debate to this day; see stadion). Eratosthenes described his technique in a book entitled On the measure of the Earth, which has not been preserved; what has been preserved is the simplified version described by Cleomedes to popularise the discovery. Cleomedes invites his reader to consider two Egyptian cities, Alexandria and Syene (modern Aswan): Cleomedes assumes that the distance between Syene and Alexandria was 5,000 stadia (a figure that was checked yearly by professional bematists, mensores regii). He assumes the simplified (but inaccurate) hypothesis that Syene was precisely on the Tropic of Cancer, saying that at local noon on the summer solstice the Sun was directly overhead. Syene was actually north of the tropic by something less than a degree. He assumes the simplified (but inaccurate) hypothesis that Syene and Alexandria are on the same meridian. Syene was actually about 3 degrees of longitude east of Alexandria. According to Cleomedes' On the Circular Motions of the Celestial Bodies, around 240 BC, Eratosthenes calculated the circumference of the Earth in Ptolemaic Egypt. Using a vertical rod known as a gnomon and under the previous assumptions, he knew that at local noon on the summer solstice in Syene (modern Aswan, Egypt), the Sun was directly overhead, as the gnomon cast no shadow. Additionally, the shadow of someone looking down a deep well at that time in Syene blocked the reflection of the Sun on the water. Eratosthenes then measured the Sun's angle of elevation at noon in Alexandria by measuring the length of another gnomon's shadow on the ground.
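Eratosthenes's reasoning reduces to simple proportionality, which can be sketched numerically. The inputs below anticipate the figures quoted in the next paragraph (a roughly 7.2° shadow angle, i.e. 1/50 of a circle, and a 5,000-stadia baseline); the stadion length used is just one of the disputed estimates discussed above.

```python
# Numeric sketch of Eratosthenes's proportionality argument; the stadion
# length (157.5 m) is one of several disputed modern estimates.
shadow_angle_deg = 7.2          # sun's angle from vertical at Alexandria
baseline_stadia = 5_000         # assumed Alexandria-Syene distance
stadion_m = 157.5               # one proposed stadion length, in metres

fraction_of_circle = shadow_angle_deg / 360          # = 1/50 of a full circle
circumference_stadia = baseline_stadia / fraction_of_circle
circumference_km = circumference_stadia * stadion_m / 1000

print(f"{circumference_stadia:,.0f} stadia")   # 250,000 stadia
print(f"about {circumference_km:,.0f} km")     # about 39,375 km
```

Varying the assumed stadion length within the 155–160 m range quoted above shifts the kilometre result by a few percent, which is exactly the uncertainty discussed in this section.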
Using the length of the rod and the length of the shadow as the legs of a triangle, he calculated the angle of the sun's rays. This angle was about 7°, or 1/50th of the circumference of a circle; assuming the Earth to be perfectly spherical, he concluded that its circumference was 50 times the known distance from Alexandria to Syene (5,000 stadia, a figure that was checked yearly), i.e. 250,000 stadia. Depending on whether he used the "Olympic stade" (176.4 m) or the Italian stade (184.8 m), this would imply a circumference of 44,100 km (an error of 10%) or 46,200 km (an error of 15%). A value for the stadion of 157.7 metres has even been posited by L.V. Firsov, which would give an even better precision, though his derivation has been criticized for calculation errors and false assumptions. In 2012, Anthony Abreu Mora repeated Eratosthenes's calculation with more accurate data; the result was 40,074 km, which is 66 km different (0.16%) from the currently accepted polar circumference. Eratosthenes' method was actually more complicated, as stated by the same Cleomedes, whose purpose was to present a simplified version of the one described in Eratosthenes' book. Pliny, for example, has quoted a value of 252,000 stadia. The method was based on several surveying trips conducted by professional bematists, whose job was to precisely measure the extent of the territory of Egypt for agricultural and taxation-related purposes. Furthermore, the fact that Eratosthenes' measure corresponds precisely to 252,000 stadia (according to Pliny) might be intentional, since it is a number that can be divided by all natural numbers from 1 to 10: some historians believe that Eratosthenes changed from the 250,000 value written by Cleomedes to this new value to simplify calculations; other historians of science, on the other hand, believe that Eratosthenes introduced a new length unit based on the length of the meridian, as stated by Pliny, who writes about the stadion "according to Eratosthenes' ratio". Posidonius Posidonius calculated the Earth's circumference by reference to the position of the star Canopus. As explained by Cleomedes, Posidonius observed Canopus on but never above the horizon at Rhodes, while at Alexandria he saw it ascend as far as 7.5 degrees above the horizon (the meridian arc between the latitudes of the two locales is actually 5 degrees 14 minutes). Since he thought Rhodes was 5,000 stadia due north of Alexandria, and the difference in the star's elevation indicated the distance between the two locales was 1/48 of the circle, he multiplied 5,000 by 48 to arrive at a figure of 240,000 stadia for the circumference of the earth. It is generally thought that the stadion used by Posidonius was almost 1/10 of a modern statute mile. Thus Posidonius's measure of 240,000 stadia translates to roughly 24,000 statute miles (about 38,600 km), not much short of the actual circumference of 24,901 miles (40,074 km). Strabo noted that the distance between Rhodes and Alexandria is 3,750 stadia, and reported Posidonius's estimate of the Earth's circumference to be 180,000 stadia. Pliny the Elder mentions Posidonius among his sources and, without naming him, reported his method for estimating the Earth's circumference. He noted, however, that Hipparchus had added some 26,000 stadia to Eratosthenes's estimate. The smaller value offered by Strabo and the different lengths of Greek and Roman stadia have created a persistent confusion around Posidonius's result. Ptolemy used Posidonius's lower value of 180,000 stades (about 33% too low) for the earth's circumference in his Geography.
This was the number used by Christopher Columbus in order to underestimate the distance to India as 70,000 stades. Aryabhata Around AD 525, the Indian mathematician and astronomer Aryabhata wrote the Aryabhatiya, in which he calculated the diameter of the Earth to be 1,050 yojanas. The length of the yojana intended by Aryabhata is in dispute: one careful reading gives an equivalent that is too large by 11%; another, too large by 20%; yet another, too large by 5%. Islamic Golden Age Around AD 830, Caliph Al-Ma'mun commissioned a group of Muslim astronomers led by Al-Khwarizmi to measure the distance from Tadmur (Palmyra) to Raqqa, in modern Syria. They calculated the Earth's circumference to be within 15% of the modern value, and possibly much closer. How accurate it actually was is not known because of uncertainty in the conversion between the medieval Arabic units and modern units, but in any case, the technical limitations of the methods and tools would not permit an accuracy better than about 5%. A more convenient way to estimate the circumference was provided in Al-Biruni's Codex Masudicus (1037). In contrast to his predecessors, who measured the Earth's circumference by sighting the Sun simultaneously from two locations, al-Biruni developed a new method of using trigonometric calculations, based on the angle between a plain and a mountain top, which made it possible for the circumference to be measured by a single person from a single location. From the top of the mountain, he sighted the dip angle which, along with the mountain's height (which he had determined beforehand), he applied to the law of sines formula. This was the earliest known use of the dip angle and the earliest practical use of the law of sines. However, the method could not provide more accurate results than previous methods, due to technical limitations, and so al-Biruni accepted the value calculated the previous century by the al-Ma'mun expedition. Columbus's error 1,700 years after Eratosthenes's death, Christopher Columbus studied what Eratosthenes had written about the size of the Earth. Nevertheless, based on a map by Toscanelli, he chose to believe that the Earth's circumference was 25% smaller. If, instead, Columbus had accepted Eratosthenes's larger value, he would have known that the place where he made landfall was not Asia, but rather a New World. Historical use in the definition of units of measurement In 1617 the Dutch scientist Willebrord Snellius assessed the circumference of the Earth at 24,630 Roman miles (24,024 statute miles). Around that time British mathematician Edmund Gunter improved navigational tools, including a new quadrant to determine latitude at sea. He reasoned that the lines of latitude could be used as the basis for a unit of measurement for distance, and proposed the nautical mile as one minute, or one sixtieth (1/60), of one degree of latitude. As one degree is 1/360 of a circle, one minute of arc is 1/21,600 of a circle, such that the polar circumference of the Earth would be exactly 21,600 such miles. Gunter used Snellius's circumference to define a nautical mile as 6,080 feet, the length of one minute of arc at 48 degrees latitude. In 1793, France defined the metre so as to make the polar circumference of the Earth 40,000 kilometres.
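Both circumference-based unit definitions just described reduce to simple divisions, sketched below. The modern polar-circumference figure used here is an approximate reference value consistent with the one quoted earlier in this article, assumed for illustration.

```python
# How the two historical units fall out of the polar circumference.
# The polar circumference below is an approximate modern reference value.
polar_circumference_m = 40_007_863

# Nautical mile: one minute of latitude, i.e. 1/21,600 of the circumference
# (360 degrees x 60 minutes per degree).
nautical_mile_m = polar_circumference_m / (360 * 60)
print(f"1 nautical mile = {nautical_mile_m:,.1f} m")   # ~1,852.2 m

# Metre: originally 1/10,000,000 of the quarter meridian (pole to equator).
metre_from_definition = (polar_circumference_m / 4) / 10_000_000
print(f"original-definition metre = {metre_from_definition:.6f} m")  # ~1.0002 m
```

The ~0.02% excess in the second result matches the miscalculated flattening discussed in the next paragraph, and the first result sits close to the 1,852 m of the modern international nautical mile.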
In order to measure this distance accurately, the French Academy of Sciences commissioned Jean Baptiste Joseph Delambre and Pierre Méchain to lead an expedition to attempt to accurately measure the distance between a belfry in Dunkerque and Montjuïc castle in Barcelona to estimate the length of the meridian arc through Dunkerque. The length of the first prototype metre bar was based on these measurements, but it was later determined that its length was short by about 0.2 millimetres because of miscalculation of the flattening of the Earth, making the prototype about 0.02% shorter than the original proposed definition of the metre. Regardless, this length became the French standard and was progressively adopted by other countries in Europe. This is why the polar circumference of the Earth is actually 40,008 kilometres, instead of 40,000. See also Arabic mile Geographical mile References Bibliography External links Carl Sagan demonstrates how Eratosthenes determined that the Earth was round and the approximate circumference Circumference Units of length Geodesy
Earth's circumference
[ "Mathematics" ]
2,597
[ "Units of length", "Applied mathematics", "Quantity", "Geodesy", "Units of measurement" ]
59,217,032
https://en.wikipedia.org/wiki/Year%20Million
Year Million is a six-part documentary and science fiction television series produced by National Geographic, which premiered on May 15, 2017, on their channel. The series received two Emmy Award nominations, including a Primetime Emmy nomination for its narrator Laurence Fishburne. The series is based on the 2008 book Year Million: Science at the Far Edge of Knowledge by Damien Broderick. The narrative alternates between telling the story of a family of three in the future and 2016 interviews that explain the events unfolding in the story. The series was filmed in Budapest. Synopsis Investigating the ramifications of a variety of potentially world-changing inventions, the series visits a cast of characters representing a typical American family in several different possible timelines. Ray Kurzweil, Michio Kaku, Peter Diamandis and Brian Greene guide the documentary aspect, discussing possible changes the future might hold based on their research: artificial intelligence, man merging with machine, and the human species becoming an interplanetary entity. The series explores life in both the near and the far future, where artificial intelligence is ubiquitous and advances in science have radically extended our lifespans. The series aims to show that communication, work and education will be revolutionized through virtual telepathy. Accolades The series' narrator, Laurence Fishburne, was nominated for a Primetime Emmy Award, with a further Craft Emmy nomination for Outstanding Lighting Direction and Scenic Design. Cast Each episode of the series is broken up into narrated scenes and interviews with scientists and futurologists; the docudrama segments fit around the interviews and narration to illustrate how technological changes might impact a regular family. Laurence Fishburne as Narrator Interviews Ray Kurzweil Michio Kaku Peter Diamandis Brian Greene Drama Vinette Robinson - Eva Reece Ritchie - Oscar Dinita Gohil - Sajani Olive Gray - Jess Joe Corrigall - Damon Siobhan Dillon - Mother Miklós Bányai - Newscaster Episodes References External links 2017 American television series debuts 2017 American television series endings 2010s American documentary television series 2010s American science fiction television series American English-language television shows Documentary films about science Documentary films about outer space Documentary television series about technology National Geographic (American TV channel) original programming Science education television series Documentary television shows about evolution
Year Million
[ "Astronomy" ]
457
[ "Space art", "Documentary films about outer space" ]
59,217,686
https://en.wikipedia.org/wiki/Kut%20%28mythology%29
According to Turkic belief, kut (also spelled qut; 'fortune') is a kind of force vitalizing the body. Through kut, humans are connected with the heavens. Further, the sacred ruler was believed to be endowed with much more kut than other people, the heavens thus having appointed him as the legitimate ruler. Turkic khagans claimed that they were "heaven-like, heaven-conceived" and possessed kut, a sign of the heavenly mandate to rule. Rulers of Qocho were entitled "idiqut", meaning "sacred good fortune". The concept also existed among the Mongols as suu. It was believed that if the ruler had lost his kut, he could be dethroned and killed. However, this had to be carried out without shedding his blood. This was usually done by strangling with a silk cord. This custom of strangling continued among the Ottomans. Usage by Ottomans The Ottomans continued this tradition by re-expressing the "ruler's heavenly mandate" (kut) in Irano-Islamic terms, with titles such as "shadow of God on earth" (zill Allah fi'l-alem) and "caliph of the face of the earth" (halife-i ru-yi zemin). Name Kutlug is a frequently used and well-known Uyghur personal name. It was also the name of the first rulers of the Second Turkic Khaganate (Ilterish Qaghan) and of the Uyghur Khaganate (Kutlug I Bilge Kagan). See also Kutadgu Bilig Mandate of Heaven References Tengriism Vitalism
Kut (mythology)
[ "Biology" ]
348
[ "Non-Darwinian evolution", "Vitalism", "Biology theories" ]
68,533,357
https://en.wikipedia.org/wiki/PongSat
PongSats are high-altitude "near-space" missions that carry a probe or other project that can fit inside a ping-pong (table tennis) ball. The launch program is run by a volunteer organization, JP Aerospace (which also provided balloon launch services for the Space Chair). JP Aerospace succeeded in its first launch of PongSat missions, with a balloon-launched rocket (also known as a rockoon), at the West Texas Spaceport near Fort Stockton, in October 2002. The launcher reached 100,000 feet with 64 hosted PongSats. Many of the flights have been funded through a Kickstarter crowdfunding campaign. Although many PongSats contain things like food items, simply because schoolchildren are curious about the result, other missions include "multiple sensors and complex mini-computers". It has been described by its founder as part of "America's Other Space Program", but also as one that relies "primarily on volunteers and helium." SpaceHub Southeast has organized several PongSat flights from Atlanta. According to founder John Powell, the PongSat launch program is very global, with payloads delivered to JP Aerospace from "Poland, India, Japan, Slovenia, Germany, Belgium, Turkey, China, Australia, Indonesia." References External links Pongsat Guide PongSats and MiniCubes (pdf) Presentation at UNSW ACSER Cubesat Innovation Workshop, 2017-04-19/20 Weekly Space Hangout: December 16, 2020 – John Powell Tells Us About PongSats and Airship to Orbit "PongSats: The World's Space Program". Michael Molitch-Hou. Feb 20, 2013. 3dprintingindustry.com "PongSats take student science projects to new heights", Ben Coxworth, New Atlas. July 27, 2012 Balloon-borne experiments Educational technology non-profits
PongSat
[ "Astronomy" ]
393
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
68,533,868
https://en.wikipedia.org/wiki/Tweet%20%28social%20media%29
A tweet (officially known as a post since 2023) is a short status update on the social networking site Twitter (officially known as X since 2023) which can include images, videos, GIFs, straw polls, hashtags, mentions, and hyperlinks. Around 80% of all tweets are made by 10% of users, averaging 138 tweets per month, with the median user making only two tweets per month. Following the acquisition of Twitter by Elon Musk in October 2022, and the rebranding of the site as "X" in July 2023, all references to the word "tweet" were removed from the service and changed to "post", and "retweet" was changed to "repost". The terms "tweet" and "retweet" are still more popular when referring to posts on X. Content The service has experimented with changing how tweets work over the years to attract more users and to keep them on the site. The character limit was originally 140 characters when the service started; media attachments stopped counting toward it in the mid-2010s, and it was doubled altogether in 2017. Now, a tweet can contain up to 280 characters and include media. Users subscribed to X Premium (formerly Twitter Blue) can post up to 25,000 characters and can include bold and italic styling. Character limit Tweets were originally limited to 140 characters when the service launched in 2006. Twitter was originally designed to be used over SMS text messages, which are limited to 160 characters. Twitter reserved 20 characters for the username, leaving 140 characters for the post. The original limit was seen as an iconic fixture of the platform, encouraging "speed and brevity". Increasing the limit had been a topic of discussion inside the company for years, and resurfaced in 2015 as a way to grow the userbase. At the time, internal discussion also involved excluding links and mentions from the character limit. By January 2016, an internal product named "Beyond 140" was in development, targeting Q1 of the same year for expanding tweet limits. By the end of 2015, the company was moving close to introducing a 5,000 or 10,000 character limit. An unfinalized version had tweets that went over the old 140 character threshold showing only the first 140 characters, with a call-to-action that there was more in the tweet. Clicking on the tweet would reveal the rest, which was done to retain the same feel of the timeline. The change was controversial internally and met with backlash by users. Dorsey confirmed that the 140 character limit would remain, but had told employees upon his return as CEO that the once-sacred aspects of Twitter were no longer untouchable. In May 2016, a week after the plans leaked, Twitter announced that neither media attachments (images, GIFs, videos, polls, quote tweets) nor mentions in replies would count toward the character limit, with the change to be rolled out later in the year so developers could prepare. The changes rolled out in September, except for the @replies, which were tested in October and then rolled out in March 2017, a year after the original announcement. These changes were a compromise after the internal resistance to a 10,000 character limit from the year before. On September 26, 2017, Twitter announced the company was testing doubling the character limit from 140 to 280. It was an effort to let users be more expressive with their tweets, as users were otherwise cramming ideas into a single tweet by rewriting and removing vowels, or not tweeting at all.
Testing began with a small group of users in all languages except Japanese, Chinese, and Korean, because those languages can convey roughly double the information in a single character. According to the company's statistics, 0.4% of tweets in Japanese hit the 140-character ceiling, while 9% of tweets in English did. Users not in the test group were able to see and interact with the longer tweets normally. The change was as controversial internally as the earlier 10,000-character proposal, and the immediate reaction by Twitter users was largely negative. Links URLs can be linked on Twitter. A tweet's links are converted to the t.co link shortener, and use up 23 characters of the limit. The shortener was introduced in June 2011 to allow users to save space on their links without needing a third-party service like Bitly or TinyURL. Media Some users take screenshots of text and upload them as images to increase the number of words they can include in a tweet. Cards Beginning in 2012, tweets linking to partnered websites would show, below the content of the tweet, expanded media: an excerpt of a linked news article or an embedded video. Twitter already had a way to see Instagram posts and YouTube videos, called "expanded tweets". Twitter then began allowing websites to apply to test offering cards for Twitter users. Later in 2012, notably after Facebook purchased Instagram, that service started cropping images displayed in cards, with the plan to end support for them altogether. CoTweets Between July 2022 and January 2023, Twitter tested a feature where two users could be the authors of a tweet, which would be posted on both of their accounts. Both users' profile pictures, names, and handles were shown. One user drafts a tweet in the composer field, then invites a user who both follows them and has a public account. Edits could not be made once the invite was sent; the alternative was deleting the invitation and making a second one. The second author could accept the invitation, at which point the tweet would be posted to both accounts. Once published, the second user could revoke their co-authorship, and the tweet would change to being written by the first author alone, being removed from the second author's tweets. Until the second author accepted the invitation, the tweet was unlisted, not appearing on the authors' timelines or in searches, but available via a direct link. The feature was tested with some accounts in the US, Canada, and South Korea. The company noted during the test that the feature might be turned off and all CoTweets deleted. The feature was spotted in code in December 2021. On January 31, Twitter suddenly and quietly stopped new CoTweets from being made, though it noted that the feature could return in the future. CoTweets could be seen for another month before being converted to a normal tweet for the first author and a retweet for the second author. Though Twitter's support page offered a generic reason for discontinuing the feature, Elon Musk said that it was to focus on allowing users to add text attachments. Vibes Twitter briefly tested a feature in 2022 that allowed users to set a current status, codenamed "vibe", for a tweet or account, chosen from a small set of emoji-phrase combinations. The user could either tag individual tweets or set a status at the profile level, where it showed on tweets and the profile. Testing of vibes began in June 2022 with a wider selection that could be placed above tweets, but the feature disappeared after some time.
Phrases included "✔️ Current status" and "💤 Case of the Mondays". Twitter later removed the ability to add vibes to tweets. Interactions Users can interact with tweets by retweeting (reblogging), liking, quoting, or replying to them. Retweets In November 2009, Twitter began rolling out the ability to 'retweet' a tweet. Prior to this, people would write "RT @username" before quoting the original tweet. Some people kept their tweets well under the 140-character limit so that other people could always fit the entire tweet in such a proto-retweet, as the sketch below illustrates. In 2023, with the rebranding of Twitter to X, "retweets" were quietly renamed "reposts"; however, "retweet" remained the most commonly used term on the site.
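As a quick illustration of that arithmetic, the sketch below composes a manual proto-retweet and checks it against the 140-character limit. It models the community convention described above, not any official Twitter feature, and the helper name is hypothetical.

```python
def proto_retweet(username: str, original: str, limit: int = 140) -> str:
    """Compose a manual retweet in the historical "RT @username" style."""
    rt = f"RT @{username}: {original}"
    if len(rt) > limit:
        overshoot = len(rt) - limit
        raise ValueError(f"Too long by {overshoot} characters: the original "
                         "left too little headroom for retweeting.")
    return rt

# The prefix "RT @jsmith42: " costs 14 characters, so this hypothetical
# user's followers could only pass along originals of up to 126 characters.
print(proto_retweet("jsmith42", "Keeping tweets short left room for manual retweets."))
```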
Liking Tweets can be liked by users, adding them to a list that other users were once able to view, before likes became private for all users. The feature has been available since Twitter launched in 2006. Until 2015, likes were called 'favorites' (or 'favs'); the service renamed them because people "often misunderstood" the feature, and people reacted more positively to the new name in user tests. Users had the option of hiding their likes from the public, though a like would still appear in the list of users who liked a given tweet. Jack Dorsey said in 2019 that, if he had to create Twitter over again, he would deemphasize the like, or not include it at all, because it did not positively contribute to healthy conversations. When likes were public, users would often forget this and like more revealing tweets; high-profile users' and politicians' accounts have liked pornographic, hateful, and racist tweets. For instance, in 2017, Ted Cruz's account liked a tweet with a two-minute porn video about a day after it was posted. Cruz said that many people had access to his account and that one of his staff members pressed the like button in "an honest mistake". Likes were later made private for all users' profiles, with Elon Musk stating "Public likes are incentivizing the wrong behavior" and encouraging users to like more tweets without fear of being noticed. Likes are now anonymous, visible only to the author of the tweet and to the person who liked it. Verified users could already choose to make their likes private prior to the update. When a viewer is not logged in, users' tweets are sorted by how many likes they received, as opposed to reverse-chronologically. Quote tweets In 2014, Twitter began testing a feature that allows users to embed a tweet inside their own tweet to add additional commentary. Prior to this, users could include a snippet of another tweet in a new tweet, but the quotation had to fit within the 140-character limit of the time. It was originally called "retweet with comment" and was later named "quote tweet". Following the rebranding of Twitter to X, quote tweets were renamed, simply dropping the "tweet" to become "quotes"; the common name still largely remains "quote tweets". Threads Multiple tweets in reply to each other are grouped together in 'threads'. The 140-character limit prevented users from posting thoughts as completely as they desired, so they resorted to making up to dozens of tweets, which displayed in a disjointed manner dubbed a "tweetstorm". The practice was popularized by Marc Andreessen. Bookmarking Users can bookmark individual tweets via the bookmark button, or within the share icon menu, saving them to revisit later. The bookmarks are private, but tweets display the number of times they have been bookmarked, if at all. Development of the feature was revealed in October 2017. The feature, highly requested by Japanese users, started from an annual hack week at the company and was called "#ShareForLater". Previously, users would resort to liking a tweet or sending it to themselves; liking a tweet is often seen as an endorsement, and at the time likes were public and the author of the tweet was notified. The feature was tested in November with some users, and rolled out in February 2018 on mobile alongside a new share menu. The web version of Twitter did not test the bookmark feature until November 2018. When released, the user who made a tweet would be unaware that it had been bookmarked. Fact-checking In March 2020, Twitter added a label to a manipulated video of then-candidate Joe Biden that Donald Trump retweeted. Two months later, as a result of the COVID-19 pandemic, Twitter introduced a policy to label or warn users about tweets containing COVID-19 misinformation. The company said at the time that other areas would be covered by labels as well, and shortly afterwards misleading information about elections was included. On May 26, then-US president Donald Trump made two false statements about mail-in ballots, claiming they were "substantially fraudulent". Within 24 hours of the tweets, Twitter's general counsel and the acting head of policy jointly decided to label them after several hours of internal debate among company leaders, and then-CEO Jack Dorsey signed off on the decision shortly before the label was applied. The labels, which told readers to "Get the facts about mail-in ballots", marked the first time such labels were applied to Trump's tweets. A spokesperson for Twitter said that the tweets contained "potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots". The label linked to articles by CNN, The Washington Post, and The Hill, as well as summaries of claims of fraud. Three days later, a tweet about the George Floyd protests in Minneapolis–Saint Paul was hidden from view. Community Notes In the weeks after the January 6 United States Capitol attack, Twitter rolled out a new program that allowed users to add notes underneath tweets that would benefit from additional context. Prior to the transfer of Twitter to Elon Musk, Community Notes was officially called Birdwatch. History The first tweet was made by Jack Dorsey on March 21, 2006; it has the Snowflake ID 20. The Iconfactory was developing a Twitter application in 2006 called "Twitterrific", and developer Craig Hockenberry began a search for a shorter way to refer to "Post a Twitter Update". In 2007 they began using "twit", before Twitter developer Blaine Cook suggested that "tweet" be used instead. "Tweet" was added to the Merriam-Webster dictionary in 2011 and to the Oxford English Dictionary in 2012, both as a verb and as a noun. This was notable, as the Oxford English Dictionary normally waits ten years after the coining of a word before adding it. In 2023, the terms "tweet" and "retweet" were quietly retired in favor of "post" and "repost" as part of Twitter's rebrand to X, but many users continue to use the former terms on the platform. Demographics The median Twitter user tweets twice a month. Around 80% of tweets are made by the most active 10% of users, who tweet 138 times per month.
65% of the prolific users are women, compared to 48% of the bottom 90%. Most of the prolific users tweet about political issues. There is no difference in political views between the two groups. 25% of the prolific users use automated tools to make tweets, compared to 15% of the others. References External links How to Tweet — Twitter Support Twitter Internet slang 2000s neologisms Internet terminology 2006 introductions
Tweet (social media)
[ "Technology" ]
3,238
[ "Computing terminology", "Internet terminology" ]
68,534,966
https://en.wikipedia.org/wiki/FMRI%20lie%20detection
fMRI lie detection is a field of lie detection using functional magnetic resonance imaging (fMRI). fMRI looks to the central nervous system to compare the timing and topography of brain activity for lie detection. While a polygraph detects anxiety-induced changes in activity in the peripheral nervous system, fMRI purportedly measures blood flow to areas of the brain involved in deception. History Psychiatrist and scientific researcher Daniel Langleben was inspired to test lie detection while he was at Stanford University studying the effects of a drug on children with attention deficit disorder (ADD). He found that these children have a more difficult time inhibiting the truth. He postulated that lying requires increased brain activity compared to telling the truth, because the truth must be suppressed, essentially creating more work for the brain. In 2001, he published his first work on lie detection using a modified form of the Guilty Knowledge Test, which is sometimes used in polygraph tests. The subjects, right-handed male college students, were given a card and a yes/no handheld clicker. While undergoing a brain scan, they were told to lie to a computer asking questions only when the question would reveal their card. The subjects were given $20 for participating, and told they would receive more money if they deceived the computer; however, none did. His studies showed that the inferior and superior prefrontal and anterior cingulate gyri and the parietal cortex showed increased activity during deception. In 2002, he licensed his methods for lie detection to the No Lie MRI company located in San Diego, California. Working As "Prospects of fMRI as a Lie Detector" states, fMRI uses electromagnets to create pulse sequences in the cells of the brain. The fMRI scanner then detects the different pulses and fields, which are used to distinguish tissue structures, the layers of the brain, and matter type, and to see growths. The functional component allows researchers to see activation in the brain over time and to assess efficiency and connectivity by comparing blood use in the brain, identifying which portions of the brain are using more oxygen and are thus in use during a specific task. This is called the blood-oxygen-level-dependent (BOLD) hemodynamic response. fMRI data have also been examined with machine learning algorithms to decode whether subjects believed or disbelieved statements, ranging from mathematical and semantic statements to statements of religious belief. In this study, independent component features were used to train the algorithms, achieving up to 90% accuracy in predicting a subject's response when prompted to indicate with a button press whether they believed or disbelieved a given assertion.
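The decoding step can be illustrated with a minimal sketch: a linear classifier trained on per-trial feature vectors standing in for the study's independent component features. The data here are synthetic and the pipeline is an assumption for illustration, not the published method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 trials x 40 features per trial. In the study
# the features came from the fMRI signal; here they are random numbers with
# a small injected class difference so the classifier has something to learn.
n_trials, n_features = 200, 40
labels = rng.integers(0, 2, size=n_trials)          # 0 = disbelief, 1 = belief
features = rng.normal(size=(n_trials, n_features))
features[labels == 1, :5] += 0.8                    # class-dependent signal

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

On real data, the feature vectors would come from an independent component analysis of the fMRI time series rather than a random generator, and accuracy would be estimated with cross-validation across subjects.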
Brain activation Activation of BA 40, the superior parietal lobe, the lateral left MRG, the striatum, and the left thalamus was unique to truth, while activation of the precuneus, posterior cingulate gyrus, prefrontal cortex, and cerebellum points to a network shared by truth and lie. The most brain activity occurs in both sides of the prefrontal cortex, which is linked to response inhibition, indicating that deception may involve inhibition of truthful responses. Overall, bilateral activation occurs in deception in the middle frontal gyrus, parahippocampal gyrus, the precuneus, and the cerebellum. Looking into different styles of lying, we see differentiation in the locations of activation. Spontaneous lies require retrieval from semantic and episodic memory to quickly formulate a viable situation, which remains in working memory while visual images are created to further hide the truth. The areas associated with this retrieval (the ventrolateral prefrontal cortex, anterior prefrontal cortex, and precuneus) are activated, as are the dorsolateral prefrontal cortex, anterior cingulate, and posterior visual cortex. The anterior cingulate cortex is used for cross-checking and probability. Well-rehearsed, memorized, and coherent lies require episodic memory activation, creating increased activation in the right anterior prefrontal cortex, BA 10, and the precuneus. The parahippocampal cortex may be used in this process to generalize lies to situations, because no cross-checking is needed. Newer studies have considered the salience of lying in a variety of situations: if a lie is of lower salience, activation is broader and more general, while salient lies show specific activation in regions associated with inhibition and selection. Many areas are much more active in lying than in truth, possibly meaning it is harder to retrieve false information than true memories because truth has more encoded retrieval cues. Interestingly, the limbic system, which is involved in many different emotional responses including the sympathetic nervous system, is not activated in deception. Legality Historically, fMRI lie detector tests have not been allowed into evidence in legal proceedings, the most famous attempt being Harvey Nathan's insurance fraud case in 2007. This pushback from the legal system may be based on the Employee Polygraph Protection Act of 1988, which acts to protect citizens from incriminating themselves, and on the right to silence. The legal system would specifically require many more studies on the false negative rate to decide whether the absence of detected deception proves innocence. The lack of legal support has not stopped companies like No Lie MRI and CEPHOS from offering private fMRI scans to test deception. There is potential to use fMRI evidence as a more advanced form of lie detection, particularly in identifying the regions of the brain involved in truth telling, deception, and false memories. False memories are a barrier in validating witness testimonies. Research has shown that when presented with a list of semantically related words, participants often unintentionally recall additional words that were not originally present. This is a normal psychological occurrence, but it presents numerous problems to a jury attempting to sort out the facts of a case. fMRI imaging is also being used to analyze brain activity during intentional lies. Findings have shown that the dorsolateral prefrontal cortex activates when subjects are pretending to know information, but that the right anterior hippocampus activates when a subject presents false recognition, in contrast to lying or accurately telling a truth. This indicates that there may be two separate neural pathways for lying and false memory recall. However, there are limits to how well brain imaging can distinguish between truths and deceptions, because these regions are common areas of executive control function; it is difficult to tell whether the activation seen is due to the lie told or to something unrelated. Future research aims to differentiate between when someone has genuinely forgotten an experience and when someone has made an active choice to withhold or fabricate information.
Developing this distinction to the point of scientific validity would help discern when defendants are being truthful about their actions and when witnesses are being truthful about their experiences. Pros and cons While fMRI studies on deception have claimed detection accuracy as high as 90%, many problems stand in the way of implementing this style of detection. At a basic level, administering fMRI is extremely difficult and costly. Only yes-or-no answers can be used, which allows for flexibility in the truth and in the style of lying. fMRI requires the participant to remain still for long periods, and small movements can create issues with the scan. Some people are unable to undergo a scan, such as those with certain medical conditions, claustrophobia, or implants. Regarding deception specifically, there is little research on non-compliant individuals. The criminal justice system interacts with many types of people not often represented in fMRI studies, such as addicts, juveniles, the mentally ill, and the elderly. A study of Chinese individuals found that language and cultural differences did not change results, and another study (S. Spence, 2011) examined 52 schizophrenic patients, 27 of whom were experiencing delusions at the time. While these studies are promising, the lack of extensive research on the populations that would be most affected by the admission of fMRI into the legal system is a major drawback. In addition, fMRI deception tests look only at changes in brain activity, which, like the polygraph, does not directly show that lying is occurring. When dealing with complex styles of lying or questioning, a control condition is critical to differentiate deception from other heightened emotional states unrelated to it. Some studies, such as Ganis et al., have shown that it is possible to fool an fMRI by learning countermeasures. References Magnetic resonance imaging Lie detection
FMRI lie detection
[ "Chemistry" ]
1,750
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
68,535,281
https://en.wikipedia.org/wiki/List%20of%20star%20systems%20within%2075%E2%80%9380%20light-years
This is a list of star systems within 75–80 light-years of Earth. The closest B-type star, Regulus, is in this list. See also Lists of stars List of star systems within 70–75 light-years List of star systems within 80–85 light-years List of nearest stars and brown dwarfs References Lists of stars Star systems Lists by distance
List of star systems within 75–80 light-years
[ "Physics", "Astronomy" ]
76
[ "Lists by distance", "Physical quantities", "Distance", "Astronomical objects", "Star systems" ]
68,535,891
https://en.wikipedia.org/wiki/Polymer%20devolatilization
Polymer devolatilization, also known as polymer degassing, is the process of removing low-molecular-weight components such as residual monomers, solvents, reaction by-products and water from polymers. Motivation When exiting a reactor after a polymerization reaction, many polymers still contain undesired low-molecular-weight components. These components may make the product unusable for further processing (for example, a polymer solution cannot directly be used for plastics processing), may be toxic, may cause bad sensory properties such as an unpleasant smell, or may worsen the properties of the polymer. It may also be desirable to recycle monomers and solvents back into the process. Plastic recycling can also involve removal of water and volatile degradation products. Basic process types Devolatilization can be carried out when a polymer is in the solid or liquid phase, with the volatile components going into a liquid or gas phase. Examples are: Solid polymer, liquid phase: extraction of caprolactam from polyamides with water. Solid polymer, gas phase: removal of ethylene from polyethylene via air or nitrogen in silos. Liquid polymer, gas phase: removal of styrene from polystyrene via vacuum. Different types of devolatilization steps are usually combined to overcome the limitations of the individual steps. Physical and chemical aspects Thermodynamics For volatiles to leave the polymer, their thermodynamic activity must be higher in the polymer than in the other phase. In order to design such a process, the activity needs to be calculated, usually via the Flory–Huggins solution theory. The driving force can be enhanced via higher temperatures, or via a lower partial pressure of the volatile component obtained by applying an inert gas or reduced pressure.
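As a rough sketch of that calculation, the Flory–Huggins expression for the solvent activity can be evaluated as follows; the parameter values are assumed for illustration, not data for any real polymer–solvent pair.

```python
import math

def solvent_activity(phi_solvent: float, chi: float, dp: float = 1e6) -> float:
    """Flory-Huggins activity of a volatile solvent in a polymer.

    ln(a1) = ln(phi1) + (1 - 1/N) * phi2 + chi * phi2**2
    where phi1 and phi2 are the solvent and polymer volume fractions, N is
    the polymer's degree of polymerization, and chi is the Flory-Huggins
    interaction parameter.
    """
    phi_polymer = 1.0 - phi_solvent
    ln_a = (math.log(phi_solvent)
            + (1.0 - 1.0 / dp) * phi_polymer
            + chi * phi_polymer ** 2)
    return math.exp(ln_a)

# Illustrative numbers: 2% residual solvent with chi = 0.4.
print(solvent_activity(0.02, 0.4))   # ~0.08, roughly four times the volume fraction
```

Because the residual solvent's activity far exceeds its volume fraction, its equilibrium partial pressure over the melt is correspondingly high; vacuum or a stripping gas keeps the actual partial pressure below this value, which is the driving force for removal.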
Diffusion In order to be removed from the polymer, the volatile components need to travel to a phase boundary via diffusion. Because of the low diffusion coefficients of volatiles in polymers, this can be the rate-determining step. Diffusion can be enhanced by higher temperatures or by short diffusion lengths, which increase the Fourier number. Heat transfer Because polymers and polymer solutions often have a very high viscosity, the flow in devolatilizers is laminar, leading to low heat transfer coefficients, which can also be a limiting factor. Chemical stability Higher temperatures can also affect the chemical stability of the polymer and thus its use properties. If a polymer's ceiling temperature is exceeded, it will partially revert to its monomers, destroying its usability. More generally, polymer degradation also occurs during devolatilization, limiting the temperature and residence time available for the process. Foam vs. film devolatilization There are two basic forms of devolatilization to a vacuum. In foam devolatilization, bubbles inside the polymer solution nucleate and grow, finally bursting and releasing their volatile content to the surroundings. This requires sufficient vapor pressure. Where possible, this is a very efficient method, because the volatiles only need to diffuse a short way. Film devolatilization occurs when there is no longer sufficient vapor pressure to generate bubbles, and relies on sufficient surface area and good mixing. In this case, a stripping agent such as nitrogen may be added to the polymer to improve mass transfer through bubbles. Types of devolatilizers for polymer melt Devolatilizers for polymer melts are classified as static or moving, also called "still" and "rotating" in the literature. Static devolatilizers Static devolatilizers include: Falling strand devolatilizers: polymer is partitioned into many individual strands which fall down in a vacuum chamber. Diffusion moves volatiles into the gas phase, and they are then collected via a vacuum system. This is usually the last stage of a devolatilization process, when vapor pressure is low. Falling film evaporators: polymer falls down vertical walls, with volatiles diffusing out on the side that is not in contact with the walls. Tube evaporators: a boiling polymer solution flows downward in a vertical shell-and-tube heat exchanger into a separator. Polymer is collected at the bottom; vapor is collected via a vacuum system and condensers. Flash evaporators: a polymer solution is preheated and brought into a separator, where a pressure below the vapor pressure of the solution leads to part of the volatiles evaporating. Moving devolatilizers Co-rotating twin screw extruders: the polymer solution is brought into a co-rotating twin screw extruder, where it is subjected to shear and mechanical energy input and where vapors are drawn off. This type of machine allows different pressures in different zones. An advantage is the self-cleaning action of these extruders. Single-screw extruders: in principle similar to co-rotating twin screw extruders, but without the self-cleaning action. Wiped-film evaporators: polymer solution is brought into a single large vessel, where a rotor agitates the product and creates surface renewal. Only a single pressure level is possible in these machines. Large-volume kneaders: a polymer solution is brought into a large-volume kneader and subjected to shear at longer residence times than in an extruder. Devolatilizers for suspensions and latexes Removal of monomers and solvents from latexes and suspensions, for example in the production of synthetic rubber, is usually done via stirred vessels. References Chemical engineering Process engineering Polymers
Polymer devolatilization
[ "Chemistry", "Materials_science", "Engineering" ]
1,130
[ "Process engineering", "Chemical engineering", "Mechanical engineering by discipline", "nan", "Polymer chemistry", "Polymers" ]
68,536,133
https://en.wikipedia.org/wiki/Salkowski%27s%20test
Salkowski's test, also known simply as the Salkowski test, is a qualitative chemical test used in chemistry and biochemistry to detect the presence of cholesterol and other sterols. The method is named after the German biochemist Ernst Leopold Salkowski, who is known for developing multiple new chemical tests for detecting different kinds of molecules (besides cholesterol and other sterols, also creatinine, carbon monoxide, glucose, and indoles). A solution that tests positive on Salkowski's test turns red and develops a yellow glow. Basic information Procedure The Salkowski test requires a sample to be tested for sterols, as well as chloroform and concentrated sulfuric acid, which together constitute Salkowski's reagent. Usually a solution of the sample in chloroform is prepared first and then treated with concentrated sulfuric(VI) acid, after which the whole solution is shaken well. It is important to use only dried glassware, as a dehydration reaction occurs during the procedure. A solution that tests positive on this qualitative chemical test exhibits two distinct layers in the test tube: the upper (chloroform) layer turns bluish red to violet, while the sulfuric acid layer becomes yellow to green, with a greenish glow being visible. If a sample does not contain cholesterol or other sterols, the tested solution retains its original colour. The Salkowski test can also be used to detect the presence of indoles (crystalline alkaloids that are degradation products of tryptophan-containing proteins). In such cases a sample is treated with nitric acid and a 2% solution of potassium nitrite, with a positive reaction indicated by a red colour. Chemistry of the test Treating a solution of a sterol-containing sample with chloroform and highly hygroscopic sulfuric acid leads to a dehydration reaction (two water molecules are removed from two cholesterol molecules) and the formation of new double bonds. During the reaction two sterol molecules bind together to form a bisterol (bisteroid); in the case of cholesterol, this is bicholestadiene (a doubled cholestene with two double bonds). The red colour of the solution is due to the bisulfonic acid of bicholestadiene, formed when sulfuric acid sulfonates the bicholestadiene. References Biochemistry detection methods Salkowski test for cholesterol – Its principle and procedure
Salkowski's test
[ "Chemistry", "Biology" ]
558
[ "Biochemistry methods", "Chemical tests", "Biochemistry detection methods" ]
68,536,741
https://en.wikipedia.org/wiki/Feebly%20interacting%20particle
Feebly interacting particles (FIPs) are subatomic particles defined by having extremely suppressed interactions with the Standard Model (SM) bosons and/or fermions. These particles are potential thermal dark matter candidates, extending the model of weakly interacting massive particles (WIMPs) to include weakly interacting sub-eV particles (WISPs) and others. FIP physics is also known as dark-sector physics. Candidates FIP candidates could be massive (FIMPs/WIMPs) or massless, coupled to SM particles through some minimal coupling strength. Light FIPs are theorized to be dark matter candidates, and they may provide an explanation for the origin of neutrino masses and for CP symmetry in strong interactions. Neutrinos technically qualify as FIPs, but the acronym "FIP" is usually intended to refer to some other, as-yet unknown particle. Cai, Cacciapaglia, and Lee (2022) proposed massive gravitons as feebly interacting particle candidates. See also WIMP – weakly interacting massive particle WISP – weakly interacting sub-eV / slight / slender particle References Dark matter Hypothetical particles Physics beyond the Standard Model Astroparticle physics Exotic matter
Feebly interacting particle
[ "Physics", "Astronomy" ]
251
[ "Dark matter", "Hypothetical particles", "Unsolved problems in astronomy", "Concepts in astronomy", "Theoretical physics", "Astroparticle physics", "Unsolved problems in physics", "Astrophysics", "Subatomic particles", "Particle physics", "Exotic matter", "Particle physics stubs", "Theoretic...
68,537,938
https://en.wikipedia.org/wiki/List%20of%20cities%20by%20average%20precipitation
This is a selected list of cities around the world with their average monthly precipitation in litres per square metre (equivalently millimetres). Africa Asia Europe North America Oceania South America See also List of cities by average temperature List of cities by sunshine duration List of weather records References Precipitation Weather-related lists Lists of cities List of cities in Europe by precipitation
List of cities by average precipitation
[ "Physics" ]
70
[ "Weather", "Physical phenomena", "Weather-related lists", "Climate and weather statistics" ]
68,539,401
https://en.wikipedia.org/wiki/Zissis%20Samaras
Zissis Samaras (born 22 February 1956) is a Greek mechanical engineer and a professor of thermodynamics at Aristotle University of Thessaloniki, where he began his academic career in 1989. He is also the head of the Laboratory of Applied Thermodynamics (LAT) and co-founder of two environmental spinoffs, EMISIA SA and Exothermia. Career Zissis Samaras was born in Thessaloniki in 1956 and completed his PhD at Aristotle University of Thessaloniki in 1989. He later became a lecturer in thermodynamics at the same institution. In 2003, he was appointed to a professorship, and he served as head of department from 2007 to 2009. His research deals primarily with engine and vehicle emissions testing and modeling, and he has carried out a wide range of projects on modeling emissions from internal combustion engines. In recent years, he has focused more broadly on sustainable energy and on experimental techniques for testing exhaust emissions. In the past two decades he has led multiple research projects with LAT involving the European Commission, and the Greek state has made use of technology developed by the Laboratory of Applied Thermodynamics and Samaras. He is the Vice Chair of the European Road Transport Research Advisory Council. Since 2021, he has been the coordinator of the European Commission's fuel consumption project Mile21. Education He received his BSc/MSc and PhD degrees in mechanical engineering from Aristotle University of Thessaloniki. Published work Samaras has authored and co-authored more than 300 scientific publications. References 1956 births Living people Greek engineers Academic staff of the Aristotle University of Thessaloniki Mechanical engineers 21st-century Greek scientists Engineers from Thessaloniki
Zissis Samaras
[ "Engineering" ]
342
[ "Mechanical engineers", "Mechanical engineering" ]
68,539,682
https://en.wikipedia.org/wiki/NGC%20575
NGC 575 is a barred spiral galaxy of Hubble type SB(rs)c in the constellation Pisces. It is approximately 145 million light years from the Milky Way and has a diameter of about 70,000 light years. The object was discovered on October 17, 1876, by Édouard Stephan (listed as NGC 575) and on January 18, 1896, by Stéphane Javelle (listed as IC 1710). References External links SIMBAD Astronomical Database Deep Sky Catalog Barred spiral galaxies Pisces (constellation) 0575 005634 IC objects Discoveries by Édouard Stephan
NGC 575
[ "Astronomy" ]
118
[ "Pisces (constellation)", "Constellations" ]
68,540,818
https://en.wikipedia.org/wiki/Escaped%20plant
An escaped plant is a cultivated plant that has escaped from agriculture, forestry or garden cultivation and has become naturalized in the wild. Usually not native to an area, escaped plants may become invasive, and they are therefore the subject of research in invasion biology. Some ornamental plants have characteristics which allow them to escape cultivation and become weedy in alien ecosystems, with far-reaching ecological and economic consequences. Escaped garden plants may be called garden escapes or escaped ornamentals. Sometimes, their origins can even be traced back to botanical gardens. Dispersal All escaped plants belong to the so-called hemerochoric plants, a term used across the board for plants that have been introduced directly or indirectly by humans. It also covers plants introduced unintentionally, whether through seed contamination (speirochoric) or through unintentional transport (agochoric). Plants may escape from cultivation in various ways, including the dumping of green waste in bushland and road reserves, and by birds or other animals eating the fruits or seeds and dispersing them. Others are accidental hitchhikers that escape on ships, vehicles, and equipment. Plants can also escape by sending out stolons (runners), as stolons are capable of independent growth in other areas. Garden escapees can be adventive, meaning they can be established by human influence in a site outside their area of origin. Some plants, such as the opium poppy Papaver somniferum, escaped from cultivation so long ago that they are considered archaeophytes, and their original source may be obscure. Occasionally, seed contamination also introduces new plants that reproduce for a short period of time. The proportion of adventive species in open ruderal corridors can exceed 30% of the flora of such locations. Further, ornamental alien plants can easily escape their confined areas (such as gardens and greenhouses) and naturalize if the climate outside changes to their benefit. In the US, there are over 5,000 escaped plants, many of which are escaped ornamentals. Ecological threats Many invasive neophytes in Australia and New Zealand were originally garden escapees. The Jerusalem thorn forms impenetrable thorny thickets in the Northern Territory, which can be several kilometers in length and width. Two other plants introduced as ornamental garden plants, Asparagus asparagoides and Chrysanthemoides monilifera, now dominate the herbaceous layer in many eucalyptus forests and supplant perennials, grasses, orchids, and lilies. Neophytes that compete aggressively, and which displace and repel populations of native species, may permanently change the habitat for native species and can become an economic problem. For example, species of Opuntia (prickly pears) introduced from America to Australia have become wild, rendering territories unsuitable for grazing; the same goes for European gorse (Ulex europaeus) in New Zealand. Rhododendron species introduced as ornamental garden plants in the British Isles crowd out the islands' vegetation; the same can be seen in many acidic peatlands in Atlantic and subatlantic climates. Robinia pseudoacacia was imported from America to Central Europe for its rapid growth, and it now threatens the scarce steppe and natural forest areas of the drylands. Examples in forests include Prunus serotina, which was initially introduced to speed up the accumulation of humus.
In North America, tamarisk trees, native to southern Europe and temperate parts of Asia, have proven to be problematic plants. In the nutrient-poor but grass- and shrub-rich heathlands (fynbos) of the Cape region of South Africa, eucalyptus species from Australia are spreading strongly. Largely adapted to poor soils, and lacking in the Cape region both competitors for nutrients and parasites that could regulate their populations, they are able to greatly modify the biotope. In Hawaii, the epiphytic fern Phlebodium aureum, native to the tropical Americas, has spread widely and is considered an invasive plant. Particularly unstable ecosystems, already unbalanced by disturbance, can be further damaged by escaped plants if the vegetation is already weakened. In the humid forests of Australia, escaped plants first colonize along roads and paths and then enter the interior of the regions these surround. Thunbergia mysorensis, native to India, has invaded the rainforests around the coastal city of Cairns in Queensland, climbing even trees 40 m high. In Central Australia, the Eurasian species Tamarix aphylla grows along river banks, displacing native tree species and the wildlife associated with them, lowering water levels and increasing soil salinity. As in the United States, tamarisks have proven to be formidable bio-invaders, and the fight against this species of tree, which has since spread widely, appears almost hopeless. Related terms Escaped plants can fall within the definition of, and may be related to, the botanical terms below: Agriophyte: Plant species that have invaded natural or near-natural vegetation and can survive there without human intervention. Established in their new natural habitats, they remain part of the natural vegetation even after human influence has ceased, and are independent of humans in their continued existence. Examples in Central Europe are waterweed, Douglas fir and Japanese knotweed. Alien: A non-native species introduced by man. Archaeophyte: An alien species introduced by human activity long ago, such as the sweet chestnuts introduced by the Romans in Germany and now part of the natural vegetation, and the opium and field poppies. Epecophyte: Species of recent appearance, usually numerous and constant in a country, but confined to artificial habitats such as meadows and ruderal vegetation. They are so dependent on humans for their existence that their habitats require constant renewal. Ephemerophyte: Species that are introduced only sporadically, that briefly escape from cultivation, or that would disappear again without a constant resupply of seeds. In other words, they can establish themselves temporarily, but they are not in a position to meet all the conditions of the territory. A cold winter or an unusual drought can lead to the death of these plants; most of the time, they are not able to compete with the local flora under extreme conditions. Hemerochory: Plants or their seeds may have been transported voluntarily (introduction) or involuntarily by humans into a territory which they could not have colonized through their own natural mechanisms of dissemination, or at least only much more slowly. They are able to maintain themselves in this new living space without deliberate help from man. Many Central European cultivated and ornamental plants are hemerochoric, insofar as they have escaped and subsist independently of cultivation.
These are the forms of hemerochory: Agochoric: Plants that are spread through accidental transport by, among other things, ships, trains, and cars. On land, agochoric plants used to be common in harbors, at train stations, or along railway lines. Australia, like New Zealand, has taken stringent measures to prevent spread by seed or human transport: agricultural implements imported into Australia must be thoroughly cleaned, and air travelers from other continents are required to thoroughly clean the soles of their shoes. Ethelochoric: Deliberate introduction of seedlings, seeds, or plants into a new habitat by humans. Many cultivated plants which currently play an important role in human nutrition have been deliberately disseminated by humans: wheat, barley, lentil, broad bean and flax, for example. Speirochoric: Unintentional introduction by seeds. As seed samples also contain the seeds of other plants from the field from which they were obtained, the trade in seeds of useful plants has also allowed the spread of other species. Speirochoric plants are therefore sown on soil prepared by man and compete with useful plants. Wild chamomile, poppy, cornflower, and corn buttercup are examples of plants that were unintentionally scattered. Neophyte: An alien species introduced by man after 1500 AD. Example species Examples of escaped plants and/or garden escapees include: Alchemilla mollis Allium schoenoprasum Allium ursinum Anredera cordifolia Aquilegia vulgaris Araujia sericifera Ardisia crenata Asclepias tuberosa Asparagus aethiopicus Baccharis halimifolia Bartlettina sordida Berberis thunbergii Borago officinalis Bryophyllum delagoense Buddleja davidii Calystegia silvatica Cardiospermum halicacabum Carpobrotus edulis Castanea sativa Cenchrus setaceus Centranthus ruber Cestrum elegans Cestrum parqui Clematis orientalis Clerodendrum bungei Consolida ajacis Convallaria majalis Coreopsis basalis Crocosmia spp. Cyclamen persicum Cymbalaria muralis Delairea odorata Dichondra repens Digitalis purpurea Dolichandra unguis-cati Doronicum orientale Echinops exaltatus Echium candicans Elodea canadensis Epiphyllum oxypetalum Eriocapitella hupehensis Erythranthe moschata Eschscholzia californica Foeniculum vulgare Galega officinalis Galinsoga parviflora Hedera helix Hedera hibernica Helianthus annuus Helianthus tuberosus Hemerocallis fulva Heracleum mantegazzianum Hesperis matronalis Ilex aquifolium Impatiens glandulifera Impatiens parviflora Ipomoea cairica Ipomoea indica Iris pseudacorus Isatis tinctoria Juglans regia Kalanchoe delagoensis Kniphofia uvaria Laburnum anagyroides Lamiastrum galeobdolon Lantana camara Lavandula stoechas Lespedeza bicolor Ligustrum lucidum Lilium lancifolium Linaria purpurea Lonicera maackii Lysimachia punctata Lythrum salicaria Macfadyena unguis-cati Melastoma sanguineum Monarda punctata Nothoscordum gracile Nymphaea mexicana Olea europaea subsp.
cuspidata Opuntia ficus-indica Oxalis debilis Papaver cambricum Pelargonium peltatum Phlox paniculata Physalis alkekengi Prunus serotina Reynoutria japonica Rhododendron ponticum Ribes rubrum Ricinus communis Robinia pseudoacacia Rubus hawaiensis Ruellia simplex Senecio angulatus Senecio elegans Senna pendula Silene armeria Solanum lycopersicum Sparaxis tricolor Stachytarpheta mutabilis Sphagneticola trilobata Talinum paniculatum Thymus praecox Tradescantia fluminensis Tulipa sylvestris Vanilla × tahitensis Vinca major Vinca minor Watsonia meriana See also Volunteer plant Adventitious plant Archaeophyte Assisted colonization Hemerochory Neophyte Bibliography Angelika Lüttig, Juliane Kasten (2003): Hagebutte & Co: Blüten, Früchte und Ausbreitung europäischer Pflanzen. Fauna, Nottuln. ISBN 3-93-598090-6. Christian Stolz (2013): Archäologische Zeigerpflanzen: Fallbeispiele aus dem Taunus und dem nördlichen Schleswig-Holstein [Plants as indicators of archaeological find sites: case studies from the Taunus Mts. and from the northern part of Schleswig-Holstein (Germany)]. Schriften des Arbeitskreises Landes- und Volkskunde 11. Herrando-Moraira, S., Nualart, N., Herrando-Moraira, A. et al. (2019): Climatic niche characteristics of native and invasive Lilium lancifolium. Sci Rep 9, 14334. References External links ESCAPED GARDEN PLANTS AS A KEY THREATENING PROCESS Escape from confinement or garden escape (pathway cause) Invasive species Environmental conservation Environmental terminology Habitat
Escaped plant
[ "Biology" ]
2,601
[ "Pests (organism)", "Invasive species" ]
68,541,586
https://en.wikipedia.org/wiki/Tone%20indicator
A tone indicator or tone tag is a symbol attached to a sentence or message sent in textual form, such as over the internet, to explicitly state the intonation or intent of the message, especially when it may otherwise be ambiguous. Tone indicators start with a forward slash (/), followed by a short series of letters, usually a shortening of another word. Examples include /j, meaning "joking"; /srs, meaning "serious"; or /s, meaning "sarcastic". History Early attempts to create tone indicators stemmed from the difficulty of denoting irony in print media, and so several irony punctuation marks were proposed. The percontation point (⸮; a reversed question mark) was proposed by Henry Denham in the 1580s to denote a rhetorical question, but its usage died out by the 1700s. In 1668, John Wilkins proposed the irony mark, using an inverted exclamation mark (¡) to denote an ironic statement. Various other punctuation marks were proposed over the following centuries to denote irony, but none gained popular usage. In 1982, the emoticon was created to denote jokes (with :-)) or things that are not jokes (with :-(). The syntax of modern tone indicators stems from /s, which has long been used on the internet to denote sarcasm. This symbol is an abbreviated version of the earlier /sarcasm, itself a simplification of </sarcasm>, the form of a humorous XML closing tag marking the end of a "sarcasm" block, and therefore placed at the end of a sarcastic passage. Internet usage On the internet, one or more tone indicators may be placed at the end of a message. A tone indicator often takes the form of a forward slash (/) followed by an abbreviation of a relevant adjective; alternatively, a more detailed textual description (e.g., / friendly, caring about your well-being) may be used. For example, /srs may be attached to the end of a message to indicate that the message is meant to be interpreted in a serious manner, as opposed to, for example, being a joke (which is commonly represented as /j). Tone indicators are used to explicitly state the author's intent, instead of leaving the message up to interpretation. See also Internet slang Poe's law References Internet terminology
Tone indicator
[ "Technology" ]
489
[ "Computing terminology", "Internet terminology" ]
68,542,438
https://en.wikipedia.org/wiki/Phosphate%20phosphite
A phosphate phosphite is a chemical compound or salt that contains phosphate (PO₄³⁻) and phosphite (PO₃³⁻) anions. These are mixed anion compounds or mixed valence compounds; some contain a third anion. Phosphate phosphites frequently occur as metal-organic framework (MOF) compounds, which are of research interest for gas storage, detection or catalysis. In these, phosphate and phosphite form bridging ligands to hard metal ions, and protonated amines serve as templates. Naming A phosphate phosphite compound may also be called a phosphite phosphate. Production Phosphate phosphite compounds are frequently produced by hydrothermal synthesis, in which a water solution of the ingredients is enclosed in a sealed container and heated. Phosphate may be reduced to phosphite, or phosphite oxidised to phosphate, in this process. Properties On heating, Related Related to these are the nitrite nitrates and arsenate arsenites. List References Phosphites Phosphates Mixed anion compounds
Phosphate phosphite
[ "Physics", "Chemistry" ]
218
[ "Matter", "Mixed anion compounds", "Salts", "Phosphates", "Ions" ]
68,542,699
https://en.wikipedia.org/wiki/Vornorexant
Vornorexant, also known by its developmental code names ORN-0829 and TS-142, is an orexin antagonist medication which is under development for the treatment of insomnia and sleep apnea. It is a dual orexin OX1 and OX2 receptor antagonist (DORA). The medication is taken by mouth. As of June 2021, vornorexant is in phase 2 clinical trials for insomnia and phase 1 trials for sleep apnea. It is under development by Taisho Pharmaceutical. Vornorexant has a time to peak of 2.5 hours and a relatively short elimination half-life of 1.3 to 3.3 hours. It was designed to have a short half-life and duration in order to reduce next-day side effects like somnolence. See also Seltorexant – another investigational short-acting orexin receptor antagonist List of investigational sleep drugs § Orexin receptor antagonists References External links Vornorexant (ORN-0829, TS-142) - AdisInsight Experimental psychiatric drugs Fluoroarenes Hypnotics Ketones Orexin antagonists Oxazines Pyrazoles Pyridines Triazoles
Vornorexant
[ "Chemistry", "Biology" ]
253
[ "Hypnotics", "Behavior", "Ketones", "Functional groups", "Sleep" ]
68,543,267
https://en.wikipedia.org/wiki/Search%20for%20Hidden%20Particles
The Search for Hidden Particles (SHiP) is a proposed fixed-target experiment at CERN's Super Proton Synchrotron (SPS) with the goal of searching for weakly interacting particles and measuring their interactions. In October 2013, the Expression of Interest letter for SHiP was submitted to the SPS Council (SPSC). The Technical Proposal followed in April 2015, describing the experimental and detector facility, and the Comprehensive Design Study was completed during 2016–19. The experiment is planned to begin in 2027 and to start collecting data in 2030. The SHiP Collaboration intends to search for weakly interacting particles whose masses are below the Fermi energy scale. Such particles cannot yet be detected at the Large Hadron Collider, though the High Luminosity LHC may open some possibilities. In addition, the SHiP detector will search for weakly-interacting sub-GeV dark matter particles. SHiP also plans to add information to the domain of tau neutrino physics: of the three neutrino flavors, the tau neutrino is the least studied. The experiment will aim to make the first direct observation of the tau antineutrino, as well as measurements of the tau neutrino and tau antineutrino cross-sections. Another goal is to study lepton flavor non-conservation by observing the decays of tau leptons. References External links SHiP experiment record on INSPIRE-HEP CERN experiments Particle experiments
Search for Hidden Particles
[ "Physics" ]
301
[ "Particle physics stubs", "Particle physics" ]
68,543,614
https://en.wikipedia.org/wiki/Passenger%20locator%20form
A passenger locator form (PLF) is a form used by some countries to obtain information about incoming passengers prior to international travel. It typically requests contact, journey, and stay details. It may take the form of a physical document, or be entirely electronic and contain little more than a barcode. Ireland During the COVID-19 pandemic, Ireland required incoming travellers to complete a Passenger Locator Form. This requirement was withdrawn with effect from Sunday 6 March 2022. United Kingdom During the COVID-19 pandemic, the UK Government required incoming travellers to complete a passenger locator form. This requirement was withdrawn with effect from 4 a.m. on Friday 18 March 2022. References Identity documents International travel documents
Passenger locator form
[ "Physics" ]
153
[ "Physical systems", "Transport", "Transport stubs" ]
68,543,797
https://en.wikipedia.org/wiki/Arsenate%20arsenite
An arsenate arsenite is a chemical compound or salt that contains arsenate (AsO₄³⁻) and arsenite (AsO₃³⁻) anions. These are mixed anion compounds or mixed valence compounds; some contain a third anion. Most known substances are minerals, but a few artificial arsenate arsenite compounds have been made. Many of the minerals are in the Hematolite Group. An arsenate arsenite compound may also be called an arsenite arsenate. Properties Some members of this group of materials, like mcgovernite, have an extremely large unit cell dimension of 204 Å. Related Mixed valence pnictide compounds related to the arsenate arsenites include the nitrite nitrates and phosphate phosphites. List References Arsenates Arsenites Mixed anion compounds
Arsenate arsenite
[ "Physics", "Chemistry" ]
178
[ "Ions", "Matter", "Mixed anion compounds" ]
64,197,042
https://en.wikipedia.org/wiki/Autogamy%20depression
Autogamy depression can be defined as the "lowered viability of autogamous progeny relative to geitonogamous progeny". Viability has also been evaluated in terms of percent fruit set or seed set rather than the reproductive fitness of the progeny. The experimental design for observing the occurrence of autogamy depression is called an "autogamy depression test", which researchers have described as analogous to a test for inbreeding depression. The possibility that the fitness of autogamous progeny differs from that of geitonogamous progeny comes from the understanding that plants can accumulate heritable mutational variation through both mitotic and meiotic division. Because plants have indeterminate growth, the apical meristems that contribute to the development of the reproductive structures of a plant have the potential to undergo continual mitosis, resulting in the accumulation of somatic mutations (acquired mutations). Research has demonstrated that long-lived plants can have a higher per-generation mutation rate, based on the occurrence of more mitotic cell divisions compared to short-lived plants. Deleterious mutations that appear and are expressed during mitotic growth are filtered out through cell lineage selection, in which cell lineages carrying deleterious mutations subject to developmental selection are replaced by vigorous cell lineages; however, somatic mutations that are not expressed will not be subject to selection during growth of the plant and will accumulate in the apical meristem. Phenotypic effects of somatic mutations There is evidence of the phenotypic effects of somatic mutations in the increased occurrence of chlorophyll mutants in some long-lived plants. Chlorophyll mutants are inherently easy to observe because of the phenotypic effects of the chlorophyll mutations. Because of the highly conserved nature of photosynthetic processes, these chlorophyll mutation rates can be generalized to most angiosperms. Somatic mutations accumulating during vegetative growth have also been found to affect the fitness of seedlings in the next generation. Expectations of the autogamy depression test Individual crowns are treated as "independent mitotic mutation-accumulation lines", so deleterious somatic mutations will appear in the autogamous crosses as heterozygous or homozygous at the same locus (~25% homozygous), while in the geitonogamous crosses they will appear only as heterozygous. Autogamy depression can be calculated through the simple equation AD = 1 − (w_a / w_g), where AD is the autogamy depression, w_a is the fitness of the autogamous progeny and w_g is the fitness of the geitonogamous progeny. When the fitnesses are equal, AD is 0. For example, if the autogamous progeny have fitness w_a = 0.6 and the geitonogamous progeny w_g = 0.8, then AD = 1 − 0.6/0.8 = 0.25. The difference can instead be calculated by the equation D = w_g − w_a. References Evolution Genetics Plant reproduction
Autogamy depression
[ "Biology" ]
592
[ "Behavior", "Plant reproduction", "Plants", "Genetics", "Reproduction" ]
64,197,495
https://en.wikipedia.org/wiki/Andrew%20Anagnost
Andrew Anagnost is the President and CEO of Autodesk, having been appointed to the positions in 2017. He took over the positions from Carl Bass, who resigned in February 2017. Before the promotion, he had served in various other roles for the company since joining in 1997. He holds degrees from California State University, Northridge and Stanford University. Early life and education Anagnost grew up in Van Nuys, California and initially dropped out of high school. After issues with legal and educational authorities, his family helped him enroll in a new high school and he went on to graduate. Anagnost then earned a bachelor's degree in mechanical engineering from California State University, Northridge (CSUN) in 1987. His mother, sister, and brother also graduated from CSUN. During his bachelor's degree, Anagnost completed an internship at Lockheed Martin. He later obtained an MS in engineering science and a PhD in aeronautical engineering with a minor in computer science from Stanford University. Career After completing his bachelor's degree, Anagnost initially worked as a composites structure engineer and propulsion installation engineer at Lockheed Martin, where he had previously interned. He left the position to pursue further education at Stanford, leading to a position at the NASA Ames Research Center as a National Research Council post-doctoral fellow. Finding the aeronautics business 'too slow', he joined the Exa Corporation in Boston in 1992, before joining Autodesk as a product manager in 1997. Early in his career at Autodesk, he led development of the company's manufacturing products and increased the revenue of Autodesk Inventor five-fold to more than $500 million. Working his way up through the company over the years, he reached the position of Chief Marketing Officer and SVP of Business Strategy & Marketing. In these roles, he was credited with Autodesk's transition to software as a service, as well as the adoption of cloud computing. Following the resignation of Carl Bass, Anagnost was appointed interim co-CEO together with Amar Hanspal, the Chief Product Officer. Following a four-month search, Anagnost was permanently appointed as President and CEO of the company. In this role, Anagnost has pushed to refocus the company on software for construction, leading to the demise of some other company ventures and a workforce reduction in which 1,200 employees lost their jobs. As part of the new focus, Autodesk acquired the construction tech start-ups PlanGrid for $875 million, the company's biggest ever acquisition, and BuildingConnected for $275 million in 2018. Additionally, since becoming CEO, the company's share price has nearly tripled and Autodesk has reached a market value of $41.1B, entering the Forbes Global 2000 and Fortune 500. Personal life and philanthropy Growing up, Anagnost's dream job was to work on space ships for NASA. He enjoys reading science fiction novels, with The Fountains of Paradise being one of his favorite works, and is a fan of both the Star Wars and Star Trek franchises. In 2018, Anagnost was one of the judges in the Annual Engineering Showcase at his alma mater CSUN and hosted a talk at the university. The following year he received the 2019 Distinguished Alumni Award from CSUN.
That same year, Anagnost and his wife donated $300,000 to the university to establish the Teresa Sendra-Anagnost Memorial Scholarship Endowment in honor of his mother, who died in 2011 after suffering complications from cardiac surgery. The endowment supports outstanding students in the university's College of Engineering and Computer Science through funding of their education. Autodesk also donated $1 million to CSUN in 2020 to support the founding and construction of a Center for Integrated Design and Advanced Manufacturing at the university. References Living people American computer businesspeople American technology chief executives California State University, Northridge alumni Stanford University School of Engineering alumni Autodesk people Year of birth missing (living people)
Andrew Anagnost
[ "Technology" ]
819
[ "Lists of people in STEM fields", "Proprietary technology salespersons" ]
64,198,260
https://en.wikipedia.org/wiki/Ginsenoside%20Rb1
Ginsenoside Rb1 (GRb1) is a chemical compound belonging to the ginsenoside family. Like other ginsenosides, it is found in the plant genus Panax (ginseng), and has a variety of potential health effects including anticarcinogenic, immunomodulatory, anti-inflammatory, antiallergic, antiatherosclerotic, antihypertensive, and antidiabetic effects as well as antistress activity and effects on the central nervous system. Pharmacological effects A 1998 study by Seoul National University reported that GRb1 and GRg3 (ginsenosides Rb1 and Rg3) significantly attenuated glutamate-induced neurotoxicity by inhibiting the overproduction of nitric oxide synthase, among other findings regarding their neuroprotective properties. In 2002, the Laboratory for Cancer Research at Rutgers University showed that GRb1 and GRg1 have a neuroprotective effect on spinal cord neurons, while ginsenoside Re did not exhibit any activity. GRb1 and GRg1 are proposed to represent potentially effective therapeutic agents for spinal cord injuries. The protection that GRg1 (ginsenoside Rg1) and GRb1 offer against Alzheimer's disease symptoms in mice was first reported by researchers in 2015. GRg1 affected three metabolic pathways, the metabolism of lecithin, amino acids and sphingolipids, while GRb1 treatment affected lecithin and amino acid metabolism. It was reported in 2017 that GRb1 improved cardiac function and remodelling in heart failure in mice: treatment with ginsenoside Rb1 potentially attenuated cardiac hypertrophy and myocardial fibrosis. Proposed biosynthesis The biosynthesis of GRb1 in Panax ginseng starts from farnesyl diphosphate (FPP), which is converted to squalene by squalene synthase (SQS), then to 2,3-oxidosqualene by squalene epoxidase (SE). The 2,3-oxidosqualene is then converted to dammarenediol-II by cyclization, with dammarenediol-II synthase (DS) as the catalyst. The dammarenediol-II is converted to protopanaxadiol and then to ginsenoside Rd. Finally, GRb1 is synthesized from ginsenoside Rd, catalysed by UDPG:ginsenoside Rd glucosyltransferase (UGRdGT), a biosynthetic enzyme of GRb1 first discovered in 2005. References Biosynthesis Triterpene glycosides
Ginsenoside Rb1
[ "Chemistry" ]
595
[ "Biosynthesis", "Metabolism", "Chemical synthesis" ]
64,198,506
https://en.wikipedia.org/wiki/Peroxydiphosphoric%20acid
Peroxydiphosphoric acid (H4P2O8) is an oxyacid of phosphorus. Its salts are known as peroxydiphosphates. It is one of two peroxyphosphoric acids, along with peroxymonophosphoric acid. History Both peroxyphosphoric acids were first synthesized and characterized in 1910 by Julius Schmidlin and Paul Massini, who obtained peroxydiphosphoric acid in poor yields from the reaction between diphosphoric acid and highly concentrated hydrogen peroxide. H4P2O7 + H2O2 → H4P2O8 + H2O Preparation Peroxydiphosphoric acid can be prepared by the reaction between phosphoric acid and fluorine, with peroxymonophosphoric acid as a by-product. 2 H3PO4 + F2 → H4P2O8 + 2 HF The compound is not commercially available and must be prepared as needed. Peroxodiphosphates can be obtained by electrolysis of phosphate solutions. Properties Peroxydiphosphoric acid is a tetraprotic acid, with acid dissociation constants given by pKa1 ≈ −0.3, pKa2 ≈ 0.5, pKa3 = 5.2 and pKa4 = 7.6. In aqueous solution, it disproportionates upon heating to peroxymonophosphoric acid and phosphoric acid. H4P2O8 + H2O ⇌ H3PO5 + H3PO4 References Phosphorus oxoacids Mineral acids
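The pKa values quoted above determine how the acid is distributed among its five protonation states at a given pH. The following Python sketch (an illustration added here, not taken from the article) computes that speciation with the standard polyprotic-acid formula:

```python
# Sketch: fractional speciation of the tetraprotic acid H4P2O8 at a given pH,
# using the dissociation constants quoted above.
pkas = [-0.3, 0.5, 5.2, 7.6]   # pKa1..pKa4

def speciation(ph, pkas):
    """Return fractions of [H4A, H3A-, H2A2-, HA3-, A4-] at the given pH."""
    h = 10.0 ** (-ph)
    kas = [10.0 ** (-pk) for pk in pkas]
    # Weight of the species with i protons removed, relative to H4A,
    # is (Ka1 * ... * Kai) / [H+]^i.
    weights = [1.0]
    for ka in kas:
        weights.append(weights[-1] * ka / h)
    total = sum(weights)
    return [w / total for w in weights]

# At pH 7, which lies between pKa3 and pKa4, the HA3-/A4- pair dominates.
for name, frac in zip(["H4A", "H3A-", "H2A2-", "HA3-", "A4-"], speciation(7.0, pkas)):
    print(f"{name:5s} {frac:.3f}")
```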
Peroxydiphosphoric acid
[ "Chemistry" ]
359
[ "Acids", "Inorganic compounds", "Mineral acids" ]
64,200,894
https://en.wikipedia.org/wiki/Shad%20%28software%29
The Students' Education Network, with the acronym Shad, which in addition to abbreviating the program's full name echoes the Persian word shaad, meaning "happy", is a communication and educational application launched after the spread of the coronavirus kept students out of schools in Iran. The software is owned by the Ministry of Education of Iran, and its users are students, teachers and school principals. At first, from 4 April 2020, Shad ran only on top of existing messaging apps, so principals, teachers, and students needed to install one of the Bale, Soroush, Gap, iGap, or Rubica messengers, among others; on 9 April 2020, the Ministry of Education released the software in a form that did not require those messengers. About 70% of Iranian students are members of this social network. Because the education authorities emphasised installing and using this software, a significant number of students became active on this student network, estimated at more than 17 million people. According to Mohammad Mehdi Nooripour, chairman of the Student Organization Assembly, Shad software has about 800,000 daily visits. Some 13 percent of Iranian students never had an electronic device on which to install the application. History During the pandemic Shad's development was delayed, and it was temporarily replaced by televised lessons, which had many problems of their own. It was called the Social Network of Students. Products and services shaadbin.ir ("children search engine" - by Zarebin.ir) Student real-life identity authentication (15 million students) Temporary free Internet bandwidth (mobile data - some designated Iranian mobile network corporations offered SIMs) External APIs for Iranian mobile apps Use Educators at private schools are not required to install the app. Reception Simultaneously with the unveiling of the software, many students and teachers criticized it. They claimed that the software had low quality and could not make up for lost in-person instruction. In the view of some, its inefficiency stems from the fact that some students live in deprived areas and lack facilities such as computers, laptops, smartphones, and even high-speed or regular Internet. On the other hand, Mohammad Mehdi Nooripour and Majid Najafizadeh, representatives of students and teachers of Iran, thanked the Minister of Education for setting up this network at a meeting of the Student Organization. More recently, an update has added new features and improved Shad's performance. See also COVID-19 pandemic in Iran References External links https://shad.ir/ Official website Social software Educational software Android (operating system) software Mobile applications Instant messaging clients
Shad (software)
[ "Technology" ]
547
[ "Social software", "Mobile content", "Mobile technology stubs", "Instant messaging clients", "Instant messaging", "Mobile software stubs" ]
64,202,283
https://en.wikipedia.org/wiki/Higuchi%20dimension
In fractal geometry, the Higuchi dimension (or Higuchi fractal dimension (HFD)) is an approximate value for the box-counting dimension of the graph of a real-valued function or time series. This value is obtained via an algorithmic approximation, so one also speaks of the Higuchi method. It has many applications in science and engineering and has been applied to subjects like characterizing primary waves in seismograms, clinical neurophysiology and analyzing changes in the electroencephalogram in Alzheimer's disease. Formulation of the method The original formulation of the method is due to T. Higuchi. Given a time series $X(1), X(2), \ldots, X(N)$ consisting of $N$ data points and a parameter $k_{\max} \geq 2$, the Higuchi fractal dimension (HFD) of $X$ is calculated in the following way: For each $k \in \{1, \ldots, k_{\max}\}$ and $m \in \{1, \ldots, k\}$ define the length $L_m(k)$ by $$L_m(k) = \frac{N-1}{\left\lfloor \frac{N-m}{k} \right\rfloor k^2} \sum_{i=1}^{\left\lfloor \frac{N-m}{k} \right\rfloor} |X(m+ik) - X(m+(i-1)k)|.$$ The length $L(k)$ is defined as the average value of the lengths $L_1(k), \ldots, L_k(k)$, $$L(k) = \frac{1}{k} \sum_{m=1}^{k} L_m(k).$$ The slope of the best-fitting linear function through the data points $\{(\log(1/k), \log L(k)) : k = 1, \ldots, k_{\max}\}$ is defined to be the Higuchi fractal dimension of the time series $X$. Application to functions For a real-valued function $f$ on the unit interval, one can partition $[0,1]$ by $N$ equidistant points $t_i = i/N$ and apply the Higuchi algorithm to the time series $f(t_1), \ldots, f(t_N)$. This results in the Higuchi fractal dimension of the function $f$. It was shown that in this case the Higuchi method yields an approximation for the box-counting dimension of the graph of $f$, as it follows a geometrical approach (see Liehr & Massopust 2020). Robustness and stability Applications to fractional Brownian functions and the Weierstrass function reveal that the Higuchi fractal dimension can be close to the box-dimension. On the other hand, the method can be unstable in the case where the data are periodic or if subsets of it lie on a horizontal line (see Liehr & Massopust 2020). References Fractals Algorithms
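A direct implementation may make the algorithm described above concrete. The following Python sketch (added for illustration, not part of the original article) computes the HFD of a time series and checks it on white noise, whose graph has box-counting dimension close to 2:

```python
# Sketch: the Higuchi method for a 1-D time series, as formulated above.
import numpy as np

def higuchi_fd(x, k_max):
    """Estimate the Higuchi fractal dimension of time series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks, lengths = [], []
    for k in range(1, k_max + 1):
        lm = []
        for m in range(1, k + 1):
            num = (n - m) // k               # number of increments for offset m
            if num < 1:
                continue
            idx = m - 1 + np.arange(num + 1) * k        # 0-based sample indices
            dist = np.abs(np.diff(x[idx])).sum()
            lm.append(dist * (n - 1) / (num * k) / k)   # normalized length L_m(k)
        if lm:
            ks.append(k)
            lengths.append(np.mean(lm))                 # L(k): average over m
    # The HFD is the slope of log L(k) against log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(5000), k_max=16))  # expect a value near 2
```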
Higuchi dimension
[ "Mathematics" ]
383
[ "Mathematical analysis", "Functions and mappings", "Algorithms", "Mathematical logic", "Applied mathematics", "Mathematical objects", "Fractals", "Mathematical relations" ]
64,204,280
https://en.wikipedia.org/wiki/Reza%20Razavi
Reza Razavi is an Iranian professor of paediatric cardiovascular science, vice-president and vice-principal of research at King's College London, the director of research at King's Health Partners, and the director of the King's Wellcome Trust EPSRC Centre For Medical Engineering. Career Professor Razavi obtained a degree in medicine at St. Bartholomew's Medical School, Barts and The London School of Medicine and Dentistry, in 1988, and later trained in paediatrics and paediatric cardiology. He served as Head of the Division of Imaging Sciences and Biomedical Engineering between January 2007 and March 2017, as Assistant Principal (Research & Innovation) between 2015 and 2017, and as a non-executive director on the Board of Guy's and St Thomas' NHS Foundation Trust during 2016. His research focuses on cardiovascular disease, using imaging and biomedical engineering. It includes, but is not limited to, cardiac magnetic resonance imaging (MRI) concerning congenital heart disease, electrophysiology and heart failure, image-guided intervention, X-ray and MRI guided cardiac catheterisation, and methodological advances for faster cardiac imaging. He, along with his group, performed the first MRI-guided cardiac catheterisation in humans, helped to establish the Trust's cardiovascular MRI service, and developed the world's first cardiovascular MRI cardiac catheterisation programme. Memberships 2001–Present: Society of Cardiac MR Congenital Heart Disease Committee. 2001–Present: The British Society of Cardiac MR, affiliated to the British Cardiac Society. He is a past chair of the British Society for Cardiovascular MRI. Present: Governance Committee and Academic Board member at King's College London. Journal services 2001–Present: Heart 2001–Present: European Heart Journal 2001–Present: Circulation (Baltimore) Publications Professor Razavi published his first research paper in 2000, titled "Pulmonary arterial thrombosis in a neonate with homozygous deficiency of antithrombin III: successful outcome following pulmonary thrombectomy and infusions of antithrombin III concentrate". His most cited article is "Percutaneous pulmonary valve implantation in humans: results in 59 consecutive patients", with over 450 citations. To date, Razavi has published over 300 documents with more than 8,600 citations recorded by Scopus, and over 20,000 citations recorded by Google. References Medical researchers Academics of King's College London Magnetic resonance imaging Year of birth missing (living people) Living people Alumni of the Medical College of St Bartholomew's Hospital 20th-century British medical doctors 21st-century British medical doctors British medical researchers
Reza Razavi
[ "Chemistry" ]
552
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
64,204,284
https://en.wikipedia.org/wiki/Adrian%20Iovi%C8%9B%C4%83
Adrian Ioviță (born 28 June 1954) is a Romanian-Canadian mathematician, specializing in arithmetic algebraic geometry and p-adic cohomology theories. Education Born in Timișoara, Romania, Ioviță received his undergraduate degree in mathematics from the University of Bucharest in 1978. He worked as a researcher at the Institute of Mathematics of the Romanian Academy, obtaining a Ph.D. degree in 1991 from the University of Bucharest with the thesis On local classfield theory, written under the direction of Nicolae Popescu. He received a second doctorate in mathematics from Boston University in 1996. His doctoral thesis there, supervised by Glenn H. Stevens, is titled p-adic Cohomology of Abelian Varieties. Career As a postdoc from 1996 to 1998 in Montreal, he was at McGill University and Concordia University. From 1998 to 2003 he was an assistant professor at the University of Washington. Since 2003 he has been a full professor at Concordia University. He has also held a permanent position at the University of Padua, as well as positions in Paris, Münster, Jerusalem, and Nottingham. Awards In 2008 Ioviță received the Ribenboim Prize. In 2018 he was an invited speaker, together with Vincent Pilloni and Fabrizio Andreatta, at the International Congress of Mathematicians in Rio de Janeiro, with the talk p-adic variation of automorphic sheaves (delivered by Pilloni). Selected publications References 20th-century Romanian mathematicians 21st-century Romanian mathematicians Algebraic geometers University of Bucharest alumni Boston University Graduate School of Arts & Sciences alumni University of Washington faculty Academic staff of Concordia University Living people Romanian emigrants to Canada McGill University people Number theorists 1954 births Scientists from Timișoara
Adrian Ioviță
[ "Mathematics" ]
333
[ "Number theorists", "Number theory" ]
64,206,308
https://en.wikipedia.org/wiki/Steve%20Gottlieb%20%28amateur%20astronomer%29
Steven Michael Gottlieb (born April 4, 1949) is an American amateur astronomer, researcher, writer and lecturer. Biography Gottlieb grew up in the Los Angeles area, later moving to Northern California. In 1973, he earned a master's degree in mathematics at the University of California, Berkeley. Settling in the town of Albany, he taught high school mathematics in the East Bay for 37 years. Amateur astronomy Gottlieb began systematically observing Messier objects in 1977, using a 6-inch reflecting telescope. He employed many different scopes over the years, observing from dark sky sites near the San Francisco Bay Area, the Sierra Nevada foothills and star party events in California and elsewhere. By 2017, he had logged all 7,840 entries of the NGC Catalogue, completing the list after several visits to the southern hemisphere. His resulting compendium of observing reports has become a valuable resource for amateur astronomers. Gottlieb describes himself as a "hardcore visual observer", having never developed an interest in astrophotography. For him, "it's always been about the aesthetics at the eyepiece in a large scope". Currently his main telescope is a 24-inch StarStructure Dobsonian with a computerized GoTo system. As of this writing, Gottlieb is the only known person to have visually observed all of the valid NGC objects. NGC/IC Project As Gottlieb's interests developed, he carried out research at the nearby UC Berkeley astronomy library, comparing his observations with those of professionals and with the Palomar Observatory Sky Survey. In doing so, he discovered numerous errors and conflicting data, and began corresponding with other astronomers, including Dr. Harold Corwin of the University of Texas. Gottlieb thus became one of the principal investigators of the NGC/IC Project, a collaboration among professional and amateur astronomers to identify and image objects, compile historical observations and correct mistakes in the NGC and IC catalogues. While helping to put the catalogues in order, he also worked with various telescope makers to correct the databases of computerized DSCs (digital setting circles) and GoTo systems. Later he gathered the list of objects and wrote descriptions for the "DeepMap 600", a popular folding star chart. Astronomy writer, public lecturer In the 1980s Gottlieb began writing articles for astronomy magazines about observing galaxy groups, various types of nebulae, supernova remnants and other topics. He is a contributing editor for Sky and Telescope magazine, and his observing articles are often featured in the "Going Deep" column. Gottlieb promotes visual observing through public lectures for astronomy and science groups in Northern California and elsewhere. References External links NGC/IC Project Adventures in Deep Space Steve Gottlieb's NGC Notes 1949 births Living people American astronomers Amateur astronomers University of California alumni People from Los Angeles
Steve Gottlieb (amateur astronomer)
[ "Astronomy" ]
576
[ "Astronomers", "Amateur astronomers" ]
64,208,595
https://en.wikipedia.org/wiki/GE/PAC%204000
The GE/PAC 4000 computer systems are an obsolete line of computers manufactured by General Electric in Phoenix, Arizona beginning in the 1960s. PAC is short for Process Automation Computer, indicating the intended use of the systems for process control. All 4000 systems are 24-bit, using fixed-point binary data, with between 1020 and 65,536 words of magnetic core memory, and a magnetic drum memory with 8192 to 262,144 word capacity. The CPU logic is implemented with discrete transistors. The systems can be configured with a wide variety of analog and digital inputs and outputs. The 4020 is the low-end model of the system. Three models of the 4000, the 4040, 4050, and 4060, differ in storage speed — 5 μsec, 3.4 μsec, and 1.7 and 2.38 μsec respectively — and in the implementation of a serial arithmetic unit on the 4040 versus a parallel one on the other systems. Software The operating system for the 4000 series is called "G-E-MONITOR", a "skeleton real-time system program." "Several versions of MONITOR are available, each tailored to the needs of a specific industry or process." Other software included Process Assembler Language (PAL), FORTRAN II, and Tabular Sequence Control (TASC). A set of memory load, dump, and change routines was provided. Applications A product brochure highlighted potential uses in the utility industry, food processing, manufacturing, the metal and chemical industry, paper and cement manufacturing, and petroleum. References Transistorized computers General Electric
GE/PAC 4000
[ "Technology" ]
329
[ "Computing stubs", "Computer hardware stubs" ]
64,208,634
https://en.wikipedia.org/wiki/Glossary%20of%20functional%20analysis
This is a glossary for the terminology in the mathematical field of functional analysis. Throughout the article, unless stated otherwise, the base field of a vector space is the field of real numbers or that of complex numbers. Algebras are not assumed to be unital. See also: List of Banach spaces, glossary of real and complex analysis. References Bourbaki, Espaces vectoriels topologiques M. Takesaki, Theory of Operator Algebras I, Springer, 2001, 2nd printing of the first edition 1979. Further reading Antony Wassermann's lecture notes at http://iml.univ-mrs.fr/~wasserm/ Jacob Lurie's lecture notes on von Neumann algebras at https://www.math.ias.edu/~lurie/261y.html https://mathoverflow.net/questions/408415/takesaki-theorem-2-6 Functional analysis
Glossary of functional analysis
[ "Mathematics" ]
229
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
64,208,738
https://en.wikipedia.org/wiki/Developmental%20selection
Developmental selection is selection that occurs on developmental units in an organism, such as cell lineages, embryos, and gametes or gametophytes. Generally, developmental selection is differentiated from natural selection because the targets of selection are internal to the organism containing the developmental units, rather than external environmental factors that favor specific phenotypes. However, in animals, developmental selection against offspring can manifest in the external environment, in which parents might select against offspring with developmental instabilities, or offspring with deleterious malformations may not survive. Developmental selection in plants Selective embryo abortion A common form of developmental selection in plants is selective ovule abortion, where the maternal parent selects against unstable embryos. Abortion of low-viability offspring may be driven by either genetic factors or environmental stress. Developmental selection may also occur as the loss of embryos through expressed mutations in developing embryos that prevent them from surviving, or from competition for maternal resources among the developing embryos. Gametophytic selection Developmental selection during the haploid life stage of plants may occur through gametophytic selection due to pollen competition and self-incompatibility. Gametophytic selection occurs when a large amount of pollen is deposited on the stigma, and may proceed either by pollen competition or by the maternal plant inhibiting self-pollen or pollen from other species. Cell lineage selection Developmental selection can also occur as cell lineage selection due to differential growth rates among germ cell lineages in the apical meristem. In cell lineage selection, favorable mutations that arise in the meristem of plants are selected for and proliferate to become the dominant cells comprising the tip of the meristem, while deleterious mutations are selected against. This kind of selection can help to remove low-fitness meiotic and somatic mutations from populations of plants. This selection is analogous to somatic evolution in cancer. Developmental selection in animals Developmental selection can also occur in animals. As with pollen competition, sperm are often produced in excess compared to the number of available eggs that can be fertilized. Thus, sperm competition displays developmental selection by selecting against gametes with morphologies that inhibit their success in fertilization. Poorly developed sperm can also be produced by environmental stressors that cause improper development in organisms. Developmental selection may also occur in the living offspring of animals. This tends to occur as malformations in developing offspring that inhibit their survival. Malformed or otherwise abnormally developed offspring may be selected against by the parents. For example, in house mice, newly born pups are eaten by the mother if they do not squeak or cry out when the mother eats the umbilical cord connected to the pup. References External links Evolution of animals Evolution of plants
Developmental selection
[ "Biology" ]
564
[ "Evolution of plants", "Animals", "Plants", "Evolution of animals" ]
64,208,794
https://en.wikipedia.org/wiki/Macroscope%20%28science%20concept%29
In science, the concept of a macroscope is the antithesis of the microscope, namely a method, technique or system appropriate to the study of very large objects or very complex processes, for example the Earth and its contents, or conceptually, the Universe. Obviously, a single system or instrument does not presently exist that could fulfil this function; however, the concept may be approached by some current or future combination of existing observational systems. The term "macroscope" has also been applied to a method or compendium which can view some more specific aspect of global scientific phenomena in its entirety, such as all plant life, specific ecological processes, or all life on earth. The term has also been used in the humanities, as a generic label for tools which permit an overview of various other forms of "big data". As discussed here, the concept of a "macroscope" differs in essence from that of the macroscopic scale, which simply takes over from where the microscopic scale leaves off, covering all objects large enough to be visible to the unaided eye, as well as from macro photography, which is the imaging of specimens at magnifications greater than their original size, and for which a specialised microscope-related instrument known as a "Macroscope" has previously been marketed. For some workers, one or more (planetary scale) "macroscopes" can already be constructed, to access the sum of relevant existing observations, while for others, deficiencies in current sampling regimes and/or data availability point to additional sampling effort and deployment of new methodologies being required before a true "macroscope" view of Earth can be obtained. History of the concept The term "macroscope" is generally credited as being introduced into scientific usage by the ecologist Howard T. Odum in 1971, who employed it, in contrast to the microscope (which shows small objects in great detail), to represent a kind of "detail eliminator" which thus permits a better overview of ecological systems for simplified modelling and, potentially, management (Odum, 1971, figure 10). The ecologist James Brown equated the field of macroecology with the process of looking "at the living world through a macroscope rather than through a microscope, and as a result it sees different things than are revealed by most ecological studies ... As I began to look through the macroscope, however, I found that it gave me a view of the ecological world that neither my experiments at one study site nor my nonmanipulative comparative studies at a necessarily limited number of field sites could provide." Some authors, such as Hidefumi Imura, continue to use the term as more-or-less synonymous with an overview or large scale pattern analysis of data in their field. Other prominent authors and speakers who have utilized "macroscope" terminology for "big picture" views in their particular areas of interest include Jesse H. Ausubel and John Thackara. In actuality, the term (in the present sense of a "larger view" of a subject than can be obtained by any single conventional action) pre-dates its use in Odum's work, being found for example in a book by Philip Bagby entitled "Culture and History: Prolegomena to the Comparative Study of Civilizations" published in 1959, who wrote, "[Someone should] invent a 'macroscope', an instrument which would ensure that the historian see only the larger aspects of history and blind him to the individual details", and also by W.H. Hargreaves and K.H.
Blacker, who wrote in 1966 in the journal Psychiatric Services: "The advent of the electronic digital computer is causing a revolution in the behavioral sciences comparable to the impact the microscope had on biology. Like the microscope, the computer provides a view that is beyond the capability of the naked eye. The computer is being used as a "macroscope," which enables us to perceive relationships based on larger patterns of information than we are otherwise able to integrate." Slightly earlier still, in the area of geography, in a 1957 article entitled "Geographer's Quest" for the Centennial Review of Arts & Science, Lawrence M. Sommers and Clarence L. Vinge wrote: "What do we see? What are the inter-relationships that exist among the observed features? The near-views can, by means of mapping, be resolved with over-the-horizon views, and the map becomes a "macroscope" to help us understand the spatial organization of the Earth's phenomena.", while in a 1951 United States Department of Agriculture Appropriation Bill, discussing a recently passed Forestry Management Act, Perry H. Merrill, State Forester of Vermont, is reported as saying: "Through [this Act] I feel that we have made a great headway ... instead of looking through a microscope, maybe we can look through a "macroscope", if you want to call it such." The term was (re-)presented as new (Odum's prior use was mentioned in a footnote) by the French scientific thinker Joël de Rosnay, who wrote a detailed book explaining his concept in 1975: "We need, then, a new instrument. The microscope and the telescope have been valuable in gathering the scientific knowledge of the universe. Now a new tool is needed by all those who would try to understand and direct effectively their action in this world, whether they are responsible for major decisions in politics, in science, and in industry or are ordinary people as we are. I shall call this instrument the macroscope (from macro, great, and skopein, to observe)." In de Rosnay's view, the macroscope could be turned not only on the natural and physical worlds but also on human-related systems such as the growth of cities, economics, and the behaviour of humans in society. More recent workers have tended to use the term synonymously with a whole-of-Earth observational system, or portion thereof, underpinned particularly by satellite imagery derived from remote sensing, and/or by in situ observations obtained via sensor networks (see below). As an extension of its science context, the term "macroscope" has also been applied in the humanities, as a generic term for any tool permitting an overview of, and insight into, "big data" collections in that or related areas. For completeness, it should be mentioned that the concept of a "reverse microscope" is not entirely new: around 80 years earlier, the author Lewis Carroll in the second volume of his novel Sylvie and Bruno, published in 1893, described a fictional professor who includes in his lecture an instrument that will shrink an elephant to the size of a mouse, which he termed the "megaloscope". The Dutch author Kees Boeke also wrote a 1957 book, Cosmic View: The Universe in 40 Jumps, the first portion of which presents images of aspects of the Earth at ever decreasing scales and parallels the subsequent principle of the hypothetical "macroscope" at a series of zoom levels.
Interpretation and practical implementations The more practical aspect of exactly what constitutes a macroscope has varied through time and according to the interests, requirements, and field of activity of the workers concerned. Sommers and Vinge viewed the "macroscope" as an extended system of mapping to visualize the spatial relationships between items on the surface of the Earth, thus notionally prefiguring the concept of subsequently developed "seamless" geographic display systems via CD-ROMs and the world wide web along the lines of the "Atlas" facility of Microsoft Encarta, and Google Maps/Google Earth. Odum's concept was for the study of ecosystems, by integrating the results of existing methods of surveying, identifying, and classifying their contents, then eliminating fine scale detail to obtain a "big picture" view suitable for analysis and, as needed, simulation. De Rosnay viewed his "macroscope" as a systems-based viewpoint for the study of (among other things) the nature of human society, and understanding of the rationale for human actions. From around the early 2000s onwards, interest in the "macroscope" concept has steadily increased, both with the vastly improved computing power in organisations and on scientists' desktops, and with access to more extensive sets of both locally acquired and publicly available data such as Earth observations. For some recent workers such as Dornelas et al. as referenced below, the macroscope is the envisaged set of the observational tools that collectively will deliver the desired synoptic suite of observations over the relevant field of study (in their case for the marine realm, itemised in their 2019 paper as satellites, drones, camera traps, passive acoustic samplers, biologgers, environmental DNA and human observations). For others, the macroscope is already here, as a sort of "virtual instrument", with data sources such as Landsat satellite imagery providing the requisite high resolution Earth view, and/or wireless sensor networks providing a suite of local, in situ observations. In the view of IBM researchers, writing in 2017, the macroscope is the technical solution—basically within the realms of data management, data analysis and data mining—that will permit all existing earth and related observations to be integrated and queried for meaningful results. According to IBM in 2020, these "macroscope" principles were subsequently implemented as an experimental system named the "IBM PAIRS Geoscope", later re-badged as the Geospatial Analytics component within the IBM Environmental Intelligence Suite and described therein as "a platform specifically designed for massive geospatial-temporal (maps, satellite, weather, drone, IoT [="Internet of Things"]) query and analytics services". For Craig Mundie of Microsoft, the benefits of the macroscope were not only for observation of the Earth, but also of aspects of the people on it. Some 10 years later, during which time computing power and readily accessible data storage had continued to advance, Microsoft announced the planned development of its "Planetary Computer", an "approach to computing that is planetary in scale and allows us to query every aspect of environmental and nature-based solutions available in real time."
Meanwhile, from around 2010 onwards, Google had already developed a somewhat similar facility entitled "Google Earth Engine" that uses cloud computing for numerical analysis of large quantities of satellite imagery; as of 2021, the project website states that "Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities. Scientists, researchers, and developers use Earth Engine to detect changes, map trends, and quantify differences on the Earth's surface." Such initiatives can perhaps be viewed as the "high end" for ingestion of massive, global scale input datasets and associated computation. At the other end of the scale, the development of cross-platform (open) standards for the exchange of digitized geographic information by the Open Geospatial Consortium since the early 2000s has enabled researchers equipped with minimal software to request, display, overlay and otherwise interact with subsets of remote global data streams via (for example) Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS), without a requirement to hold any of the data locally, as illustrated below. This can produce a type of "macroscope" functionality at modest cost (free in the case of open source solutions such as GeoServer, MapServer and more) for displaying information of the user's choice against a range of possible base maps. Other presently available solutions of a similar nature - where the client "virtual globe" software is installed either on the user's device or runs in a web browser, and can then access either remote or locally held data layers for display over pre-prepared base maps - include NASA WorldWind and ESRI's ArcGIS Earth. In 2013–2014, the New York City Department of Health and Mental Hygiene (DOHMH) designed their own "NYC Macroscope", a surveillance system for electronic health records of New York City residents, designed to "measure health outcomes among the NYC adult population actively seeking medical care". The Indiana University School of Informatics and Computing also runs a mapping outreach program via its Cyberinfrastructure for Network Science Center entitled "Places and Spaces: Mapping Sciences", which in its 2016 program included "eight interactive macroscopes", accompanied by the following definition: "Macroscopes are software tools that help people focus on patterns in data that are too large or complex to see unaided. The world is a complex place, and macroscopes help us understand and manage that complexity. They are visual lenses we can use to see patterns and trends in large volumes of data." Another initiative that has been referred to as a "macroscope" is the Ocean Biogeographic Information System (OBIS), as described by Vanden Berghe et al. in 2012, who wrote: "Its ambition to become a 'Macroscope' (de Rosnay, 1979) for marine biodiversity will allow us to see past complexities and the idiosyncrasies of individual datasets to see the "big picture" of ocean life more clearly", a key activity for this project being the transformation of data existing previously in disparate, and sometimes inaccessible forms into a single, standardized format for ease of access and the production of summary information as desired.
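To make the OGC example above concrete, the sketch below shows what a single standards-compliant WMS "GetMap" request looks like; the server URL and layer name are placeholders invented for illustration, not services named in this article:

```python
# Sketch: fetching one rendered map image from a hypothetical WMS endpoint.
import requests

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "example:global_land_cover",  # hypothetical layer name
    "styles": "",
    "crs": "EPSG:4326",
    "bbox": "-90,-180,90,180",              # whole globe (lat/lon axis order in WMS 1.3.0)
    "width": "1024",
    "height": "512",
    "format": "image/png",
}
resp = requests.get("https://example.org/geoserver/wms", params=params, timeout=30)
resp.raise_for_status()
with open("world.png", "wb") as f:
    f.write(resp.content)   # a PNG ready to overlay on a base map
```

The same pattern, with different `request` values (GetCapabilities, GetFeatureInfo), covers much of what a lightweight "macroscope" client needs from such a service.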
A putative "macroscope" of another variety is the Global Database of Events, Language, and Tone (GDELT Project), which monitors (most of) the world's news media, creating "trillions of datapoints" and offering "realtime synthesis of global societal-scale behavior into a rich quantitative database allowing realtime monitoring and analytical exploration of those trends." According to the project's website, one of its outputs, the GDELT Global Knowledge Graph (GKG), compiles "a list of every person, organization, company, location and several million themes and thousands of emotions from every news report, using some of the most sophisticated named entity and geocoding algorithms in existence, designed specifically for the noisy and ungrammatical world that is the world's news media." In 2018, three partner agencies - the United Nations Development Program (UNDP), the United Nations Environment (UN Environment), and the Secretariat of the Convention on Biological Diversity - launched the "UN Biodiversity Lab" (UNBL) (https://unbiodiversitylab.org/), described as "enhancing access to big data for sustainable development" in the form of global spatial data on protected areas, endangered species, human impact on natural systems, watersheds for key cities, and more. Version 2.0 of the UNBL, released in October 2021, reportedly contains "over 400 spatial data layers across biodiversity, climate change, and development", also offering workspaces where national-level users can upload their own data in order to compile maps for reporting purposes and for nation-scale biodiversity planning and monitoring. Some of the differences in approach described above are easier to understand if the macroscope is interpreted as a particular instance of the "value chain of big data" (with a particular focus on earth and/or biosphere observations), which as stated in Chen et al. (2014) can be divided into four phases, namely data generation, data acquisition (aka data assembly), data storage, and data analysis. For some workers such as M. Dornelas et al., the macroscope is the sum of the data collection systems (the generation element) that will provide the content that is needed for subsequent analysis, although some mention is also made of "a series of domain-specific data registries" which would then permit the content to be discovered. For others such as OBIS, the principal effort required to construct the macroscope is the data assembly component, which then permits the integrated analysis of previously disparate datasets (OBIS data can then either be viewed by the tools supplied, or downloaded to a user's own system for additional visualization and analysis); while for facilities interested in discovering patterns in the data (and with sufficient computing power to hand), the macroscope is the suite of temporal and spatial analytical and filtering tools ("lenses" in the terminology of the Indiana University Cyberinfrastructure for Network Science Center) which can be applied once the data are assembled. Since, by analogy with the microscope, the macroscope is in essence a method of visualizing subjects too large to be seen completely in a conventional field of view, probably none of these approaches is incorrect; the differences in emphasis are complementary, in that each is capable of contributing to the resulting "virtual instrument" envisaged by this concept.
One trend that is observable, however, is that of increasing base dataset size and desired sampling density, today's "macroscopes" being built upon arrays of fine scale / high resolution data that would have been discarded as undesirable detail (obscuring the "big picture") in the original concepts of Odum and de Rosnay. Similar concepts A number of the concepts described above either reappear, or are paralleled, in the alternatively-named "Geoscope" proposal by Buckminster Fuller in 1962, which was suggested to be a giant representation of the Globe upon which "all relevant inventories of world data" could be displayed via a system of computers. Among the benefits of such a system would be: "With the Geoscope humanity would be able to recognize formerly invisible patterns and thereby to forecast and plan in vastly greater magnitude than heretofore." A similar concept re-emerged as a more concrete proposal entitled "Digital Earth" espoused by then U.S. vice president Al Gore in 1998, progress towards which was reviewed in a 2015 survey paper by Mahdavi-Amiri et al. Contrasting terminology The term macroscopic scale differs in usage from the science concept as discussed above; in essence it covers any item large enough to be seen with the unaided eye, in other words, not requiring a microscope to be visualized. Some authors also use "macroscopic" as part of a continuum of successively larger types of scale, commencing with microscopic, then macroscopic, then mesoscopic, and finally megascopic scales. By contrast, macro photography (short for macroscopic photography) is a term used to cover photographs where the subject appears magnified (greater than life size), strictly speaking at the film plane but in practice when reproduced as a print or on a screen, generally in the range of x1 to x10 magnification; while a Macroscope is also a designation for a type of optical microscope formerly marketed by the European manufacturers Wild Heerbrugg and Leica Microsystems, optimised for macro- and microphotography in the x8 to x40 magnification range; similar instruments, also under the name "Macroscopes", were also previously offered by other optical manufacturers including Bausch and Lomb, and Ednalite Research Corporation. Another use of the term "macroscope", pre-dating Odum's popularization of the science concept, occurs in the novelist Piers Anthony's 1969 science fiction book of the same name, in which his imaginary instrument is a sort of super-telescope, capable of focusing anywhere in space and time at the direction of the user, while in Jill Linz & Cindy Schwarz's 2009 children's novel Adventures in Atomville: The Macroscope, the titular instrument is a new invention by which atoms (which have identities in the book) can visualize the "outside world" for the first time. The term "macroscope" has also been employed in at least 2 instances in the names of commercial computer software products. See also Data fusion Digital Earth Earth observation Geospatial analysis Global Earth Observation System of Systems Landsat program Remote sensing Virtual globe Wireless sensor network Notes References External links "Reflections on "The Macroscope" - a tool for the 21st Century?" - Guest post by Tony Rees on R. Page's "iPhylo" blog site, 7 October 2021, plus associated discussion Google Earth Engine Microsoft Planetary Computer UN Biodiversity Lab (UNBL) Earth Natural environment Physical sciences Geographic data and information Remote sensing
Macroscope (science concept)
[ "Technology" ]
4,202
[ "Geographic data and information", "Data" ]
41,375,141
https://en.wikipedia.org/wiki/Minimum%20overlap%20problem
In number theory and set theory, the minimum overlap problem is a problem proposed by Hungarian mathematician Paul Erdős in 1955. Formal statement of the problem Let $A = \{a_i\}$ and $B = \{b_j\}$ be two complementary subsets, a splitting of the set of natural numbers $\{1, 2, \ldots, 2n\}$, such that both have the same cardinality, namely $|A| = |B| = n$. Denote by $M_k$ the number of solutions of the equation $a_i - b_j = k$, where $k$ is an integer varying between $-2n$ and $2n$. $M(n)$ is defined as: $$M(n) := \min_{A,B} \max_{k} M_k.$$ The problem is to estimate $M(n)$ when $n$ is sufficiently large. History This problem can be found amongst the problems proposed by Paul Erdős in combinatorial number theory, known by English speakers as the Minimum overlap problem. It was first formulated in the 1955 article Some remarks on number theory (in Hebrew) in Riveon Lematematica, and has become one of the classical problems described by Richard K. Guy in his book Unsolved problems in number theory. Partial results Since it was first formulated, there has been continuous progress made in the calculation of lower and upper bounds of $M(n)/n$, with the following results: J. K. Haugland showed that the limit of $M(n)/n$ exists and that it is less than 0.385694. For his research, he was awarded a prize in a young scientists competition in 1993. In 1996, he improved the upper bound to 0.38201 using a result of Peter Swinnerton-Dyer. This has now been further improved to 0.38093. In 2022, the lower bound was shown to be at least 0.379005 by E. P. White. The first known values of $M(n)$ The values of $M(n)$ for the first 15 positive integers have been computed directly; the simple closed-form pattern they appear to suggest is just the law of small numbers at work. References Additive number theory Unsolved problems in number theory
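Since the definition above is completely explicit, $M(n)$ can be computed for small $n$ by exhaustive search over all splittings. The following Python sketch (an illustration added here, not from the article) does exactly that; the search space grows as $\binom{2n}{n}$, so it is feasible only for small $n$:

```python
# Sketch: brute-force M(n) directly from the definition.
from itertools import combinations

def M(n):
    """min over splittings {A, B} of {1..2n} with |A| = n of
    max over k of the number of solutions of a - b = k."""
    universe = set(range(1, 2 * n + 1))
    best = None
    for a_tuple in combinations(sorted(universe), n):
        a = set(a_tuple)
        b = universe - a
        counts = {}
        for x in a:
            for y in b:
                counts[x - y] = counts.get(x - y, 0) + 1
        m_k = max(counts.values())
        if best is None or m_k < best:
            best = m_k
    return best

for n in range(1, 6):
    print(n, M(n))
```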
Minimum overlap problem
[ "Mathematics" ]
351
[ "Unsolved problems in mathematics", "Mathematical problems", "Unsolved problems in number theory", "Number theory" ]
41,375,311
https://en.wikipedia.org/wiki/Volkenborn%20integral
In mathematics, in the field of p-adic analysis, the Volkenborn integral is a method of integration for p-adic functions. Definition Let $f : \mathbb{Z}_p \to \mathbb{Q}_p$ be a function from the p-adic integers taking values in the p-adic numbers. The Volkenborn integral is defined by the limit, if it exists: $$\int_{\mathbb{Z}_p} f(x)\,\mathrm{d}x = \lim_{n \to \infty} \frac{1}{p^n} \sum_{x=0}^{p^n - 1} f(x).$$ More generally, if $K \subseteq \mathbb{Q}_p$ is compact open and $R_n$ denotes the set of p-adic numbers of the form $\sum_{i=r}^{n-1} b_i p^i$ with digits $b_i \in \{0, \ldots, p-1\}$, then $$\int_{K} f(x)\,\mathrm{d}x = \lim_{n \to \infty} \frac{1}{p^n} \sum_{x \in R_n \cap K} f(x).$$ This integral was defined by Arnt Volkenborn. Examples $$\int_{\mathbb{Z}_p} 1\,\mathrm{d}x = 1, \qquad \int_{\mathbb{Z}_p} x\,\mathrm{d}x = -\frac{1}{2}, \qquad \int_{\mathbb{Z}_p} x^2\,\mathrm{d}x = \frac{1}{6}, \qquad \int_{\mathbb{Z}_p} x^k\,\mathrm{d}x = B_k,$$ where $B_k$ is the k-th Bernoulli number. The above four examples can be easily checked by direct use of the definition and Faulhaber's formula. $$\int_{\mathbb{Z}_p} (1+a)^x\,\mathrm{d}x = \frac{\log_p(1+a)}{a}, \qquad \int_{\mathbb{Z}_p} \log_p(x+u)\,\mathrm{d}x = \psi_p(u),$$ with $\log_p$ the p-adic logarithmic function and $\psi_p$ the p-adic digamma function. The last two examples can be formally checked by expanding in the Taylor series and integrating term-wise. Properties $$\int_{\mathbb{Z}_p} f(x+1)\,\mathrm{d}x = \int_{\mathbb{Z}_p} f(x)\,\mathrm{d}x + f'(0)$$ From this it follows that the Volkenborn integral is not translation invariant. If $m \in \mathbb{N}$ then $$\int_{\mathbb{Z}_p} f(x+m)\,\mathrm{d}x = \int_{\mathbb{Z}_p} f(x)\,\mathrm{d}x + \sum_{x=0}^{m-1} f'(x).$$ See also P-adic distribution References Arnt Volkenborn: Ein p-adisches Integral und seine Anwendungen I. In: Manuscripta Mathematica. Bd. 7, Nr. 4, 1972. Arnt Volkenborn: Ein p-adisches Integral und seine Anwendungen II. In: Manuscripta Mathematica. Bd. 12, Nr. 1, 1974. Henri Cohen, "Number Theory", Volume II, page 276. Integrals
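The defining limit can be explored numerically with exact rational arithmetic. The sketch below (an added illustration, not part of the article) computes the partial sums for $f(x) = x^k$ and checks that they approach the Bernoulli number $B_k$ p-adically, i.e. that the p-adic valuation of the difference grows with $n$:

```python
# Sketch: p-adic convergence of (1/p^n) * sum_{x=0}^{p^n-1} x^k to B_k.
from fractions import Fraction
from math import comb

def bernoulli(k):
    """B_k (with B_1 = -1/2) via sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m."""
    b = [Fraction(1)]
    for m in range(1, k + 1):
        s = sum(Fraction(comb(m + 1, j)) * b[j] for j in range(m))
        b.append(-s / (m + 1))
    return b[k]

def vp(q, p):
    """p-adic valuation of a nonzero rational q."""
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p, k = 3, 4
for n in range(1, 7):
    s_n = Fraction(sum(x ** k for x in range(p ** n)), p ** n)
    diff = s_n - bernoulli(k)
    print(n, "exact" if diff == 0 else vp(diff, p))  # valuation should grow with n
```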
Volkenborn integral
[ "Mathematics" ]
285
[ "Mathematical analysis", "Mathematical analysis stubs" ]
41,375,522
https://en.wikipedia.org/wiki/C3H5I
The molecular formula C3H5I may refer to: Allyl iodide Iodocyclopropane
C3H5I
[ "Chemistry" ]
39
[ "Isomerism", "Set index articles on molecular formulas" ]
41,378,045
https://en.wikipedia.org/wiki/Reproductive%20Toxicology%20%28journal%29
Reproductive Toxicology is a peer-reviewed journal published bimonthly by Elsevier which focuses on the effects of toxic substances on the reproductive system. The journal was established in 1987 and is affiliated with the European Teratology Society. According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.3. References Toxicology journals Elsevier academic journals Academic journals established in 1987 Bimonthly journals English-language journals
Reproductive Toxicology (journal)
[ "Environmental_science" ]
89
[ "Toxicology journals", "Toxicology" ]
41,378,229
https://en.wikipedia.org/wiki/Ichthyander%20Project
The Ichthyander Project was the first project involving underwater habitats in the Soviet Union, in the 1960s. Inspired by reports of experiments with underwater habitats abroad (in particular, Jacques Cousteau's Conshelf), the members of the amateur diving club "Ichthyander" in Donetsk embarked on a project of their own at a site by Tarkhankut Cape, Crimea. The name is taken from the name of the protagonist of the Soviet film Amphibian Man. In August 1966, in the first, purely amateur experiment, Ichthyander-66, a person spent three days continuously underwater. After newspaper coverage, the experiment attracted the attention of authorities and scientists, and during Ichthyander-67 the habitat operated for two weeks. After Ichthyander-68 and Ichthyander-70, and after unsuccessful attempts to elevate the project to a professional level with state support, it was discontinued. A 1968 Soviet popular science book Homo Aquaticus writes: "It so happened that after the 1967 expedition, an order was issued to dissolve the club". Ichthyander-68 was carried out during a short-lived attempt by the members of the club to attach themselves to the Mining Science-Technical Society (Горное научно-техническое общество) to start research in underwater geodesy and drilling. A memorial marker (a stone with a plaque and steel slabs) exists at the site. This project preceded and catalyzed several other early Soviet experiments with underwater habitats, such as Sadko (autumn 1966), Chernomor and Sprut. See also Ichthyander (disambiguation) References Further reading "Aquanautics in Ukraine" (АКВАНАВТИКА В УКРАИНЕ), "Aqua Magazine", 2003, #2, a section about Ichthyander Coastal construction Pressure vessels Science and technology in the Soviet Union Tarkhankut
Ichthyander Project
[ "Physics", "Chemistry", "Engineering" ]
415
[ "Structural engineering", "Chemical equipment", "Physical systems", "Construction", "Coastal construction", "Hydraulics", "Pressure vessels" ]
41,378,784
https://en.wikipedia.org/wiki/Immunodominance
Immunodominance is the immunological phenomenon in which immune responses are mounted against only a few of the antigenic peptides out of the many produced. That is, despite multiple allelic variations of MHC molecules and multiple peptides presented on antigen presenting cells, the immune response is skewed to only specific combinations of the two. Immunodominance is evident for both antibody-mediated immunity and cell-mediated immunity. Epitopes that are not targeted, or targeted to a lower degree, during an immune response are known as subdominant epitopes. The impact of immunodominance is immunodomination, in which immunodominant epitopes curtail immune responses against non-dominant epitopes. Antigen-presenting cells, such as dendritic cells, can have up to six different types of MHC molecules for antigen presentation. There is a potential for generation of hundreds to thousands of different peptides from the proteins of pathogens. Yet, the effector cell population that is reactive against the pathogen is dominated by cells that recognize only a certain class of MHC bound to only certain pathogen-derived peptides presented by that MHC class. Antigens from a particular pathogen can be of variable immunogenicity, with the antigen that stimulates the strongest response being the immunodominant one. The different levels of immunogenicity amongst antigens form what is known as a dominance hierarchy. Mechanism CTL immunodominance The mechanisms of immunodominance are very poorly understood. A number of factors, many of them debated, can determine cytotoxic T lymphocyte (CTL) immunodominance. One proposal in particular focuses on the timing of CTL clonal expansion: the dominant CTLs that arise were activated sooner and therefore proliferate faster than subdominant CTLs that were activated later, resulting in a greater number of CTLs for the immunodominant epitope. This is consistent with an additional theory, which states that immunodominance may depend on the affinity of the T-cell receptor (TCR) for the immunodominant epitope. That is, T cells with a TCR that has high affinity for its antigen are most likely to be immunodominant. High affinity of the peptide to the TCR contributes to the T cell's survival and proliferation, allowing for more clonal selection of the immunodominant T cells over the subdominant T cells. Immunodominant T cells also curtail subdominant T cells by outcompeting them for cytokine sources from antigen-presenting cells. This leads to a greater expansion of the T cells that recognize a high affinity epitope and is favoured since these cells are likely to clear the infection much more quickly and effectively than their subdominant counterparts. It is important to note, however, that immunodominance is a relative term. If subdominant epitopes are introduced without the dominant epitope, the immune response will be focused on that subdominant epitope. Meanwhile, if the dominant epitope is introduced with the subdominant epitope, the immune response will be directed against the dominant epitope while silencing the response against the subdominant epitope. Antibody immunodominance The mechanism of immunodominance in B cell activation focuses on the affinity of epitope binding to the B-cell receptor (BCR). If an epitope binds very strongly to a B cell BCR, it will then subsequently bind with high affinity to the resultant antibodies produced by that B cell upon activation.
These antibodies then outcompete the BCR for the epitope, and thus that B cell lineage will be unavailable for subsequent stimulation. At the opposite end of the scale, where BCRs have low affinity for the epitopes, these B cells are outcompeted for stimulation by B cells with BCRs that have higher affinities for their respective epitopes. Insufficient T cell stimulation by these B cells also leads to their suppression by the T cells. The immunodominant epitope will therefore be the one whose BCR has a particular 'Goldilocks' level of affinity, determined by equilibrium binding affinity. This leads to an initial IgM response directed at the strongly binding epitope, and a subsequent IgG response focused on the immunodominant epitope. That is, B cells within the 'Goldilocks zone' for affinity will be available for subsequent T helper stimulation, allowing for class switching and affinity maturation, and thus resulting in immunodominance to that particular epitope. Implications Having the immune response focused on a specific immunodominant epitope is useful because it allows the strongest immune response against a certain pathogen to dominate, thus eliminating the pathogen quickly and effectively. However, it can also be a hindrance because of potential pathogen escape. In the case of HIV, immunodominance can be unfavourable because of the high mutation rate of HIV. The immunodominant epitope can be mutated in the virus, thus allowing HIV to avoid the adaptive immune response when reintroduced from latency. This is why the disease perpetuates, as the virus mutates to avoid the antibodies and T cells specific for the immunodominant epitope that is no longer expressed by the virus. Immunodominance can also have implications in cancer immunotherapy. Similar to HIV escape, cancer can escape the immune system's detection by antigenic variation: as the immunodominant epitope is mutated and/or lost in the cancer, the immune response no longer has an effective target, allowing the tumour to evade detection. Immunodominance also has implications in vaccine development. Immunodominant epitopes vary from person to person. This phenomenon is due to the variability of HLA types, which make up the MHC molecules that present the immunodominant epitopes. Therefore, people with different alleles may respond to different epitopes of the same pathogen. In vaccine development, particularly for subunit-based and recombinant vaccines, this may lead some individuals with a different HLA haplotype not to respond while others do. References Immunology
Immunodominance
[ "Biology" ]
1,351
[ "Immunology" ]
41,379,126
https://en.wikipedia.org/wiki/Soil%20defertilisation
Soil defertilisation refers to the practice of reducing soil fertility in order to reduce the number of plants that can grow on that soil. It is often done on land not intended for agriculture, such as city parks. Benefit On land not intended for agriculture, such as city parks or other communal spaces, undesired plants (weeds) can become a nuisance to the city's communal services, costing effort and money. In some cases, along with soil defertilisation, the soil's pH and water content can be altered. This may create a very different environment, allowing more specialised plants/vegetation to grow and take hold. In practice Soil defertilisation is done by growing specific cover crops (e.g., Phacelia, Sinapis alba, Lolium multiflorum) on the soil and then, instead of ploughing them under, removing them from the soil. By doing this, the nutrients that have accumulated in the crops are removed together with the crops. The crops may be used on other land that needs to be fertilised (instead of defertilised), for example, agricultural land. See also Soil fertilisation References Fertilizers Sustainable technologies
Soil defertilisation
[ "Chemistry" ]
249
[ "Fertilizers", "Soil chemistry" ]
41,379,428
https://en.wikipedia.org/wiki/G%C3%B6kt%C3%BCrk-3
Göktürk-3 is a synthetic aperture radar (SAR) Earth observation satellite that will be designed and developed under the prime contractorship of Turkish Aerospace Industries (TAI), with support from Military Electronic Industries (ASELSAN) and the TÜBİTAK Space Technologies Research Institute (TÜBİTAK UZAY), for the Turkish Ministry of National Defence. Project The project is to provide high-resolution images from any location in the world, day and night and in any weather condition, without territorial-waters or aerial-domain restrictions, to meet the requirements of the Turkish military. After the contract was signed between Turkish Aerospace Industries (TAI) and the Undersecretariat for Defence Industries (SSM) on 8 May 2013, the parties started work on the indigenous design of the satellite and ground stations. According to the announcement of the Undersecretariat of the Ministry of National Defence, the launch of Göktürk-3 was planned for the end of 2019. See also Göktürk-1 Göktürk-2 List of Earth observation satellites References External links Reconnaissance satellites of Turkey Ministry of National Defense (Turkey) Earth observation satellites of Turkey Space synthetic aperture radar 2023 in spaceflight 2023 in Turkey
Göktürk-3
[ "Astronomy" ]
243
[ "Astronomy stubs", "Spacecraft stubs" ]
41,379,508
https://en.wikipedia.org/wiki/Node%20stream
A node stream is a method of transferring large amounts of data on mobile devices or websites (such as uploading detailed photographs) by breaking the file or data down into manageable chunks. The chunks of data do not use as much computer memory, so they are less likely to slow down the device, allowing the user to do other things on it whilst waiting for the file transfer to complete. In technical terms, in Node.js a node stream is a readable or writable continuous flow of data that can be manipulated asynchronously as data comes in (or out). This API can be used in data-intensive web applications where scalability is an issue. A node stream can be many different things: a file stream, a parser, an HTTP request, a child process, etc. References External links Interactive exercises to help you understand node streams Streaming
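As an illustration of the chunked processing described above, here is a minimal sketch of consuming a large file through a Node.js readable stream, written in TypeScript (the file name and chunk size are illustrative, not from the article):

```typescript
import { createReadStream } from "fs";

// Read a hypothetical large file in 64 KiB chunks instead of loading it whole.
const stream = createReadStream("detailed-photo.jpg", { highWaterMark: 64 * 1024 });

let totalBytes = 0;
stream.on("data", (chunk) => {
  // Each chunk is handled as it arrives, so memory use stays small and the
  // event loop remains free for other work while the transfer completes.
  totalBytes += chunk.length;
});
stream.on("end", () => console.log(`transfer complete: ${totalBytes} bytes`));
stream.on("error", (err) => console.error("stream failed:", err));
```

The same readable/writable interface underlies the other stream types the article mentions (HTTP requests, parsers, child processes), which is what allows them to be piped together.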
Node stream
[ "Technology" ]
177
[ "Multimedia", "Streaming" ]
41,379,697
https://en.wikipedia.org/wiki/Daily%20light%20integral
Daily light integral (DLI) describes the number of photosynthetically active photons (individual particles of light in the 400-700 nm range) that are delivered to a specific area over a 24-hour period. This variable is particularly useful for describing the light environment of plants. Formula The equation for converting photosynthetic photon flux density (PPFD) to DLI, assuming constant PPFD, is: DLI (mol·m−2·d−1) = PPFD (μmol·m−2·s−1) × light-hours (h) × 3.6·10−3, where light-hours is the number of hours in a day during which photosynthetically active photons are delivered to the target area. Note that the factor 3.6·10−3 is due to the conversion factors coming from μmol being converted to mol and the unit of hours (from light-hours) being converted to seconds. Definition and units The daily light integral (DLI) is the number of photosynthetically active photons (photons in the PAR range) accumulated in a square meter over the course of a day. It is a function of photosynthetic light intensity and duration (day length) and is usually expressed as moles of light (mol photons) per square meter (m−2) per day (d−1), or: mol·m−2·d−1. DLI is usually calculated by measuring the photosynthetic photon flux density (PPFD) in μmol·m−2·s−1 (number of photons in the PAR range received in a square meter per second) as it changes throughout the day, and then using that to calculate the total estimated number of photons in the PAR range received over a 24-hour period for a specific area. In other words, DLI describes the sum of the per-second PPFD measurements during a 24-hour period. If the photosynthetic light intensity stays the same for the entire 24-hour period, DLI in mol m−2 d−1 can be estimated from the instantaneous PPFD as follows: μmol m−2 s−1 multiplied by 86,400 (the number of seconds in a day) and divided by 106 (the number of μmol in a mol). Thus, 1 μmol m−2 s−1 = 0.0864 mol m−2 d−1 if light intensity stays the same for the entire 24-hour period. Rationale for using DLI In the past, biologists have used lux or energy meters to quantify light intensity. They switched to using PPFD when it was realized that the flux of photons in the 400-700 nm range is the important factor in driving the photosynthetic process. However, PPFD is usually expressed as the photon flux per second. This is a convenient time scale when measuring short-term changes in photosynthesis in gas exchange systems, but falls short when the light climate for plant growth has to be characterized: first because it does not take into account the length of the daylight period, but foremost because light intensity in the field or in glasshouses changes so much diurnally and from day to day. Scientists have tried to solve this by reporting light intensity measured at noon on one or more sunny days, but this captures the light level for only a very short period of the day. Daily light integral includes both the diurnal variation and day length, and can also be reported as a mean value per month or over an entire experiment. It has been shown to be better related to plant growth and morphology than PPFD at any moment or day length alone. Some energy meters are able to capture PPFD during an interval period such as 24 hours. Normal ranges Outdoors, DLI values vary depending on latitude, time of year, and cloud cover. Occasionally, values over 70 mol·m−2·d−1 can be reached on bright summer days at some locations. Monthly-averaged DLI values range between 20-40 in the tropics, 15-60 at 30° latitude and 1-40 at 60° latitude.
For plants growing in the shade of taller plants, such as on the forest floor, DLI may be less than 1 mol·m−2·d−1, even in summer. In greenhouses, 30-70% of the outside light will be absorbed or reflected by the glass and other greenhouse structures. DLI levels in greenhouses therefore rarely exceed 30 mol·m−2·d−1. In growth chambers, values between 10 and 30 mol·m−2·d−1 are most common. New light modules are now available for the horticultural industry, where the light intensity of the lamps used in glasshouses is regulated such that plants receive a set value of DLI, independent of outside weather conditions. Effects on plants DLI affects many plant traits. Generalised dose-response curves show that DLI particularly limits individual plant growth and functioning below 5 mol·m−2·d−1, whereas most traits approach saturation beyond a DLI of 20 mol·m−2·d−1. Although not all plants respond in the same way and different wavelengths have various effects, a range of general trends are found: Leaf anatomy High light increases leaf thickness, because of an increase in the number of cell layers within the leaf, an increase in the cell size within a cell layer, or both. The density of a leaf increases as well, and so does the leaf dry mass per area (LMA). There are also more stomata per mm2. Leaf chemical composition Taken over all species and experiments, high light does not affect the organic nitrogen concentration, but decreases the concentration of chlorophyll and minerals. It increases the concentration of starch and sugars, soluble phenolics, and also the xanthophyll/chlorophyll ratio and the chlorophyll a/b ratio. Leaf physiology While the chlorophyll concentration decreases, leaves have more leaf mass per unit leaf area, and as a result the chlorophyll content per unit leaf area is relatively unaffected. This is also true for the light absorptance of a leaf. Leaf light reflectance goes up and leaf light transmittance goes down. Per unit leaf area there is more RuBisCO and a higher photosynthetic rate under light-saturated conditions. Expressed per unit leaf dry mass, however, photosynthetic capacity decreases. Plant growth Plants growing at high light invest less of their biomass in leaves and stems, and more in roots. They grow faster, per unit leaf area (ULR) and per unit total plant mass (RGR), and therefore high-light grown plants generally have more biomass. They have shorter internodes, with more stem biomass per unit stem length, but plant height is often not strongly affected. High-light plants do show more branches or tillers. Plant reproduction High-light grown plants generally have somewhat larger seeds, but produce many more flowers, and therefore there is a large increase in seed production per plant. Sturdy plants with short internodes and many flowers are important for horticulture, and hence a minimum amount of DLI is required for marketable horticultural plants. Measuring DLI over a growing season and comparing it to results can help determine which varieties of plants will thrive in a specific location. See also Grow light Photosynthesis PI curve References Photosynthesis
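To make the constant-PPFD conversion given in the Formula section concrete, here is a minimal sketch in TypeScript (the function name is illustrative, not from any standard library):

```typescript
// DLI (mol·m⁻²·d⁻¹) from a constant PPFD (µmol·m⁻²·s⁻¹) over a light period (h).
function dailyLightIntegral(ppfdMicromol: number, lightHours: number): number {
  const secondsPerHour = 3600; // converts hours to seconds
  const molPerMicromol = 1e-6; // converts µmol to mol
  // Together these give the 3.6·10⁻³ factor quoted in the article.
  return ppfdMicromol * lightHours * secondsPerHour * molPerMicromol;
}

// Example: a constant 500 µmol·m⁻²·s⁻¹ for a 12-hour photoperiod.
console.log(dailyLightIntegral(500, 12)); // 21.6 mol·m⁻²·d⁻¹
```

Note that this assumes constant light intensity; for field measurements, the per-second PPFD readings would instead be summed over the whole 24-hour period, as described above.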
Daily light integral
[ "Chemistry", "Biology" ]
1,514
[ "Biochemistry", "Photosynthesis" ]
41,380,453
https://en.wikipedia.org/wiki/Molecular%20Biotechnology
Molecular Biotechnology is a peer-reviewed scientific journal published by Springer Science+Business Media. It publishes original research papers and review articles on the application of molecular biology to biotechnology. It was established in 1994 with John M. Walker as founding editor-in-chief; Aydin Berenjian is the current editor-in-chief. Abstracting and indexing The journal is abstracted and indexed in: Science Citation Index Expanded PubMed/MEDLINE Scopus Inspec Embase Chemical Abstracts Service CAB International Academic OneFile AGRICOLA Biological Abstracts BIOSIS Previews EI-Compendex Elsevier BIOBASE Food Science and Technology Abstracts Global Health PASCAL External links English-language journals Biotechnology journals Academic journals established in 1994 Springer Science+Business Media academic journals 1994 in biotechnology 9 times per year journals
Molecular Biotechnology
[ "Biology" ]
168
[ "Biotechnology literature", "Biotechnology journals" ]
41,381,753
https://en.wikipedia.org/wiki/Relapse%20prevention
Relapse prevention (RP) is a cognitive-behavioral approach to relapse with the goal of identifying and preventing high-risk situations in problems such as unhealthy substance use, obsessive-compulsive behavior, sexual offending, obesity, and depression. It is an important component in the treatment process for alcohol use disorder, or alcohol dependence. The founding of this model is attributed to Terence Gorski's 1986 book Staying Sober. Underlying assumptions Relapse is seen as both an outcome and a transgression in the process of behavior change. An initial setback or lapse may translate into either a return to the previous problematic behavior, known as relapse, or the individual turning again towards positive change, called prolapse. A relapse often occurs in the following stages: emotional relapse, mental relapse, and finally, physical relapse. Each stage is characterized by feelings, thoughts, and actions that ultimately lead to the individual's returning to their old behavior. Relapse is thought to be multi-determined, especially by self-efficacy, outcome expectancies, craving, motivation, coping, emotional states, and interpersonal factors. In particular, high self-efficacy, negative outcome expectancies, ready availability of coping skills following treatment, positive affect, and functional social support are expected to predict positive outcome. Craving has not historically been shown to serve as a strong predictor of relapse. Techniques Relapse prevention techniques can include having booster sessions with a therapist, being vigilant for and trying to prevent or avoid high-risk situations, and being ready to re-apply previously used therapies if a disturbance does occur. Efficacy and effectiveness Carroll et al. conducted a review of 24 other trials and concluded that RP was more effective than no treatment and was equally effective as other active treatments, such as supportive psychotherapy and interpersonal therapy, in improving substance use outcomes. Irvin and colleagues also conducted a meta-analysis of RP techniques in the treatment of alcohol, tobacco, cocaine, and polysubstance use, and upon reviewing 26 studies, concluded that RP was successful in reducing substance use and improving psychosocial adjustment. RP seemed to be most effective for individuals with alcohol problems, suggesting that certain characteristics of alcohol use are amenable to RP. Miller et al. (1996) found the GORSKI/CENAPS relapse warning signs to be a good predictor of the occurrence of relapse on the AWARE scale (r = .42, p < .001). Prevention approaches General prevention theories Some theorists, including Katie Witkiewitz and G. Alan Marlatt, borrowing ideas from systems theory, conceptualize relapse as a multidimensional, complex system. Such a nonlinear dynamical system is believed to best predict the data witnessed, which commonly includes cases where small changes introduced into the equation seem to have large effects. The model also introduces concepts of self-organization, feedback loops, timing/context effects, and the interplay between tonic and phasic processes. Rami Jumnoodoo and Patrick Coyne, in London, UK, have been working with National Health Service users and carers over the past ten years to transfer RP theory into the field of adult mental health. The model is distinctive in sustaining change by developing service users and carers as 'experts' – following RP as an educational process and graduating as Relapse Prevention Practitioners.
The work has won many national awards, been presented at many conferences, and has resulted in many publications. Terence Gorski MA has developed the CENAPS (Center for Applied Science) model for relapse prevention, including Relapse Prevention Counseling (Gorski, Counseling For Relapse Prevention, 1983) and a system for certification of Relapse Prevention Specialists (CRPS). Substance use disorder Relapse prevention is a specific intervention modality in the treatment of substance use disorder that focuses on developing skills and cognitive-behavioral techniques to help patients and their clinicians identify and manage situations that increase the risk of relapse. These situations can include both internal experiences, such as automatic thoughts related to substance use, and external cues, like people or places that are associated with substance use. In the relapse prevention model, patients and clinicians work together to develop strategies that target these high-risk situations, using both cognitive and behavioral techniques. By increasing coping skills and confidence, patients learn to handle challenging situations without turning to alcohol or drugs, thus increasing their self-efficacy. Depression For the prevention of relapse in major depressive disorder, several approaches and intervention programs have been proposed. Mindfulness-based cognitive therapy is commonly used and was found to be effective in preventing relapse, especially in patients with more pronounced residual symptoms. Another approach often used in patients who wish to taper down antidepressant medication is preventive cognitive therapy, an 8-week psychological intervention program delivered in individual or group sessions that focuses on changing dysfunctional attitudes, enhancing memories of positive experiences and helping patients to develop personal relapse prevention strategies. Preventive cognitive therapy has been found to be as effective as antidepressant medication alone in preventing a return of depressive symptoms in the long-term treatment of major depressive disorder. In combination with pharmaceuticals, it was found to be even more effective than antidepressant use alone. See also Cognitive-behavioral therapy Substance use disorder References Clinical psychology Drug rehabilitation Addiction Addiction medicine Substance-related disorders
Relapse prevention
[ "Biology" ]
1,138
[ "Behavioural sciences", "Behavior", "Clinical psychology" ]
41,382,018
https://en.wikipedia.org/wiki/Phosphorus%20selenide
Phosphorus selenides are a relatively obscure group of compounds. There have been some studies of the phosphorus–selenium phase diagram, and glassy amorphous phases have been reported. The compounds that have been reported are shown below. While some phosphorus selenides are similar to their sulfide analogues, there are some new forms, both molecular and polymeric catena- types. There is also some doubt about the existence of one molecular form. Crystallographically confirmed compounds One molecular form has a norbornane-like structure with two phosphorus atoms in oxidation state +3 bridged by two diselenide units (analogous to disulfide units) and one selenide unit. It was isolated by solvent extraction from an amorphous phase made from the elements. Another has been characterised crystallographically and has the same structure as the low-temperature form of its sulfide analogue. It can be prepared from the elements; one preparation is to extract and recrystallise using tetralin. A third molecule shares the structure of its analogues and was prepared by reaction with bromine. catena- This compound consists of polymeric chains of norbornane-like units joined by Se atoms. As each P atom in the repeat unit is bonded to another P atom and to two Se atoms, each P atom has a formal oxidation state of +2. Compounds confirmed spectroscopically One compound has two crystalline forms, an α- form and a β- form, each with the same molecular structure as the corresponding form of its sulfide analogue. A fully characterised compound containing this unit has a β- structure. Another compound has been reported to have the same structure as its analogues, though one well-known textbook does not mention it at all. A further molecular form has been reported to share the same structure as its analogues, but one well-known textbook does not mention it at all, and a review (2001) examining P-Se amorphous phases did not confirm its presence in molecular form. The isoelectronic anion which has the adamantane-like structure is known; an example is the sodium salt. Other compounds have also been reported. Phosphorus–selenium glasses Phosphorus–selenium glasses have been examined using 31P-NMR and Raman spectroscopy. Glasses are formed over the range of compositions 0 < x < 0.8, with a small window around 0.52–0.60, centred on 0.57 (corresponding to a stoichiometric compound), where there is a tendency to crystallise. For x < 0.47 the glasses contain chain fragments, pyramidal P units (P oxidation state +3), quasi-tetrahedral P units (P oxidation state +5, with a P=Se double bond) and units with P in a formal oxidation state of +4. There is no evidence for an amorphous phase containing the molecular form. References Inorganic phosphorus compounds Selenides
Phosphorus selenide
[ "Chemistry" ]
569
[ "Inorganic phosphorus compounds", "Inorganic compounds" ]
41,382,443
https://en.wikipedia.org/wiki/Ishak%20Efendi
Hoca Ishak Efendi (c. 1774 – 1835) was an Ottoman mathematician and engineer. Life Ishak Efendi was born in Arta (now in Greece), probably in 1774, to a Jewish family. His father had converted to Islam. After his father died, he went to Constantinople, where he studied mathematics and foreign languages, learning French, Latin, Greek and Hebrew alongside Turkish, Arabic and Persian. As part of Sultan Mahmud II's attempts to modernize the Ottoman Empire, in 1815 or 1816 he was appointed instructor (hence his title of hoca, "teacher") at the Imperial School of Military Engineering (predecessor of the Istanbul Technical University). In July 1824 he was also named as interpreter (dragoman) to the Sublime Porte in succession to his predecessor, a post he held until 1828/9, when he was dismissed, possibly due to fears by the secretary of state Pertev Pasha that he might replace him. During the Russo-Turkish War of 1828-29, Ishak Efendi spent some time supervising the construction of fortresses, before resuming his teaching post at the Imperial School of Military Engineering, where he rose to become Head Instructor in December 1830/January 1831. As Head Instructor, he tried with some success to reform the curriculum and raise the educational level of the faculty, but his influential predecessor, Seyyid Ali Pasha, managed through his connections at court to have him sent to Medina on the pretext of going to the Hajj, as well as to supervise various restorations to the holy sites there. During his return to Istanbul in 1834 or 1835, he died at Suez, where he was buried. Work His main work, the Madjmuʿa-i ʿUlum-i Riyaziyye ("Collected Works on Mathematical Sciences"), was a four-volume text published between 1831 and 1834. It contained translations, mostly of contemporary French works, on mathematics, physics, chemistry and geology, and played a crucial role in introducing many contemporary scientific concepts to the Muslim world: according to the Encyclopedia of Islam, it was "the first work in Turkish on the modern physical and natural sciences", being credited with introducing the "scientific terminology, based on Arabic, which was used in Turkey up to the 1930s and in some Arab countries still later". He also wrote a number of works, again mainly translated from European treatises, on engineering and military science. References Sources 1770s births 1835 deaths Engineers from the Ottoman Empire Translators from French People from Arta, Greece Military engineers Istanbul Technical University 19th-century writers from the Ottoman Empire 19th-century engineers Converts to Islam from Judaism 19th-century mathematicians 19th-century translators People of Jewish descent Mathematicians from the Ottoman Empire
Ishak Efendi
[ "Engineering" ]
553
[ "Military engineers", "Military engineering" ]
41,383,199
https://en.wikipedia.org/wiki/Two-photon%20circular%20dichroism
Two-photon circular dichroism (TPCD), the nonlinear counterpart of electronic circular dichroism (ECD), is defined as the difference between the two-photon absorption (TPA) cross-sections obtained using left circularly polarized light and right circularly polarized light. Background Typically, two-photon absorption (TPA) takes place at twice the wavelength of one-photon absorption (OPA). This feature allows for the TPCD-based study of chiral systems in the far- to near-ultraviolet (UV) region. ECD cannot be employed in this region due to interference from the strong linear absorption of typical buffers and solvents, and also because of the scattering exhibited by inhomogeneous samples in this region. Several other advantages are associated with the use of non-linear absorption, i.e. high spatial resolution, enhanced penetration depth, improved background discrimination and reduced photodamage to living specimens. In addition, the fact that TPA transitions obey different selection rules than OPA (even-parity vs. odd-parity) suggests that in chiral molecules ECD and TPCD should present different spectral features, thus making the two methods complementary. TPCD is very sensitive to small structural and conformational distortions of chiral molecules, and therefore is potentially useful for the fundamental study of optically active molecules. Finally, TPCD has the potential to penetrate into the far-UV region, where important structural/conformational information is typically inaccessible to ECD. This would enable the discovery of new information about molecular systems of interest such as peptides, biological macromolecules (allowing for a deeper understanding of diseases like Alzheimer's and Parkinson's) and potential candidates for negative refractive index (for the development of cloaking devices). TPCD has been applied in experiments using pump-probe, intensity-dependent multiphoton optical rotation, resonance-enhanced multiphoton ionization, and polarization modulation single-beam Z-scan. The first experimental measurement of TPCD was performed in 1995 using a fluorescence-based technique (FD-TPCD), but it was not until the introduction of the double L-scan technique in 2008 by Hernández and co-workers that a more reliable and versatile technique to perform TPCD measurements became available. Since the introduction of the double L-scan, several theoretical-experimental studies based on TPCD have been published, i.e. TPCD of asymmetric catalysts, the effect of the curvature of the π-electron delocalization on the TPCD signal, the fragmentation-recombination approach (FRA) for the study of TPCD of large molecules, and the development of an FD-TPCD based microscopy technique. Additionally, Rizzo and co-workers have reported purely theoretical works on TPCD. Theory TPCD was theoretically predicted by Tinoco and Power in 1975, and computationally implemented three decades later by Rizzo and co-workers, using DALTON and later at the CC2 level in the TURBOMOLE package. The expression for TPCD, defined as the difference between the TPA cross-sections for left and right circularly polarized light, was obtained by Tinoco in his 1975 paper as a semiclassical extension of the TPA formulae. Quantum electrodynamical equivalent expressions were obtained by Power, by Andrews and, in a series of papers, by Meath and Power, who were able to generalize the approach to the case of n photons and also considered the modifications occurring in the formulae when elliptical polarization is assumed.
TPCD can be obtained theoretically using Tinoco's equation, whose ingredients are the circular frequency of the incident radiation, the circular frequency for a given 0→f transition, the TPCD rotatory strength, a normalized lineshape, the electric constant and the speed of light in vacuum. The rotatory strength is obtained from an expression whose terms depend on the experimental relative orientation of the two incident photons; the typical double-L scan setup corresponds to two left or two right circularly polarized photons propagating parallel to each other and in the same direction. The molecular parameters entering the rotatory strength are defined as functions of three two-photon generalized tensors: one involving magnetic transition dipole matrix elements, one involving electric transition dipole matrix elements in the form of the velocity operator, and one including electric quadrupole transition matrix elements, also in the velocity formulation. Experiments Double L-scan The double L-scan is an experimental method that allows polarization-dependent TPA effects in chiral molecules to be obtained simultaneously. Performing measurements on equal "twin" pulses compensates for the energy and mode fluctuations in the sample that can mask the small TPCD signal. To briefly describe the setup, short pulses coming from the excitation source (typically an OPG or an OPA) are split into "twin" pulses (at BS2), and the polarization of the pulses is controlled individually using quarter-waveplates (WP2 and WP3), allowing simultaneous polarization-dependent measurements. The sample is held in a 1 mm quartz cuvette and the incident angle of the light coming from both arms (M2 and M3) is 45°. The two incident beams have a separation on the vertical axis of about 1 cm, to avoid interference effects. Unlike Z-scan, in the double L-scan the sample is at a fixed position and two identical focusing lenses (L2 and L3) move along the propagation axis (z axis). Calibration is required to ensure that z1 = z2 during the entire scan. See also Birefringence Chirality (chemistry) Circular dichroism Cryptochirality Geometric phase Hyper–Rayleigh scattering optical activity Levorotation and dextrorotation Polarization Polarization rotator Raman optical activity (ROA) Specific rotation References Nonlinear optics Polarization (waves)
Two-photon circular dichroism
[ "Physics" ]
1,233
[ "Polarization (waves)", "Astrophysics" ]
41,383,683
https://en.wikipedia.org/wiki/Vapour%20pressure%20thermometer
A vapour pressure thermometer is a thermometer that uses a pressure gauge to measure the vapour pressure of a liquid. References Thermometers
Vapour pressure thermometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
34
[ "Thermodynamics stubs", "Measuring instruments", "Thermodynamics", "Thermometers", "Physical chemistry stubs" ]
41,384,174
https://en.wikipedia.org/wiki/Filanesib
Filanesib (code name ARRY-520) is a kinesin spindle protein (KIF11) inhibitor which has recently been proposed as a cancer treatment, specifically for multiple myeloma. History of research In 2009, two in vitro studies on the effects of filanesib on either ovarian cancer cells or acute myeloid leukemia cells were published. The former reported that filanesib "...has similar anti-tumor activity in EOC [epithelial ovarian cancer] cells as that of paclitaxel. However, unlike paclitaxel, it does not induce these pro-tumor effects in Type I cells." The detrimental effects attributed to paclitaxel were alleged to be "...due to paclitaxel-induced enhancement of NF-κB and ERK activities, and cytokine production (e.g. IL-6), which promote chemoresistance and tumor progression." The latter study also reported promising results, concluding that filanesib "...potently induces cell cycle block and subsequent death in leukemic cells via the mitochondrial pathway and has the potential to eradicate AML [acute myeloid leukemia] progenitor cells." However, a clinical trial published in 2012 on patients with advanced myeloid leukemias found that the drug exhibited a "relative lack of clinical activity"; the trial was therefore halted before it was scheduled to end. In June 2013, preliminary results from a trial of the drug were presented at a conference of the European Hematology Association in Stockholm. On October 31, 2013, it was reported that the company which developed the drug, Array BioPharma (based in Boulder, Colorado), was planning to launch a phase III clinical trial of the drug to treat multiple myeloma. The study began in mid-2014, and paired filanesib with the proteasome inhibitor carfilzomib in several hundred patients. The study's primary endpoint was progression-free survival (i.e. the time until the cancer recurs). A previous trial had reported that 37% of patients receiving filanesib in conjunction with carfilzomib showed lower levels of paraprotein, also known as "M protein", whereas only 16% of controls (i.e. those receiving only carfilzomib) showed such a reduction. In addition, a report by the International Myeloma Working Group concluded that filanesib was "effective in monotherapy as well as in combination with dexamethasone in heavily pretreated patients." According to Jatin Shah, an assistant professor at the University of Texas MD Anderson Cancer Center, the primary adverse effect of treatment with filanesib observed in trials conducted thus far is reversible neutropenia, though it may cause reductions in other blood cell counts as well. Shah et al. have conducted a phase II study of filanesib both by itself and in combination with dexamethasone, presented at the annual meeting of the American Society of Hematology. In December 2013, further clinical trial results were presented, also at the annual meeting of the American Society of Hematology; the results concluded that 16 percent of patients who had received a median of six prior therapies responded to single-agent filanesib. In the week after this presentation, Array BioPharma's stock fell by 16%. In February 2014, a review was published by researchers from the University of Salamanca in Spain, which concluded that "...some of these novel agents [to treat multiple myeloma] seem promising, such as monoclonal antibodies (anti-CD38 — daratumumab or anti-CS1 — elotuzumab) or the kinesin protein inhibitor Arry-520."
A 2016 phase 1 dose-escalation study found that the studied dosing regimen of filanesib combined with bortezomib and dexamethasone had a favorable safety profile. The same study reported that this combination of drugs "appears to have durable activity in patients with recurrent/refractory multiple myeloma." References Experimental cancer drugs Fluoroarenes Thiadiazoles Ureas
Filanesib
[ "Chemistry" ]
892
[ "Organic compounds", "Ureas" ]
41,385,213
https://en.wikipedia.org/wiki/Integral%20of%20inverse%20functions
In mathematics, integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse f−1 of a continuous and invertible function f in terms of f−1 and an antiderivative F of f. This formula was published in 1905 by Charles-Ange Laisant. Statement of the theorem Let I1 and I2 be two intervals of ℝ. Assume that f : I1 → I2 is a continuous and invertible function. It follows from the intermediate value theorem that f is strictly monotone. Consequently, f maps intervals to intervals, so is an open map and thus a homeomorphism. Since f and the inverse function f−1 are continuous, they have antiderivatives by the fundamental theorem of calculus. Laisant proved that if F is an antiderivative of f, then the antiderivatives of f−1 are: ∫ f−1(y) dy = y f−1(y) − F(f−1(y)) + C, where C is an arbitrary real number. Note that it is not assumed that f−1 is differentiable. In his 1905 article, Laisant gave three proofs. First proof First, under the additional hypothesis that f−1 is differentiable, one may differentiate the above formula, which completes the proof immediately. Second proof His second proof was geometric. If a = f(c) and b = f(d), the theorem can be written: ∫[a,b] f−1(y) dy + ∫[c,d] f(x) dx = bd − ac. The figure on the right is a proof without words of this formula. Laisant does not discuss the hypotheses necessary to make this proof rigorous, but this can be proved if f is just assumed to be strictly monotone (but not necessarily continuous, let alone differentiable). In this case, both f and f−1 are Riemann integrable and the identity follows from a bijection between lower/upper Darboux sums of f and upper/lower Darboux sums of f−1. The antiderivative version of the theorem then follows from the fundamental theorem of calculus in the case when f is also assumed to be continuous. Third proof Laisant's third proof uses the additional hypothesis that f is differentiable. Beginning with the identity f−1(f(x)) = x, one multiplies by f′(x) and integrates both sides. The right-hand side is calculated using integration by parts to be x f(x) − F(x), and the formula follows after the substitution y = f(x). Details One may also think as follows when f is differentiable. As f is continuous at any x, its antiderivative F is differentiable at all x by the fundamental theorem of calculus. Since f is invertible, its derivative can vanish at no more than countably many points. Sort these points as x1 < x2 < x3 < ⋯. On each interval between the images of consecutive such points, the function G(y) = y f−1(y) − F(f−1(y)) is a composition of differentiable functions, so the chain rule can be applied to see that G is an antiderivative of f−1 there. One then checks that G is also differentiable at the remaining points and does not become unbounded on compact intervals, on which f−1 is continuous and bounded; by continuity and the fundamental theorem of calculus, G agrees, up to a constant, with a differentiable extension, and since G is itself continuous (being a composition of continuous functions), G′ = f−1 on the whole interval. One can now use the fundamental theorem of calculus to compute the definite integral of f−1. Nevertheless, it can be shown that this theorem holds even if f or f−1 is not differentiable: it suffices, for example, to use the Stieltjes integral in the previous argument. On the other hand, even though general monotonic functions are differentiable almost everywhere, the proof of the general formula does not follow, unless f−1 is absolutely continuous. It is also possible to check that for every y in I2 the derivative of the function y f−1(y) − F(f−1(y)) is equal to f−1(y). In other words: d/dy [y f−1(y) − F(f−1(y))] = f−1(y). To this end, it suffices to apply the mean value theorem to F between two nearby points, taking into account that f−1 is monotonic.
Examples Assume that f(x) = exp(x), hence f−1(y) = ln(y) and F(x) = exp(x). The formula above gives immediately ∫ ln(y) dy = y ln(y) − exp(ln(y)) + C = y ln(y) − y + C. Similarly, with f(x) = cos(x) and f−1(y) = arccos(y), ∫ arccos(y) dy = y arccos(y) − sin(arccos(y)) + C. With f(x) = tan(x) and f−1(y) = arctan(y), taking F(x) = −ln|cos(x)|, ∫ arctan(y) dy = y arctan(y) + ln|cos(arctan(y))| + C. History Apparently, this theorem of integration was discovered for the first time in 1905 by Charles-Ange Laisant, who "could hardly believe that this theorem is new", and hoped its use would henceforth spread out among students and teachers. This result was published independently in 1912 by an Italian engineer, Alberto Caprilli, in an opuscule entitled "Nuove formole d'integrazione". It was rediscovered in 1955 by Parker, and by a number of mathematicians following him. Nevertheless, they all assume that f or f−1 is differentiable. The general version of the theorem, free from this additional assumption, was proposed by Michael Spivak in 1965, as an exercise in his Calculus, and a fairly complete proof following the same lines was published by Eric Key in 1994. This proof relies on the very definition of the Darboux integral, and consists in showing that the upper Darboux sums of the function f are in one-to-one correspondence with the lower Darboux sums of f−1. In 2013, Michael Bensimhoun, estimating that the general theorem was still insufficiently known, gave two other proofs; the second proof, based on the Stieltjes integral and on its formulae of integration by parts and of homeomorphic change of variables, is the most suitable to establish more complex formulae. Generalization to holomorphic functions The above theorem generalizes in the obvious way to holomorphic functions: Let U and V be two open and simply connected sets of ℂ, and assume that f : U → V is a biholomorphism. Then f and f−1 have antiderivatives, and if F is an antiderivative of f, the general antiderivative of f−1 is G(z) = z f−1(z) − F(f−1(z)) + C. Because all holomorphic functions are differentiable, the proof is immediate by complex differentiation. See also Integration by parts Legendre transformation Young's inequality for products References Calculus Theorems in analysis Theorems in calculus
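As a quick sanity check of the reconstructed antiderivative formula, assuming f−1 is differentiable (the hypothesis of the first proof), differentiating with the chain rule and using F′ = f gives:

```latex
\frac{d}{dy}\left[\, y\, f^{-1}(y) - F\!\left(f^{-1}(y)\right) \right]
  = f^{-1}(y) + y\,\bigl(f^{-1}\bigr)'(y) - f\!\left(f^{-1}(y)\right)\bigl(f^{-1}\bigr)'(y)
  = f^{-1}(y),
```

since f(f−1(y)) = y, so the last two terms cancel.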
Integral of inverse functions
[ "Mathematics" ]
1,088
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Theorems in calculus", "Calculus", "Mathematical problems" ]
49,087,286
https://en.wikipedia.org/wiki/Tecno%20Mobile
Tecno Mobile is a Chinese mobile phone manufacturer based in Shenzhen, China. It was established in 2006. It is a subsidiary of Transsion Holdings. Aimed at emerging markets, Tecno has focused its business on the African, Middle Eastern, Southeast Asian, Indian, Latin American, and Eastern European markets. History In 2006, Tecno Mobile was founded as Tecno Telecom Limited; it later changed its name to Transsion Holdings, with Tecno Mobile serving as one of its subsidiaries. In 2007, Tecno created a second brand, Itel, that sold in Africa. In early 2008, Tecno focused entirely on Africa following market research, and by 2010, it was among the top three mobile phone brands in Africa. In 2016, Tecno entered the Middle East mobile phone market. In 2017, it entered the Indian market, launching its ‘Made for India’ smartphones: the ‘i’ series - i5, i5 Pro, i3, i3 Pro and i7. The company initially started in Rajasthan, Gujarat, and Punjab, and by December 2017 was spread across the country. The firm has identified other emerging markets, besides Africa and India, with large populations but low purchasing power. It entered the Bangladesh and Nepal markets in 2017 and started trial sales in Pakistan. It is still trying to penetrate the Pakistani market and has started selling online through various e-commerce channels, including its own website. Product manufacturing The Tecno mobile phones sold in India are assembled in their manufacturing facility in Noida, Uttar Pradesh. List of products Tecno Camon 12 Tecno Camon 15 Tecno Camon 17 Tecno Camon 18 Tecno Camon 19 Tecno Camon 20 Tecno Camon 20 Pro Tecno Camon 20 Premier Tecno Camon 30 Tecno Camon 30s Tecno Camon 30s Pro Tecno Camon 30 Pro Tecno Camon 30 Premier Tecno Phantom V Flip Tecno Phantom V Flip 2 Tecno Phantom X Tecno Phantom X2 Tecno Phantom X2 Pro Tecno Phantom V Fold Tecno Phantom V Fold 2 Tecno Pouvoir 4 Tecno Spark 4 Tecno Spark 7 Tecno Spark 8 Tecno Spark 9 Tecno Spark 10 Tecno Spark 20 Tecno Spark 20C Tecno Spark 20 Pro Tecno Spark 20 Pro+ Tecno Spark 30C Tecno Spark 30C 5G Tecno Spark 30 Tecno Spark 30 Pro Tecno Pova 2 Tecno Pova 4 Tecno Pova 4 Pro Tecno Pova Neo 3 Tecno Pova 5 Tecno Pova 5 Pro 5G Tecno Pova 6 Tecno Pova 6 Neo Tecno Pova 6 Neo 5G Tecno Pova 6 Pro 5G Tecno Go 1 References External links Computer companies of China Computer hardware companies Display technology companies Mobile phone companies of China Telecommunication equipment companies of China Mobile phone manufacturers Networking hardware companies Multinational companies headquartered in China Manufacturing companies based in Shenzhen Electronics companies established in 2006 Manufacturing companies established in 2006 Chinese companies established in 2006 Privately held companies of China Chinese brands Transsion
Tecno Mobile
[ "Technology" ]
699
[ "Computer hardware companies", "Computers" ]
49,087,369
https://en.wikipedia.org/wiki/Advanced%20airway
An advanced airway includes: an endotracheal tube, or a supraglottic airway such as the laryngeal mask airway, the Combitube, or the King LT. References Medical equipment Broad-concept articles
Advanced airway
[ "Biology" ]
41
[ "Medical equipment", "Medical technology" ]
49,088,255
https://en.wikipedia.org/wiki/Arnold%E2%80%93Beltrami%E2%80%93Childress%20flow
The Arnold–Beltrami–Childress (ABC) flow or Gromeka–Arnold–Beltrami–Childress (GABC) flow is a three-dimensional incompressible velocity field which is an exact solution of Euler's equation. Its representation in Cartesian coordinates is the following: ẋ = A sin z + C cos y, ẏ = B sin x + A cos z, ż = C sin y + B cos x, where (ẋ, ẏ, ż) is the material derivative of the Lagrangian motion of a fluid parcel located at (x(t), y(t), z(t)). This ABC flow was analyzed by Dombre et al. 1986, who gave it the name A-B-C because this example was independently introduced by Arnold (1965) and Childress (1970) as an interesting class of Beltrami flows. For some values of the parameters, e.g., A = B = 0, this flow is very simple because particle trajectories are helical screw lines. For some other values of the parameters, however, these flows are ergodic and particle trajectories are everywhere dense. The last result is a counterexample to some statements in traditional textbooks on fluid mechanics that vortex lines are either closed or cannot end in the fluid. That is, because for the ABC flows we have ∇ × v = v, vortex lines coincide with the particle trajectories, and they too are everywhere dense for some values of the parameters A, B, and C. It is notable as a simple example of a fluid flow that can have chaotic trajectories. It is named after Vladimir Arnold, Eugenio Beltrami, and Stephen Childress. Ippolit S. Gromeka's (1881) name has been historically neglected, though much of the discussion was done by him first. See also Beltrami flow References V. I. Arnold. "Sur la topologie des ecoulements stationnaires des fluides parfaits". C. R. Acad. Sci. Paris, 261:17–20, 1965. Chaos theory Fluid dynamics Differential equations
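The chaotic particle trajectories described above can be explored numerically by integrating the ODE system ẋ = v(x). A minimal sketch in TypeScript using a fixed-step fourth-order Runge–Kutta scheme (the initial point and step size are illustrative choices; A = B = C = 1 is a commonly studied chaotic case):

```typescript
type Vec3 = [number, number, number];

const A = 1, B = 1, C = 1;

// Right-hand side of dx/dt = v(x) for the ABC velocity field.
function abc([x, y, z]: Vec3): Vec3 {
  return [
    A * Math.sin(z) + C * Math.cos(y),
    B * Math.sin(x) + A * Math.cos(z),
    C * Math.sin(y) + B * Math.cos(x),
  ];
}

// One classical fourth-order Runge-Kutta step of size dt.
function rk4Step(p: Vec3, dt: number): Vec3 {
  const add = (u: Vec3, v: Vec3, s: number): Vec3 =>
    [u[0] + s * v[0], u[1] + s * v[1], u[2] + s * v[2]];
  const k1 = abc(p);
  const k2 = abc(add(p, k1, dt / 2));
  const k3 = abc(add(p, k2, dt / 2));
  const k4 = abc(add(p, k3, dt));
  return [0, 1, 2].map(
    (i) => p[i] + (dt / 6) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]),
  ) as Vec3;
}

// Trace one trajectory; for chaotic parameter values, nearby starting
// points separate rapidly and the trajectory wanders densely.
let p: Vec3 = [0.1, 0.1, 0.1];
for (let n = 0; n < 10000; n++) p = rk4Step(p, 0.01);
console.log(p);
```

Because the ABC field is a Beltrami flow (∇ × v = v), the same integration traces a vortex line as well as a particle path.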
Arnold–Beltrami–Childress flow
[ "Chemistry", "Mathematics", "Engineering" ]
397
[ "Chemical engineering", "Mathematical objects", "Differential equations", "Equations", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
49,088,528
https://en.wikipedia.org/wiki/Deep%20inspiration%20breath-hold
Deep inspiration breath-hold (DIBH) is a method of delivering radiotherapy while limiting radiation exposure to the heart and lungs. It is used primarily for treating left-sided breast cancer. The technique involves a patient holding their breath during treatment. In DIBH techniques, treatment is only delivered at certain points in the breathing cycle, when the patient holds their breath. Since the relative positions of organs in the chest naturally change during breathing, this allows treatment to be delivered to the target (tumour) while other organs are in the optimal position to receive the least dose. Treatment methods In the DIBH technique, the patient is initially maintained at quiet tidal breathing (i.e. normal, relaxed breathing), followed by a deep inspiration, a deep expiration, a second deep inspiration, and breath-hold. At this point the patient is at approximately 100% vital capacity, and simulation, verification, and treatment take place during this phase of breath-holding. DIBH is performed with several tangential fields for left-sided breast cancer. A patient is instructed to hold the breath while viewing the breathing pattern and the breath-hold position through a head-mounted mirror, thereby ensuring reproducibility of the breath-hold position in each delivery. A pair of video goggles may also be used for monitoring the breathing cycle. Patients who cannot maintain DIBH can still benefit from lung tracking techniques, for example 4DCT. There are two basic methods of performing DIBH: free-breathing breath-hold, and spirometry-monitored deep inspiration breath-hold. Free-breathing breath-hold Free-breathing breath-hold, also known as real-time position management (RPM) DIBH, utilises an infra-red camera and markers placed on the patient to track movement of their chest and their breathing. Another device for DIBH, known as Abches, monitors the breathing pattern. With the Abches, a patient is instructed to hold the breath at a specified breathing position by viewing a breathing level indicator, thereby reproducing an identical breath-hold position. Spirometry-monitored breath-hold Spirometry-based designs are known as active breathing coordinator (ABC) DIBH systems. ABC utilises a mouthpiece for the patient which can be used to control the flow of air to provide more reproducible results. Effectiveness The DIBH technique provides an advantage over conventional free-breathing treatment by decreasing lung density, reducing normal safety margins, and enabling more accurate treatment. These improvements contribute to the effective exclusion of normal lung tissue from the high-dose region and permit the use of higher treatment doses without increased risks of toxicity. Treatment of patients with the DIBH technique is feasible in a clinical setting. With this technique, consistent lung inflation levels are achieved in patients, as judged by both spirometry and verification films. Breathing-induced tumor motion is significantly reduced using DIBH compared to free breathing, enabling better target coverage. Future research There are currently no clear selection criteria to predict which patients will benefit most from the DIBH technique, other than left breast laterality. There is evidence to suggest parasagittal cardiac contact distance is a promising metric for selection and should be assessed in all future DIBH planning studies. References Radiation therapy Radiation health effects Medical physics Radiobiology
Deep inspiration breath-hold
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
665
[ "Radiation health effects", "Applied and interdisciplinary physics", "Radiobiology", "Medical physics", "Radiation effects", "Radioactivity" ]
49,090,429
https://en.wikipedia.org/wiki/SceneKit
SceneKit, sometimes rendered Scene Kit, is a 3D graphics application programming interface (API) for Apple Inc. platforms written in Objective-C. It is a high-level framework designed to provide an easy-to-use layer over lower-level APIs like OpenGL and Metal. SceneKit maintains an object-based scene graph, along with a physics engine, particle system, and links to Core Animation and other frameworks, to easily animate that display. SceneKit views can be mixed with other views; for instance, a SpriteKit 2D display can be mapped onto the surface of an object in SceneKit, or a UIBezierPath from Core Graphics can define the geometry of a SceneKit object. SceneKit also supports import and export of 3D scenes using the COLLADA format. SceneKit was first released for macOS in 2012, and for iOS in 2014. Basic concepts SceneKit maintains a scene graph based on a root object, an instance of the class SCNScene. The SCNScene object is roughly equivalent to the view objects found in most 2D libraries, and is intended to be embedded in a display container like a window or another view object. The only major content of the SCNScene is a link to the rootNode, which points to an SCNNode object. SCNNodes are the primary contents of the SceneKit hierarchy. Each Node has a Name, and pointers to optional Camera, Light and Geometry objects, as well as an array of childNodes and a pointer to its own parent. A typical scene will contain a single Scene object pointed to a conveniently named Node (often "root") whose primary purpose is to hold a collection of children Nodes. The children nodes can be used to represent cameras, lights, or the various geometry objects in the Scene. A simple Scene can be created by making a single SCNGeometry object, typically with one of the constructor classes like SCNBox, a single SCNCamera, one or more SCNLights, and then assigning all of these objects to separate Nodes. A single additional generic Node is then created and assigned to the SCNScene object's rootNode, and then all of the objects are added as children of that rootNode. However, the number of lights is limited to eight. SCNScenes also contain a number of built-in user interface controls and input/output libraries to greatly ease implementing simple viewers and similar tasks. For instance, setting the Scene's autoenablesDefaultLighting and allowsCameraControl to true, and then adding an object tree read from a COLLADA file, will produce viewable content of arbitrary complexity with a few lines of code. The integration with Xcode allows the Scene itself to be placed in a window in Interface Builder, without any code at all. There is a SceneKit archive file format, using the filename extension .scn. References 3D graphics APIs
SceneKit
[ "Technology" ]
602
[ "Computing stubs" ]
49,090,922
https://en.wikipedia.org/wiki/Particulate%20organic%20matter
Particulate organic matter (POM) is a fraction of total organic matter operationally defined as that which does not pass through a filter whose pore size typically ranges from 0.053 millimeters (53 μm) to 2 millimeters. Particulate organic carbon (POC) is a closely related term often used interchangeably with POM. POC refers specifically to the mass of carbon in the particulate organic material, while POM refers to the total mass of the particulate organic matter. In addition to carbon, POM includes the mass of the other elements in the organic matter, such as nitrogen, oxygen and hydrogen. In this sense POC is a component of POM, and there is typically about twice as much POM as POC. Many statements that can be made about POM apply equally to POC, and much of what is said in this article about POM could equally have been said of POC. Particulate organic matter is sometimes called suspended organic matter, macroorganic matter, or coarse fraction organic matter. When land samples are isolated by sieving or filtration, this fraction includes partially decomposed detritus and plant material, pollen, and other materials. When sieving to determine POM content, consistency is crucial because the isolated size fractions will depend on the force of agitation. POM is readily decomposable, serving many soil functions and providing terrestrial material to water bodies. It is a source of food for both soil organisms and aquatic organisms and provides nutrients for plants. In water bodies, POM can contribute substantially to turbidity, limiting photic depth, which can suppress primary productivity. POM also enhances soil structure, leading to increased water infiltration, aeration and resistance to erosion. Soil management practices, such as tillage and compost/manure application, alter the POM content of soil and water. Overview Particulate organic carbon (POC) is operationally defined as all combustible, non-carbonate carbon that can be collected on a filter. The oceanographic community has historically used a variety of filters and pore sizes, most commonly 0.7, 0.8, or 1.0 μm glass or quartz fiber filters. The biomass of living zooplankton is intentionally excluded from POC through the use of a pre-filter or specially designed sampling intakes that repel swimming organisms. Sub-micron particles, including most marine prokaryotes, which are 0.2–0.8 μm in diameter, are often not captured but should be considered part of POC rather than dissolved organic carbon (DOC), which is usually operationally defined as < 0.2 μm. Typically POC is considered to contain suspended and sinking particles ≥ 0.2 μm in size, which therefore includes biomass from living microbial cells, detrital material including dead cells, fecal pellets, other aggregated material, and terrestrially-derived organic matter. Some studies further divide POC operationally based on its sinking rate or size, with ≥ 51 μm particles sometimes equated to the sinking fraction. Both DOC and POC play major roles in the carbon cycle, but POC is the major pathway by which organic carbon produced by phytoplankton is exported – mainly by gravitational settling – from the surface to the deep ocean and eventually to sediments, and is thus a key component of the biological pump. Terrestrial ecosystems Soil organic matter Soil organic matter is anything in the soil of biological origin. Carbon is its key component, comprising about 58% by weight. A simple assessment of total organic matter is obtained by measuring organic carbon in soil.
Living organisms (including roots) contribute about 15% of the total organic matter in soil. These are critical to the operation of the soil carbon cycle. What follows refers to the remaining 85% of the soil organic matter: the non-living component. As shown below, non-living organic matter in soils can be grouped into four distinct categories on the basis of size, behaviour and persistence. These categories are arranged in order of decreasing ability to decompose. Each of them contributes to soil health in different ways. Dissolved organic matter (DOM): the organic matter which dissolves in soil water. It comprises the relatively simple organic compounds (e.g. organic acids, sugars and amino acids) which easily decompose, and it has a turnover time of less than 12 months. Exudates from plant roots (mucilages and gums) are included here. Particulate organic matter (POM): the organic matter that retains evidence of its original cellular structure; it is discussed further in the next section. Humus: usually the largest proportion of organic matter in soil, contributing 45 to 75%. Typically it adheres to soil minerals and plays an important role in structuring soil. Humus is the end product of soil organism activity, is chemically complex, and does not have recognisable characteristics of its origin. Humus is of very small unit size and has a large surface area in relation to its weight. It holds nutrients, has high water holding capacity and significant cation exchange capacity, and buffers pH change. Humus is quite slow to decompose and exists in soil for decades. Resistant organic matter: material with a high carbon content, including charcoal, charred plant materials, graphite and coal. Turnover times are long, estimated in hundreds of years. It is not biologically active but contributes positively to soil structural properties, including water holding capacity, cation exchange capacity and thermal properties. Role of POM in soils Particulate organic matter (POM) includes steadily decomposing plant litter and animal faeces, and the detritus from the activity of microorganisms. Most of it continually undergoes decomposition by microorganisms (when conditions are sufficiently moist) and usually has a turnover time of less than 10 years. Less active parts may take 15 to 100 years to turn over. Where it is still at the soil surface and relatively fresh, particulate organic matter intercepts the energy of raindrops and protects physical soil surfaces from damage. As it decomposes, particulate organic matter provides much of the energy required by soil organisms as well as a steady release of nutrients into the soil environment. Nutrients not taken up by soil organisms may be available for plant uptake. The amount of nutrients released (mineralized) during decomposition depends on the biological and chemical characteristics of the POM, such as the C:N ratio. In addition to nutrient release, decomposers colonizing POM play a role in improving soil structure. Fungal mycelia entangle soil particles and release sticky, cement-like polysaccharides into the soil, ultimately forming soil aggregates. Soil POM content is affected by organic inputs and the activity of soil decomposers. The addition of organic materials, such as manure or crop residues, typically results in an increase in POM.
Conversely, repeated tillage or soil disturbance increases the rate of decomposition by exposing soil organisms to oxygen and organic substrates, ultimately depleting POM. A reduction in POM content is observed when native grasslands are converted to agricultural land. Soil temperature and moisture also affect the rate of POM decomposition. Because POM is a readily available (labile) source of soil nutrients, is a contributor to soil structure, and is highly sensitive to soil management, it is frequently used as an indicator to measure soil quality. Freshwater ecosystems In poorly managed soils, particularly on sloped ground, erosion and transport of soil sediment rich in POM can contaminate water bodies. Because POM provides a source of energy and nutrients, rapid build-up of organic matter in water can result in eutrophication. Suspended organic materials can also serve as a potential vector for the pollution of water with fecal bacteria, toxic metals or organic compounds. Marine ecosystems Life and particulate organic matter in the ocean have fundamentally shaped the planet. On the most basic level, particulate organic matter can be defined as both living and non-living matter of biological origin with a size of ≥0.2 μm in diameter, including anything from a small bacterium (0.2 μm in size) to blue whales (20 m in size). Organic matter plays a crucial role in regulating global marine biogeochemical cycles and events, from the Great Oxidation Event in Earth's early history to the sequestration of atmospheric carbon dioxide in the deep ocean. Understanding the distribution, characteristics, dynamics, and changes over time of particulate matter in the ocean is hence fundamental to understanding and predicting the marine ecosystem, from food web dynamics to global biogeochemical cycles. Measuring POM Optical particle measurements are emerging as an important technique for understanding the ocean carbon cycle, including contributions to estimates of the downward particle flux, which sequesters carbon dioxide in the deep sea. Optical instruments can be used from ships or installed on autonomous platforms, delivering much greater spatial and temporal coverage of particles in the mesopelagic zone of the ocean than traditional techniques, such as sediment traps. Technologies to image particles have advanced greatly over the last two decades, but the quantitative translation of these immense datasets into biogeochemical properties remains a challenge. In particular, advances are needed to enable the optimal translation of imaged objects into carbon content and sinking velocities. In addition, different devices often measure different optical properties, leading to difficulties in comparing results. Ocean primary production Marine primary production can be divided into new production from allochthonous nutrient inputs to the euphotic zone, and regenerated production from nutrient recycling in the surface waters. The total new production in the ocean roughly equates to the sinking flux of particulate organic matter to the deep ocean, about 4 × 10⁹ tons of carbon annually. Model of sinking oceanic particles Sinking oceanic particles encompass a wide range of shape, porosity, ballast and other characteristics. The model shown in the diagram at the right attempts to capture some of the predominant features that influence the shape of the sinking flux profile (red line).
The sinking of organic particles produced in the upper sunlit layers of the ocean forms an important limb of the oceanic biological pump, which impacts the sequestration of carbon and the resupply of nutrients in the mesopelagic ocean. Particles raining out from the upper ocean undergo remineralization by bacteria colonizing their surface and interior, leading to an attenuation of the sinking flux of organic matter with depth. The diagram illustrates a mechanistic model for the depth-dependent, sinking, particulate mass flux constituted by a range of sinking, remineralizing particles. Marine snow varies in shape, size and character, ranging from individual cells to pellets and aggregates, most of which is rapidly colonized and consumed by heterotrophic bacteria, contributing to the attenuation of the sinking flux with depth. Sinking velocity The range of recorded sinking velocities of particles in the oceans spans from negative (particles float toward the surface) to several km per day (as with salp fecal pellets). When considering the sinking velocity of an individual particle, a first approximation can be obtained from Stokes' law (originally derived for spherical, non-porous particles and laminar flow) combined with White's approximation, which suggests that sinking velocity increases linearly with excess density (the difference from the water density) and the square of particle diameter (i.e., linearly with the particle area). Building on these expectations, many studies have tried to relate sinking velocity primarily to size, which has been shown to be a useful predictor for particles generated in controlled environments (e.g., roller tanks). However, strong relationships were only observed when all particles were generated using the same water/plankton community. When particles were made by different plankton communities, size alone was a bad predictor (e.g., Diercks and Asper, 1997), strongly supporting the notion that particle densities and shapes vary widely depending on the source material. Packaging and porosity contribute appreciably to determining sinking velocities. On the one hand, adding ballasting materials, such as diatom frustules, to aggregates may lead to an increase in sinking velocities owing to the increase in excess density. On the other hand, the addition of ballasting mineral particles to marine particle populations frequently leads to smaller, more densely packed aggregates that sink more slowly because of their smaller size. Mucous-rich particles have been shown to float despite relatively large sizes, whereas oil- or plastic-containing aggregates have been shown to sink rapidly despite containing substances that are less dense than seawater. In natural environments, particles are formed through different mechanisms, by different organisms, and under varying environmental conditions that affect aggregation (e.g., salinity, pH, minerals), ballasting (e.g., dust deposition, sediment load; van der Jagt et al., 2018) and sinking behaviour (e.g., viscosity). A universal conversion of size to sinking velocity is hence impracticable. Role in the lower aquatic food web Along with dissolved organic matter, POM drives the lower aquatic food web by providing energy in the form of carbohydrates, sugars, and other polymers that can be degraded. POM in water bodies is derived from terrestrial inputs (e.g. soil organic matter, leaf litterfall), submerged or floating aquatic vegetation, or autochthonous production of algae (living or detrital).
Each source of POM has its own chemical composition that affects its lability, or accessibility to the food web. Algal-derived POM is thought to be most labile, but there is growing evidence that terrestrially-derived POM can supplement the diets of micro-organisms such as zooplankton when primary productivity is limited. The biological carbon pump The dynamics of the particulate organic carbon (POC) pool in the ocean are central to the marine carbon cycle. POC is the link between surface primary production, the deep ocean, and sediments. The rate at which POC is degraded in the dark ocean can impact atmospheric CO₂ concentration. Therefore, a central focus of marine organic geochemistry studies is to improve the understanding of POC distribution, composition, and cycling. The last few decades have seen improvements in analytical techniques that have greatly expanded what can be measured, both in terms of organic compound structural diversity and isotopic composition, together with complementary molecular omics studies. As illustrated in the diagram, phytoplankton fix carbon dioxide in the euphotic zone using solar energy and produce POC. POC formed in the euphotic zone is processed by marine microorganisms (microbes), zooplankton and their consumers into organic aggregates (marine snow), which are then exported to the mesopelagic (200–1000 m depth) and bathypelagic zones by sinking and by the vertical migration of zooplankton and fish. The biological carbon pump describes the collection of biogeochemical processes associated with the production, sinking, and remineralization of organic carbon in the ocean. In brief, photosynthesis by microorganisms in the upper tens of meters of the water column fixes inorganic carbon (any of the chemical species of dissolved carbon dioxide) into biomass. When this biomass sinks to the deep ocean, a portion of it fuels the metabolism of the organisms living there, including deep-sea fish and benthic organisms. Zooplankton play a critical role in shaping particle flux through ingestion and fragmentation of particles, production of fast-sinking fecal material and active vertical migration. Besides the importance of "exported" organic carbon as a food source for deep ocean organisms, the biological carbon pump provides a valuable ecosystem function: exported organic carbon transports an estimated 5–20 Gt C each year to the deep ocean, where some of it (~0.2–0.5 Gt C) is sequestered for several millennia. The biological carbon pump is hence of similar magnitude to current carbon emissions from fossil fuels (~10 Gt C year⁻¹). Any changes in its magnitude caused by a warming world may have direct implications for both deep-sea organisms and atmospheric carbon dioxide concentrations. The magnitude and efficiency (amount of carbon sequestered relative to primary production) of the biological carbon pump, and hence ocean carbon storage, is partly determined by the amount of organic matter exported and the rate at which it is remineralized (i.e., the rate at which sinking organic matter is reworked and respired in the mesopelagic zone). Particle size and composition are especially important parameters determining how fast a particle sinks, how much material it contains, and which organisms can find and utilize it. Sinking particles can be phytoplankton, zooplankton, detritus, fecal pellets, or a mix of these. They range in size from a few micrometers to several centimeters, with particles of a diameter of >0.5 mm being referred to as marine snow.
In general, particles in a fluid are thought to sink once their densities are higher than that of the ambient fluid, i.e., when excess densities are larger than zero. Larger individual phytoplankton cells can thus contribute to sedimentary fluxes. For example, large diatom cells and diatom chains with a diameter of >5 μm have been shown to sink at rates of up to several tens of meters per day, though this is only possible owing to the heavy ballast of a silica frustule. Both size and density affect particle sinking velocity; for example, for sinking velocities that follow Stokes' law, doubling the size of the particle increases the sinking speed by a factor of 4. However, the highly porous nature of many marine particles means that they do not obey Stokes' law, because small changes in particle density (i.e., compactness) can have a large impact on their sinking velocities. Large sinking particles are typically of two types: (1) aggregates formed from a number of primary particles, including phytoplankton, bacteria, fecal pellets, live protozoa, zooplankton and debris, and (2) zooplankton fecal pellets, which can dominate particle flux events and sink at velocities exceeding 1,000 m d⁻¹. Knowing the size, abundance, structure and composition (e.g. carbon content) of settling particles is important, as these characteristics impose fundamental constraints on the biogeochemical cycling of carbon. For example, changes in climate are expected to facilitate a shift in species composition in a manner that alters the elemental composition of particulate matter, cell size and the trajectory of carbon through the food web, influencing the proportion of biomass exported to depth. As such, any climate-induced change in the structure or function of phytoplankton communities is likely to alter the efficiency of the biological carbon pump, with feedbacks on the rate of climate change. Bioluminescent shunt hypothesis The consumption of bioluminescent POC by fish can lead to the emission of bioluminescent fecal pellets (repackaging), which can also be produced from non-bioluminescent POC if the fish gut already contains bioluminescent bacteria. In the diagram on the right, the sinking POC is moving downward, followed by a chemical plume. The plain white arrows represent the carbon flow. Panel (a) represents the classical view of a non-bioluminescent particle; the length of the plume is identified by the scale on the side. Panel (b) represents the case of a glowing particle in the bioluminescence shunt hypothesis. Bioluminescent bacteria are represented aggregated onto the particle, and their light emission is shown as a bluish cloud around it. Blue dotted arrows represent the visual detection of, and movement toward, the particle by consumer organisms. Increased visual detectability allows better detection by upper trophic levels, potentially leading to the fragmentation of sinking POC into suspended POC due to sloppy feeding. See also Microbial loop Particulate matter Total organic carbon References Chemical oceanography Environmental chemistry Soil
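As a worked illustration of the Stokes scaling discussed in the sinking-velocity passages above (a minimal sketch; the particle diameter d = 100 μm, excess density of 100 kg m⁻³ and seawater viscosity of about 10⁻³ Pa s are assumed for illustration and are not taken from the article):

\[ w_s = \frac{(\rho_p - \rho_f)\, g\, d^2}{18\,\mu} = \frac{100 \times 9.81 \times (10^{-4})^2}{18 \times 10^{-3}} \approx 5.5 \times 10^{-4}\ \mathrm{m\ s^{-1}} \approx 47\ \mathrm{m\ d^{-1}} \]

Doubling d quadruples w_s, matching the factor-of-4 statement above; since real marine aggregates are porous and non-spherical, this should be read as a first approximation only.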
Particulate organic matter
[ "Chemistry", "Environmental_science" ]
4,202
[ "Chemical oceanography", "Environmental chemistry", "nan" ]
49,091,629
https://en.wikipedia.org/wiki/Fructoselysine
Fructoselysine is an Amadori adduct of glucose to lysine. It breaks down into furosine on acid-catalysed hydrolysis. E. coli breaks it down into glucose 6-phosphate and lysine using the enzymes fructoselysine-6-kinase and fructoselysine 6-phosphate deglycase, a set of enzymes located on the frl (fructoselysine) operon. References Monosaccharide derivatives Alpha-Amino acids Amino acid derivatives
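The two-step E. coli degradation route described above can be summarized as a scheme (the ATP dependence of the kinase step is a standard assumption for a kinase rather than something stated in the text):

\[ \text{fructoselysine} \xrightarrow[\text{(ATP)}]{\text{fructoselysine-6-kinase}} \text{fructoselysine 6-phosphate} \xrightarrow{\text{deglycase}} \text{glucose 6-phosphate} + \text{lysine} \]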
Fructoselysine
[ "Chemistry" ]
115
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
49,092,909
https://en.wikipedia.org/wiki/Ghana%20Code%20Club
The Ghana Code Club is an after-school programme in Ghana that teaches children computer programming skills. The programme was founded by Ernestina Edem Appiah and is organized in various schools by Healthy Career Initiative, a nonprofit organisation in Ghana that Appiah also founded, in 2007. Ghana Code Club is a digital fun club organized in schools in Ghana for children between the ages of 8 and 17. It is an after-school extracurricular programme. As of January 2016, the information and communications technology curriculum in Ghana did not include learning activities for technology. The club encourages children to attain modern skills in computer technology to help them with future careers. Some of the activities include learning how to create websites, animations and video games. In January 2016, the programme began operations in five schools, and the organisation planned to expand into most schools in Ghana in 2016, with a goal of educating at least 20,000 children. Challenges faced by the Ghana Code Club include problems with internet connectivity in Ghana and limited access to capital and computer equipment. As a result, a significant portion of the classes are presently taught on paper with printouts. History The founder and CEO of Ghana Code Club is Ernestina Edem Appiah, whose interest was sparked by an article she read about children in the United Kingdom learning coding (computer programming). Appiah was one of the BBC's 100 Women. See also CoderDojo References External links Education in Ghana Computer programming
Ghana Code Club
[ "Technology", "Engineering" ]
305
[ "Software engineering", "Computer programming", "Computers" ]
49,093,658
https://en.wikipedia.org/wiki/Kepler-89e
Kepler-89e, also known as KOI-94e, is an exoplanet in the constellation of Cygnus. It orbits Kepler-89. Physical properties It is classed as a type III planet, making it cloudless and blue, and giving it the appearance of a larger version of Uranus and Neptune. It has a mass around 35 times that of Earth. It has a density similar to that of Saturn, 0.60 g/cm³, giving it a radius 6.56 times that of the Earth. It orbits an F-type main-sequence star at a distance of 0.305 astronomical units (au), with a period of 54.32031 days, making its orbit smaller than that of Mercury. It has a very low eccentricity of 0.019. It has a temperature of 584 K. Host star Kepler-89e orbits the star Kepler-89. Kepler-89 has a mass of 1.18 solar masses and a radius of 1.32 solar radii. It is 3.3 billion years old, younger than the Sun, making its planets roughly 3 billion years (3 Gyr) old. It has a temperature of 6,210 K, making it appear bright yellowish-white. References Transiting exoplanets Exoplanets discovered in 2013 Cygnus (constellation)
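A back-of-the-envelope consistency check on the quoted mass, radius and density (a sketch assuming Earth's mean density of about 5.51 g/cm³; the small difference from the quoted 0.60 g/cm³ is consistent with rounding of the mass and radius):

\[ \rho \approx \frac{M/M_\oplus}{(R/R_\oplus)^3}\,\rho_\oplus = \frac{35}{6.56^3} \times 5.51\ \mathrm{g\ cm^{-3}} \approx 0.68\ \mathrm{g\ cm^{-3}} \]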
Kepler-89e
[ "Astronomy" ]
284
[ "Cygnus (constellation)", "Constellations" ]
49,093,735
https://en.wikipedia.org/wiki/Wyman-Gordon%2050%2C000%20ton%20forging%20press
The Wyman-Gordon 50,000-ton forging press is a forging press located at the Wyman-Gordon Grafton Plant that was built as part of the Heavy Press Program by the United States Air Force. It was manufactured by Loewy Hydropress of Pittsburgh, Pennsylvania, and began operation in October 1955. References External links Photographs of the press Wyman-Gordon Historic American Engineering Record in Massachusetts Industrial machinery
Wyman-Gordon 50,000 ton forging press
[ "Engineering" ]
86
[ "Industrial machinery" ]
49,093,864
https://en.wikipedia.org/wiki/Alcoa%2050%2C000%20ton%20forging%20press
The Alcoa 50,000 ton forging press is a heavy press operated at Howmet Aerospace's Cleveland Operations. It was built as part of the Heavy Press Program by the United States Air Force. It was manufactured by Mesta Machinery of West Homestead, Pennsylvania, and began operation on May 5, 1955. Alcoa ran the plant from the time of its construction, and purchased it outright in 1982. In 2008, cracks were discovered in the press, which had to be shut down for safety reasons. Repairs, originally estimated at a cost of $68 million, ultimately cost a total of $100 million and were completed in early 2012. This press was designated a National Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers in 1981. Specifications Source: Type: Push down Height: 87', 36' below ground level and 51' above. Weight: approximately 8000 tons. Stroke: 6 feet Daylight: 15 feet Operating hydraulic pressure: 4500 psi Number of rams: 8 References External links Alcoa Historic American Engineering Record in Ohio Industrial machinery
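A rough consistency check on these specifications (a back-of-the-envelope estimate assuming short tons and that the rated force is shared equally by the eight rams; neither assumption is stated in the source):

\[ A_{\text{total}} = \frac{F}{p} = \frac{50{,}000 \times 2{,}000\ \mathrm{lbf}}{4{,}500\ \mathrm{psi}} \approx 22{,}000\ \mathrm{in^2}, \qquad d_{\text{ram}} \approx \sqrt{\frac{4}{\pi}\cdot\frac{A_{\text{total}}}{8}} \approx 60\ \mathrm{in} \]

That is, the hydraulics imply a total working piston area of roughly 22,000 square inches, or rams on the order of five feet in diameter.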
Alcoa 50,000 ton forging press
[ "Engineering" ]
223
[ "Industrial machinery" ]
49,095,574
https://en.wikipedia.org/wiki/Code%3A%20Debugging%20the%20Gender%20Gap
CODE: Debugging the Gender Gap is a 2015 documentary by Robin Hauser Reynolds. It focuses on the lack of women and minorities in the field of software engineering. It premiered on April 19, 2015 at the Tribeca Film Festival in New York. The film focuses on inspiring young girls to pursue careers in computer science by profiling successful women in computer programming, such as Danielle Feinberg of Pixar, Aliya Rahman of Code for Progress, and Julie Ann Horvath. By profiling and displaying the careers of these women, the filmmakers hope to show that computer science can be creative, lucrative, and rewarding. The film traces the history of women in the U.S. technology industries, from the work of Ada Lovelace, Grace Hopper, and the women of ENIAC. It then follows the decline of women graduates in mathematics and computer science during the 1980s, linking the phenomenon to the release of the 1983 film WarGames, a cultural shift that depicted men and boys as technology workers, and increasing hostility toward women and girls in the tech industries. Additionally, the film highlights the work of women in the field by featuring interviews with women in the tech industry, such as Kimberly Bryant (founder of Black Girls Code), Debbie Sterling (founder of GoldieBlox), Maria Klawe (president of Harvey Mudd College), and Danielle Feinberg (director of photography at Pixar). Fundraising Funding for the film was partially raised via Indiegogo, and Reynolds was able to secure additional funding from corporations like CapitalOne, MasterCard, Ericsson, NetApp, Qualcomm, and Silicon Valley Bank. Reception The general reception of the film by the popular press has been positive. Stephen Cass of IEEE Spectrum stated of the film: "Code doesn't have all the answers, of course. But ultimately, it does make a good case that everyone should think deliberately about diversity in their hiring." Some criticism has focused on the apparent lack of attention paid to the Gamergate controversy and the work and experience of women in the gaming industries. Graham Winfrey of Inc. magazine wrote, "CODE makes a compelling case that the lack of women in tech poses a significant threat to America's future." Awards Gold Audience Award for Active Cinema at the Mill Valley Film Festival (2015, won) References External links 2015 documentary films 2015 films Documentary films about computing American documentary films Diversity in computing Gender and employment History of women in the United States 2010s English-language films 2010s American films English-language documentary films
Code: Debugging the Gender Gap
[ "Technology" ]
519
[ "Computing and society", "Diversity in computing" ]
49,096,673
https://en.wikipedia.org/wiki/Jeanne%20Harris
Jeanne Harris is an American author, academic, and business executive. Harris is a faculty member of Columbia University, where she teaches a graduate-level course on Business Analytics Management. She retired as the managing director of Information Technology Research for the Accenture Institute for High Performance. She is the co-author, with Thomas H. Davenport, of Competing on Analytics: The New Science of Winning, revised edition (Harvard Business Review Press, 2017) and Analytics at Work: Smarter Decisions, Better Results (Harvard Business School Press, 2010). Harris also serves on the INFORMS Analytics Certification Board. Career Harris is the former Global Managing Director of Information Technology Research at the Accenture Institute for High Performance in Chicago. Before retiring, she led the Institute's global research agenda in the areas of information, technology, and analytics. She has been interviewed by ComputerWeekly and others. Awards In 2009, Harris received Consulting Magazine's Women Leaders in Consulting award for Lifetime Achievement. References External links Jeanne Harris Jeanne Harris American women business executives Information systems researchers Living people Year of birth missing (living people) Place of birth missing (living people) Columbia University faculty
Jeanne Harris
[ "Technology" ]
227
[ "Information systems", "Information systems researchers" ]
49,096,803
https://en.wikipedia.org/wiki/Pink%20rot
Pink rot is a fungal disease of various plants, caused by various organisms: Phytophthora erythroseptica – pink rot of potatoes, carrots (tubers) Trichothecium roseum – pink rot of apples, grapes, avocados, peaches, nectarines (fruit) Nalanthamala vermoeseni or Gliocladium vermoeseni – pink rot of date palm (inflorescence)
Pink rot
[ "Biology" ]
97
[ "Set index articles on fungus common names", "Set index articles on organisms" ]
49,097,223
https://en.wikipedia.org/wiki/Laboratory%20Syrian%20hamster
Syrian hamsters (Mesocricetus auratus) are one of several rodents used in animal testing. Syrian hamsters are used to model human medical conditions including various cancers, metabolic diseases, non-cancer respiratory diseases, cardiovascular diseases, infectious diseases, and general health concerns. In 2014, Syrian hamsters accounted for 14.6% of the total animal research participants in the United States covered by the Animal Welfare Act. Use in research Since 1972 the use of hamsters in animal testing research has declined. In 2014 in the United States, animal research used about 120,000 hamsters, which was 14.6% of the total research animal use (under the Animal Welfare Act, which excludes mice, rats, and fish) for that year in that country. According to the Canadian Council for Animal Care, a total of 1,931 hamsters were used for research in 2013 in Canada, making them the sixth-most popular rodent after mice (1,233,196), rats (228,143), guinea pigs (20,687), squirrels (4,446) and voles (2,457). Human medical research Cancer research Humans get lung cancer from tobacco smoking. Syrian hamsters are a model for researching non-small-cell lung carcinoma, which is one of the types of human lung cancer. In research, when hamsters are injected with the carcinogen NNK several times over six months, they will develop that sort of cancer. In both Syrian hamsters and humans, this cancer is associated with mutations to the KRAS gene. For various reasons, collecting data on the way that Syrian hamsters develop this lung cancer provides insight on how humans develop it. Oral squamous-cell carcinoma is a common cancer in humans and difficult to treat. Scientists studying this disease broadly accept Syrian hamsters as animal models for researching it. In this research, the hamster is given anesthesia, has its mouth opened to expose the inside of its cheeks, and the researcher brushes the carcinogen DMBA on its cheeks. The scientist can take cell samples from the mouth of the hamster to measure the development of the cancer. This process has good reproducibility. The cancer itself develops tumors in a predictable way, starting with hyperkeratosis, then hyperplasia, then dysplasia, then carcinoma. In humans with this cancer there is increased ErbB2 production of receptor tyrosine kinase, and Syrian hamsters with this cancer also have increased levels of that kinase. As the tumor develops in the hamster, they also have increased gene expression in p53 and c-myc, which is similar to human cancer development. Because hamsters develop this cancer so predictably, researchers are comfortable using hamsters in research on prevention and treatment. There is scientific and social controversy about the virus SV40 causing cancers in humans. Leaving that controversy aside, Syrian hamsters injected with SV40 certainly will develop various cancers in predictable ways depending on how they are exposed to the virus. The hamster has been used as a research model to clarify what SV40 does in humans. The golden hamster can contract contagious reticulum cell sarcoma, which can be transmitted from one golden hamster to another by means of the bite of the mosquito Aedes aegypti. Metabolic disorders Syrian hamsters are susceptible to many metabolic disorders which affect humans. Because of this, hamsters are an excellent animal model for studying human metabolic disorders. Gallstones may be induced in Syrian hamsters by giving the hamster excess dietary cholesterol or sucrose.
Hamsters metabolize cholesterol in a way that is similar to humans. Different sorts of fats are more or less likely to produce gallstones in hamsters. The gender differences in gallstone formation in hamsters are significant. Hamsters of different genetic strains have significant differences in susceptibility to forming gallstones. Diabetes mellitus is studied in various ways using Syrian hamsters. Hamsters fed fructose for 7 days develop hyperinsulinemia and hyperlipidemia. Such hamsters then have an increase in hepatic lipase and other measurable responses which are useful for understanding diabetes in humans. Streptozotocin or alloxan may be administered to induce chronic diabetes in hamsters. Atherosclerosis may be studied with Syrian hamsters because hamsters and humans have similar lipid metabolism. Hamsters develop atherosclerosis as a result of dietary manipulation. Hamsters develop atherosclerotic plaques as humans do. Non-cancer respiratory disease Smoke inhalation can be studied in Syrian hamsters by putting the hamster in a laboratory smoking machine. Pregnant hamsters have been used to model the effects of smoking on pregnant humans. The emphysema component of COPD may be induced in hamsters by injecting pancreatic elastase into their tracheas. Pulmonary fibrosis may be induced in hamsters by injecting bleomycin into their tracheas. Cardiovascular Cardiomyopathy in hamsters is an inherited condition, and there are genetic lines of hamsters which are bred to retain this gene so that they may be used to study the disease. Microcirculation may be studied in hamster cheek pouches. The pouches of hamsters are thin, easy to examine without stopping bloodflow, and highly vascular. When examined, the cheek pouch is pulled through the mouth while being grasped with forceps. At this point the cheek is everted and can be pinned onto a mount for examination. Reperfusion injury may also be studied with everted hamster pouches. To simulate reperfusion, one method is to tie a cuff around the pouch to restrict blood flow and cause ischemia. Another method is to compress the veins and arteries with microvascular clips which do not cause trauma. In either case, after about an hour of restricting the blood, the pressure is removed to study how the pouch recovers. Several inbred strains of hamsters have been developed as animal models for human forms of dilated cardiomyopathy. The gene responsible for hamster cardiomyopathy in a widely studied inbred hamster strain, BIO14.6, has been identified as being delta-sarcoglycan. Pet hamsters are also potentially prone to cardiomyopathy, which is a not infrequent cause of unexpected sudden death in adolescent or young adult hamsters. Infection research Syrian hamsters have been infected with a range of disease-causing agents to study both the disease and the cause of the disease. Hantavirus pulmonary syndrome is a medical condition in humans caused by any of the Hantavirus species. Syrian hamsters easily contract Hantavirus species, but they do not get the same symptoms as humans, and the same infection that is deadly in humans has effects ranging from nothing to flu-like illness to death in Syrian hamsters. Because hamsters become easily infected, they are used to study the pathogenesis of Hantavirus. Andes virus and Maporal virus infect hamsters and cause pneumonia and edema. The Sin Nombre virus and Choclo virus will infect hamsters but not cause any disease. SARS coronavirus causes severe acute respiratory syndrome in humans.
Syrian hamsters may be infected with the virus, and like humans will have viral replication and lesions in the respiratory tract which can be examined with histopathological tests. However, hamsters do not develop clinical symptoms of the disease. Hamsters might be used to study the infection process. Leptospira bacteria cause leptospirosis in humans and similar symptoms in Syrian hamsters. Syrian hamsters are used to test drugs to treat the disease. Bacteria that have been studied by infecting Syrian hamsters with them include Leptospira, Clostridioides difficile, Mycoplasma pneumoniae, and Treponema pallidum. Parasites which have been studied by infecting Syrian hamsters with them include Toxoplasma gondii, Babesia microti, Leishmania donovani, Trypanosoma cruzi, Opisthorchis viverrini, Taenia, Ancylostoma ceylanicum, and Schistosoma. Syrian hamsters are infected with scrapie so that they get transmissible spongiform encephalopathy. In March 2020, researchers from the University of Hong Kong showed that Syrian hamsters could be a model organism for COVID-19 research. Other medical conditions Scientists use male hamsters to study the effects of steroids on male behavior. The behavior of castrated hamsters is compared to typical male hamsters. Castrated hamsters are then given steroids and their behavior noted. Some steroid treatments will cause castrated hamsters to perform behaviors that typical male hamsters do. Poor nutrition may cause female infertility in mammals. When hamsters do not have enough of the right food, they have fewer estrous cycles. Studies in hamsters identify the nutritional needs for maintaining fertility. Syrian hamsters are used to study how NSAIDs can cause reactive gastropathy. One way to study this is to inject hamsters with indometacin, which causes an ulcer within 1–5 hours depending on the dose. If given repeated doses, hamsters develop severe lesions and die within 5 days from peptic ulcers in their pyloric antrum. A model for creating a chronically ill hamster which will not die from the ulcers is to give naproxen by gavage. When the hamster is chronically ill, it can be used to test anti-ulcer drugs. Syrian hamsters are also widely used in research into alcoholism, by virtue of their large livers and their ability to metabolise high doses. Research on Syrian hamsters themselves In captivity, golden hamsters follow well-defined daily routines of running in their hamster wheel, which has made them popular subjects in circadian rhythms research. For example, Martin Ralph, Michael Menaker, and colleagues used this behavior to provide definitive evidence that the suprachiasmatic nucleus in the brain is the source of mammalian circadian rhythms. Hamsters have a number of fixed action patterns that are readily observed, including scent-marking and body grooming, which is of interest in the study of animal behavior. Scientific studies of animal welfare concerning captive golden hamsters have shown they prefer to use running wheels of large diameters (35 cm diameter was preferred over 23 cm, and 23 cm over 17.5 cm), and that they prefer bedding material which allows them to build nests, if nesting material is not already available. They prefer lived-in bedding (up to two weeks old – longer durations were not tested) over new bedding, suggesting they may prefer bedding changes at two-week intervals rather than weekly or daily. They also prefer opaque tubes closed at one end, 7.6 cm in diameter, to use as shelter in which to nest and sleep.
Notes References Laboratory rodents Golden hamster
Laboratory Syrian hamster
[ "Biology" ]
2,278
[ "Molecular genetics", "Laboratory rodents" ]
49,097,972
https://en.wikipedia.org/wiki/Cannabinodiol
Cannabinodiol (CBND), also known as cannabidinodiol, is a cannabinoid present in the plant Cannabis sativa at low concentrations. It is the fully aromatized derivative of cannabidiol (CBD) and can occur as a product of the photochemical conversion of cannabinol (CBN). See also Cannabidiol Cannabinol Cannabichromene Cannabimovone Delta-6-CBD Dimethylheptylpyran HU-210 Nabilone Parahexyl Tetrahydrocannabivarin References Cannabinoids Natural phenols Biphenyls Resorcinols Alkyl-substituted benzenes
Cannabinodiol
[ "Chemistry" ]
155
[ "Biomolecules by chemical classification", "Natural phenols" ]
49,098,434
https://en.wikipedia.org/wiki/Hoberman%20mechanism
A Hoberman mechanism, or Hoberman linkage, is a deployable mechanism that turns linear motion into radial motion. The Hoberman mechanism is made of two angulated rigid bars connected at a central point by a revolute joint, making it move much like a scissor mechanism. Several of these linkages can be joined together at the ends of the angulated bars by more revolute joints, expanding radially to make circle-shaped mechanisms. The mechanism is a GAE (generalized angulated element) whose coupler curve is a radial straight line. This allows the Hoberman mechanism to act with a single degree of freedom; it is an over-constrained mechanism, having more degrees of freedom than the mobility formula predicts. The kinematic theory behind the Hoberman mechanism has been used to help further the understanding of mobility and foldability of deployable mechanisms. History The Hoberman mechanism originates from the idea of making something bigger become smaller. Chuck Hoberman, a fine arts graduate from Cooper Union, realized that his lack of engineering knowledge was holding him back from creating the things he could picture in his head. He enrolled in Columbia University to get a master's degree in mechanical engineering. After this he started working with origami, studying the way that it folded and changed shape. He soon realized that his interests lay in the expansion and shrinkage of the objects he was making. Hoberman started to experiment with different expanding mechanisms and to create mechanisms of his own. He later patented a system that uses two identical bent rods connected in the middle by a joint, which he called the Hoberman mechanism. The creation of the Hoberman mechanism has since helped with further mechanical discoveries and research concerning foldability and mobility of mechanisms. Mechanics How it works The Hoberman mechanism is made of two identical angulated rods joined together at their bends by one central revolute joint. These mechanisms can be linked by connecting the ends of the pairs together with two more revolute joints. Due to the mechanism's design, however, the revolute joints act as if they are prismatic-revolute joints, because they move along a straight axis as the system changes shape. By pushing or pulling on any of the joints, the entire system moves and changes shape, gaining volume or folding into itself. These systems of linkages can be expanded to a full circle that moves as one system, turning linear motion from a single axis of a joint into radial motion across the entire mechanism. Kinematic theory The Hoberman mechanism is a single degree of freedom structure, meaning that the system can be driven with a single actuator. The mechanism is made of two identical angulated rods joined together by a central revolute pivot and four end pivots constrained to move along a single line. Because the four end pivots are restrained in this way, the mechanism can be treated as a pair of PRRP (prismatic-revolute-revolute-prismatic) mechanisms joined at a central point. The two PRRP linkages trace a pair of identical straight lines from the origin of the mechanism to their coupler points, so they have the same coupler curve. The coupler curve of the PRRP linkages in a Hoberman mechanism follows the coupler point B(x, y) in Fig. 1.
For parameters {r1, r2, α}, the coupler curve follows the equation of a straight line through the origin (y = mx). Because the two angulated rods that make up a Hoberman mechanism are identical, they have the same r1 and r2 values and thus the same coupler curve. A pair of PRRP linkages that share a coupler curve at a common coupler point have a single degree of freedom, which is why the Hoberman mechanism has a single degree of freedom. The motion that the Hoberman mechanism produces is radial motion, even though it looks like linear motion, because the motion follows the coupler curve, which is a radial straight line. The mobility formula, M = 3(n − 1) − 2j, where M is the number of degrees of freedom, n is the number of elements, and j is the number of joints, predicts that a Hoberman mechanism of 12 bars and 18 joints would have −3 degrees of freedom. That makes the Hoberman mechanism an over-constrained mechanism, because all Hoberman mechanisms in fact have a single degree of freedom. Applications The Hoberman mechanism has been used in many different parts of everyday life. Art The Hoberman mechanism is featured in works of art, mostly made by the artist and inventor of the Hoberman mechanism, Chuck Hoberman. Structures designed by Chuck Hoberman that included the Hoberman mechanism were featured in The Elaine Dannheisser Projects Series from MoMA. A Hoberman sphere was also on display at the MoMA in New York as a part of the Century of the Child exhibit. More large Hoberman spheres featuring the Hoberman mechanism are scattered around the world; they can be found anywhere from science centers around the US to wineries in France. Toys The most commonly seen form of the Hoberman mechanism is in the toy made by Chuck Hoberman called the Mega Sphere or Hoberman sphere. The Mega Sphere is a plastic, sphere-shaped toy that expands and retracts as it is pushed and pulled on. The toy is made of six full rings of Hoberman mechanisms that are all connected to each other, so as one piece of it retracts or expands, the entirety of the structure follows. They are multicolored and range in size from a meter to just a few inches. Architecture The Hoberman mechanism has also been used in larger-scale architectural projects. One of these structures is the Hoberman Arch featured at the 2002 Winter Olympics in Utah. The Arch was designed by Chuck Hoberman; it was built to open and close using many interlocked Hoberman mechanisms, acting as a mechanical curtain on the award ceremony stage. Engineering By replacing the pin joints with temperature-reactive shape memory polymers, a Hoberman mechanism can be used to autonomously deploy engineering systems such as photovoltaic cells when the ambient temperature increases above a threshold. References Mechanisms (engineering)
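A worked instance of the mobility formula above, for the 12-bar, 18-joint ring described in the text:

\[ M = 3(n - 1) - 2j = 3(12 - 1) - 2(18) = 33 - 36 = -3 \]

The negative value is the signature of over-constraint: the symmetric geometry of the angulated bars gives the physical ring a single degree of freedom that the generic counting formula cannot see.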
Hoberman mechanism
[ "Engineering" ]
1,280
[ "Mechanical engineering", "Mechanisms (engineering)" ]
49,098,439
https://en.wikipedia.org/wiki/Vivipary
In plants, vivipary occurs when seeds or embryos begin to develop before they detach from the parent. Plants such as some Iridaceae and Agavoideae grow cormlets in the axils of their inflorescences. These fall and, in favourable circumstances, have effectively a whole season's start over fallen seeds. Similarly, some Crassulaceae, such as Bryophyllum, develop and drop plantlets from notches in their leaves, ready to grow. Such production of embryos from somatic tissues is asexual vegetative reproduction that amounts to cloning. Description Most seed-bearing fruits produce a hormone that suppresses germination until after the fruit or parent plant dies, or the seeds pass through an animal's digestive tract. At this stage, the hormone's effect will dissipate and germination will occur once conditions are suitable. Some species lack this suppressant hormone as a central part of their reproductive strategy, for example fruits that develop in climates without large seasonal variations. This phenomenon occurs most frequently on ears of corn, tomatoes, strawberries, peppers, pears, citrus fruits, and plants that grow in mangrove environments. In some species of mangroves, for instance, the seed germinates and grows from its own resources while still attached to its parent. Seedlings of some species are dispersed by currents if they drop into the water, but others develop a heavy, straight taproot that commonly penetrates mud when the seedling drops, thereby effectively planting the seedling. This contrasts with the examples of vegetative reproduction mentioned above, in that the mangrove plantlets are true seedlings produced by sexual reproduction. In some trees, like jackfruit, some citrus, and avocado, the seeds can be found already germinated while the fruit goes overripe; strictly speaking this condition cannot be described as vivipary, but the moist and humid conditions provided by the fruit mimic a wet soil that encourages germination. However, the seeds can also germinate under moist soil. In some species of cacti, such as Escobaria vivipara, seeds germinate while still inside the fruit. When the fruit is broken open, it bears many cactus propagules. This is thought to be an adaptation to rapid changes in photoperiod (day length), since Escobaria vivipara is one of the few cacti that naturally occurs above the frost line in Canada. Reproduction Vivipary includes reproduction via embryos, such as shoots or bulbils, as opposed to germination from a dropped, dormant seed, as is usual in plants. Pseudovivipary A few plants are pseudoviviparous: instead of reproducing with seeds, these monocots can reproduce asexually by creating new plantlets in their spikelets. Examples are seagrass species belonging to the genus Posidonia and the alpine meadow-grass, Poa alpina. See also False vivipary References Asexual reproduction Cloning Plant reproduction
Vivipary
[ "Engineering", "Biology" ]
642
[ "Behavior", "Plant reproduction", "Plants", "Reproduction", "Cloning", "Genetic engineering", "Asexual reproduction" ]
49,099,433
https://en.wikipedia.org/wiki/Tobias%20acid
Tobias acid (2-amino-1-naphthalenesulfonic acid) is an organic compound with the formula C10H6(SO3H)(NH2). It is named after the German chemist Georg Tobias. It is one of several aminonaphthalenesulfonic acids, which are derivatives of naphthalene containing both amine and sulfonic acid functional groups. It is a white solid, although commercial samples can appear otherwise. It is used in the synthesis of azo dyes such as C.I. Acid Yellow 19 and C.I. Pigment Red 49. It is prepared via the Bucherer reaction of 2-hydroxynaphthalene-1-sulfonic acid with ammonia and ammonium sulfite. References External links 2-Amino-1-naphthalenesulfonic acid, NIST Standard Reference Data Program Naphthylamines Sulfonic acids
Tobias acid
[ "Chemistry" ]
192
[ "Functional groups", "Organic compounds", "Sulfonic acids", "Organic compound stubs", "Organic chemistry stubs" ]
49,099,499
https://en.wikipedia.org/wiki/Arun%20Rai
Arun Rai (born 1963) is an Indian-born American scientist. He is a permanent Regent's Professor at the Robinson College of Business at Georgia State University, where he holds the Howard S. Starks Distinguished Chair. Education and Employment Arun Rai earned his integrated master's degree in science and technology from BITS Pilani in 1985, an MBA from Clarion University of Pennsylvania in 1987, and a PhD in Management Information Systems from Kent State University in 1990. He served as an assistant and later as an associate professor at Southern Illinois University at Carbondale from 1990 until 1997, before moving to Georgia State University in 1997. Awards and Recognitions In recognition of his significant global contributions to scientific research in the field of Information Systems, Rai was recognized as a Fellow of the Association for Information Systems (FAIS) in 2010. Rai was named an Information Systems Society (ISS) Distinguished Fellow in 2014. Arun Rai was the recipient of the prestigious LEO award (awarded in 2019 in Munich), which is named for the world's first business application of computing (the Lyons Electronic Office) and recognizes truly outstanding individuals in the field of Information Systems. Service to the Field of Information Systems He served as Editor-in-Chief of Management Information Systems Quarterly (MISQ) for five years, between 2016 and 2020. He has previously served as Senior Editor for Information Systems Research, MIS Quarterly, and the Journal of Strategic Information Systems, and as Associate Editor for several journals (e.g., Journal of Management Information Systems, Management Science, Decision Sciences, IEEE Transactions on Engineering Management, Information Systems Research, MIS Quarterly and Journal of the Association for Information Systems). References External links http://robinson.gsu.edu/profile/arun-rai/ http://arunrai.us/ American people of Indian descent Kent State University alumni 1963 births Living people Georgia State University faculty Information systems researchers Management Information Systems Quarterly editors
Arun Rai
[ "Technology" ]
394
[ "Information systems", "Information systems researchers" ]
51,419,245
https://en.wikipedia.org/wiki/William%20Garrow%20Lettsom
William Garrow Lettsom FRAS (1805 – 14 December 1887) was a British diplomat and scientist. He was instrumental in revealing the text of the secret Treaty of the Triple Alliance between Argentina, the Empire of Brazil and Uruguay. Early life Lettsom was born into a Quaker family at Fulham in March 1805. His paternal grandfather John Coakley Lettsom was a famous physician, philanthropist and abolitionist who held that sea-bathing was good for public health. His maternal grandfather, with whom he lived in his youth, was Sir William Garrow, the celebrated criminal defender, afterwards a judge, who introduced the phrase "presumed innocent until proven guilty" into the common law and whose life inspired the television drama series Garrow's Law. Lettsom was educated at Westminster School and Cambridge University. Literary acquaintance As an undergraduate at Cambridge University Lettsom befriended the author William Makepeace Thackeray and was the (or an) editor of The Snob, in which some of Thackeray's earliest work appeared; Lettsom has been identified as the character Tapeworm in Thackeray's novel Vanity Fair, a diplomat who fancies himself as a ladies' man. Lettsom was well acquainted with the cartoonist George Cruikshank, illustrator of the early works of Charles Dickens. Lettsom was a contributor to various literary periodicals under the pseudonym Dr. Bulgardo. Scientist Lettsom was a competent scientist in an age when this was still possible for an amateur. He was best known as the joint author of Greg and Lettsom's Manual of the Mineralogy of Great Britain and Ireland, which was the most complete and accurate work that had appeared on the mineralogy of the British Isles. First published in 1858, a century later it was still the standard work on the subject, when a reprint was issued. The mineral lettsomite is named after him. But his scientific interests were wider, and he corresponded with the most eminent workers in spectroscopy. He was a member of the London Electrical Society and the author of several papers on geological, electrical and spectroscopic subjects. He was elected a Fellow of the Royal Astronomical Society in 1849. In that year he communicated an experiment in bioelectricity: by making a wound in a finger and inserting the electrode of a galvanometer, while placing the other electrode in contact with an unwounded finger, a current was observed to flow. Lettsom observed that the experiment was repeatable, for he had tried it himself. In 1857, while on diplomatic service in Mexico, he sent to the Royal Entomological Society of London some seeds which, when put in a warm place, became "very lively". The grub responsible had not been investigated scientifically before, wrote Lettsom, and he asked the Society to do so. These were the celebrated Mexican jumping beans. While on diplomatic service in Uruguay he brought a 9-inch Henry Fitz telescope for astronomical observations in the southern hemisphere. Owing to unknown problems he sent the telescope back to New York to be checked and adjusted by the telescope maker. The telescope was received by Lewis Rutherford, pioneer astrophotographer and spectroscopist and associate of the Royal Astronomical Society, who helped Henry Fitz on this task. The telescope was left in Uruguay and is in use to this day by the Uruguayan Amateur Astronomers' Association. Diplomat Having been called to the Bar by Lincoln's Inn, he entered the diplomatic service.
After postings in Berlin, Munich (1831), Washington (1840), Turin (1849) and Madrid (1850) he was appointed secretary to the Legation at Mexico (1854) and became the Chargé d'affaires. In the unreformed British diplomatic service there were no examinations; candidates were appointed by the influence of political friends. This caused criticism. In the House of Commons on 22 May 1855 the motion was: "That it is the opinion of this House that the complete Revision of our Diplomatic Establishment recommended in the Report of the Select Committee of 1850 on Official Salaries should be carried into effect." In this debate Lettsom was used as a case in point to illustrate the defects of the unreformed system. It has been noted that Lettsom, "who had invariably conducted himself to the satisfaction of those who employed him", received one of the slowest promotions in the diplomatic service. A diplomat was expected to be a gentleman and to have a private income whereby he could receive unpaid diplomatic appointments. Hence nine of the twenty-three years of Lettsom's service were unsalaried; promotion was slow. This glacial treatment did not apply, however, to those who had powerful political friends, for they were soon appointed to agreeable capitals at enormous salaries. The motion was carried by 112 votes to 57, Mr Otway MP remarking that "The person who had shown himself to be the fittest man, whether he was the son of a Peer or a tailor, should be chosen". While in Mexico the British government suspended relations with that country on Lettsom's representation, and he was the object of an attempted assassination. Between 1859 and 1869 Lettsom was appointed Consul-General and Chargé d'Affaires to the Republic of Uruguay. Treaty of the Triple Alliance In 1864 and early 1865 Paraguayan forces under the orders of Francisco Solano López seized Brazilian and Argentine shipping and invaded the provinces of the Mato Grosso and Rio Grande do Sul (Brazil) and Corrientes (Argentina). On 1 May 1865 Brazil, Argentina and Uruguay signed the Treaty of the Triple Alliance against Paraguay. By Article XVIII of the Treaty its provisions were to be kept secret until its "principal object" should be obtained. One of its provisions concerned the acquisition by Argentina of large tracts of territory then in dispute between it and Paraguay. Lettsom was not satisfied with this and surreptitiously obtained a copy of the Treaty from the Uruguayan diplomat Dr Carlos de Castro. He forwarded it to London, and the British government ordered it to be translated into English and presented to Parliament. When the text became available in South America there was outrage in several quarters, some because of the Treaty's content, others because it had been published at all. Lettsom has been cited as an exemplar of the nuance with which a substantial part of the British diplomatic corps saw the Paraguayan War. Later Lettsom retired from the diplomatic service in 1869. He never married. He died of acute bronchitis on 14 December 1887. Notes References 1805 births 1887 deaths Amateur scientists Spectroscopists British diplomats British mineralogists Paraguayan War People from Fulham
William Garrow Lettsom
[ "Physics", "Chemistry" ]
1,347
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
51,421,017
https://en.wikipedia.org/wiki/Neoendemism
Neoendemism is one of two sub-categories of endemism, the ecological state of a species being unique to a defined geographic location. Specifically, neoendemic species are those that have recently arisen, through divergence and reproductive isolation or, in plants, through hybridization and polyploidy. Paleoendemism, the other sub-category, refers to species that were formerly widespread but are now restricted to a smaller area. Examples "Darwin's finches", residents of the Galápagos Islands, have been used since the 19th century as an example of how the descendants of one ancestor can evolve through adaptive radiation into several species as they adapt to different conditions on various islands. Charles Darwin wrote: "...one might really fancy that from an original paucity of birds in this archipelago, one species had been taken and modified for different ends." The Galápagos archipelago is also the home of paleoendemic species. The Santa Cruz cypress (Hesperocyparis abramsiana; formerly classified as Cupressus abramsiana) has a geographic range limited to a small section in the Monterey Bay region of California where subsea canyon topography reliably produces summer fog, owing to cold water upwelling. The U.S. Fish and Wildlife Service listed the species as endangered in 1987, due to increasing threats from habitat loss and disruption of natural forest fire regimes. In 2016, the conservation status of the Santa Cruz cypress was reduced to threatened, on the grounds that threats to its habitat had decreased. However, a lengthy section of the 2016 federal report titled "Genetic introgression" (also known as introgressive hybridization) explains how the integrity of this species is also threatened by nearby horticultural plantings of a sister species, Monterey cypress, whose historically native range is nearby: on the opposite side of Monterey Bay. Hybridization is known to occur between the two endemics, as well as with a widely planted sister species native to Arizona: Arizona cypress. The ease of hybridization of cypress species in the American southwest has fostered a parallel history of taxonomic disagreements over where genus and species distinctions should apply. It thus provides a case study of neoendemism in conifers. It also illustrates an element of ongoing human impact, wind-dispersed pollen contamination from horticultural plantings, that cannot easily be corrected to meet conservation goals. See also Speciation Notes References Endemism
Neoendemism
[ "Biology" ]
494
[ "Endemism", "Biodiversity" ]
51,423,025
https://en.wikipedia.org/wiki/Chirality%20Medal
The Chirality Medal was instituted by the Società Chimica Italiana in 1991 to honor internationally recognized scientists who have made a distinguished contribution to all aspects of chirality. It is awarded each year by a Chirality Medal Honor Committee composed of the Chirality International Committee members and the most recent recipients of the Chirality Medal, and is presented to the recipient at the International Conference on Chirality. List of Past Winners See also List of chemistry awards References Chemistry awards
Chirality Medal
[ "Technology" ]
101
[ "Science and technology awards", "Chemistry awards" ]
51,423,466
https://en.wikipedia.org/wiki/Festival%20of%20International%20Virtual%20%26%20Augmented%20Reality%20Stories
Festival of International Virtual & Augmented Reality Stories (FIVARS) is a media festival that showcases stories or narrative forms from around the world using immersive technology that includes virtual reality, augmented reality, live VR performance theater and dance, projection mapping and spatialized audio. It is considered to be Canada's first dedicated virtual or augmented reality stories festival, and was the world's first virtual reality festival dedicated completely and exclusively to narrative pieces. FIVARS is operated by Constant Change Media Group, Inc. and VRTO (Virtual Reality, Toronto - a conference and Meetup group). 2015 - History FIVARS - an acronym for the Festival of International Virtual & Augmented Reality Stories - was created, directed and curated by Keram Malicki-Sanchez, working with technical director Joseph Ellsworth, in Toronto, Ontario, Canada in the summer of 2015. It featured a selection of virtual reality and augmented reality experiences that focus on narrative or a form of storytelling. The earliest recorded event was the Camp Wavelength music festival in Toronto, August 28–30. FIVARS then ran its first official festival on September 20 and 21 at rock music venue "UG3 Live" in downtown Toronto from 11am to 7pm, as well as a showcase at the city's official civic rotunda across from the Toronto International Film Festival. FIVARS returned with a preview pavilion at the VRTO Virtual & Augmented Reality World Conference & Expo in June 2016 and to Camp Wavelength in August of the same year. 2016 Year 2, MSMU The festival continued in 2016 for a second year, September 16–18, 11am to 7pm at MSMU Studios, an abandoned furniture warehouse on the west side of downtown Toronto. The program was listed as a "Public Event You Should Attend" in the syllabus for the Future Cinema program at York University. The festival featured 30 selections - each enhanced by a haptic backpack created by Subpac - in addition to a custom 3D Audio Chamber created by David McKevy, an augmented reality art gallery by artist Daniel Leighton, and presentations by Joergen Geerds of KonceptVR and Daniel Burwen of Jaunt XR. Ellsworth moved into a tech consulting position. 2017 - House of VR, and Jesus In 2017, the FIVARS festival featured 35 pieces from 15 countries and was held at the House of VR in Toronto. World premieres included "Sanctuary" by Shivani Melukar, "City of Ghosts" from director Olivier Asselin and CieAR, and "In His Presence" from director Brenda Colonna and producer Philip Plough, among others. In 2019 the latter piece was covered again in an article in The New Yorker about virtual reality for beginners penned by Patricia Marx, in which Malicki-Sanchez was interviewed and quoted. 2018 and the Resurrection of Historic Site: The Matador The 2018 edition took place at the legendary Matador Ballroom - a famous after-hours club and public hall about which Leonard Cohen wrote the song "Closing Time" in 1992. The ballroom was reopened for one weekend to host the FIVARS festival. The festival showcased 36 new pieces from 12 countries. That year, Malicki-Sanchez was joined by associate producer Stephanie Greenall. 2019 - 5 Years, the Toronto Media Arts Centre For its 5th anniversary, the FIVARS festival moved to the Toronto Media Arts Centre in Toronto and expanded to six galleries, showcasing 31 new selections from 10 countries, including Germany, France, China, Netherlands, USA, UK, and Canada.
The panels featured talks by Ed Callway of AMD, Thomas Wallner from Liquid Cinema VR Inc, director Brett Leonard (The Lawnmower Man), Nimrod Shanit and Timur Musabay (The Holy City VR) and international sculptor and installation artist Marilene Oliver, among others. Malicki-Sanchez was again joined by co-producer and co-curator Stephanie Greenall. 2020 - FIVARS and Spatialized.Events The 2020 COVID-19 pandemic forced most festivals to become virtual events on the internet, and FIVARS turned to lead JanusWeb developer James Baicoianu to help build an online theater for the festival. Malicki-Sanchez and Baicoianu developed a user interface using WebXR, powered by Amazon Web Services, to deliver ultra-high-definition stereoscopic spherical video in the web browser with no buffering delay. Baicoianu tweaked the transcoding settings higher than the maximum that Amazon typically allowed. FIVARS built upon a financial grant awarded through the Canadian Film Centre's incubator to develop its online virtual events. Malicki-Sanchez hosted FIVARS under the same umbrella - Spatialized.Events - as the VRTO 2020 conference. Ultimately they ended up developing the foundations for a new social, 3D platform for the web, featuring many accessibility features as well. These various elements were outlined at length in a Medium article penned by Malicki-Sanchez, and in a 2-hour interview with Kent Bye on the Voices of VR podcast. The 6th edition of the festival featured 39 official selections from 16 countries — including Australia, Brazil, Canada, Finland, France, Germany, Korea, New Zealand, Poland, Qatar, Spain, and Sweden — chosen from among the more than 200 entries, including a documentary about a restored ship once owned by John Steinbeck, and the world premiere of a new single released by ambient music composer Steve Roach, created by Audri Phillips. The event was again produced by Malicki-Sanchez and Stephanie Greenall and featured 28 360-degree video pieces and 10 virtual reality and augmented reality pieces. 2021 - FIVARS in FEB - Pushing the Limits of Web3D In 2021 the festival launched a new event for the start of the year, now running against the Sundance Film Festival, Tribeca Film Festival and SXSW. With the other festivals now also virtual, the playing field was levelled. FIVARS expanded the virtual event to include a variety of themed artistic 3D environments, continuing development on the JanusWeb engine while also incorporating the HighFidelity audio engine launched by Second Life creator Philip Rosedale. The environments were designed by Malicki-Sanchez, the backend work was further developed by Baicoianu, and the event featured projection-mapped domes and a 3-screen immersive theater with 5.1 audio. The immersive theater featured a 24-minute hard rock concept album by music artist Militia Vox. The dome featured visual effects productions by Audri Phillips and Virtuality. Among the selections in the catalog was a stereoscopic spherical documentary by Gary Yost and award-winning film director Adam Loften titled "Inside COVID19" that follows Dr. Josiah Child, battling on the front lines as the novel coronavirus spreads. Another featured selection was a 360-degree music video for the song "Rukh", written and recorded by Malicki-Sanchez and Don Garbutt, featuring Alex Lifeson on guitar, and visually produced by Audri Phillips.
Among the interactive works was a piece produced by Elizabeth Leister, professor of immersive media at Cal Arts, titled "All Her Bodies", featuring stories of trauma and resilience from six different women in a volumetric design produced with Depthkit and Unity software. The festival featured a live discussion with Leister on International Women's Day. The FIVARS in FEB show presented only people's choice awards and grouped all other juried categories together with the fall event, now titled FIVARS in FALL. 2021 - FIVARS in FALL (Hybrid, Los Angeles) In the autumn of 2021 the FIVARS festival became a hybrid event, with an in-person showcase at an art gallery space in West Hollywood, California, from October 15 to 17, followed by an online WebXR event that ran for two weeks, from October 22 to November 2. The festival put in place stringent COVID-19 protocols, including medical-grade sanitization boxes created by Cleanbox technologies and a requirement of a negative COVID-19 test or proof of vaccination. The event featured works from Ecuador, Israel, Taiwan, US, Canada, and other countries. FIVARS was nominated for Event of the Year by the 2nd annual Poly Awards. 2022 - FIVARS in FEB (Hybrid: Toronto, Los Angeles) FIVARS in FEB saw the festival iterate on its Web3D platform, showcasing an even higher resolution of spherical video at 5.7k in the browser. The event also operated pop-ups for in-person attendees to view the interactive content simultaneously at Two-Bit Circus (a virtual reality arcade in Los Angeles operated by Brent Bushnell, son of Atari and Chuck E. Cheese founder Nolan Bushnell) and at Dark Slope in Toronto. 2022 - FIVARS in FALL 2022 (Hybrid: Toronto, Los Angeles) 10th Festival FIVARS in FALL 2022 saw the festival continue to improve its 360 video transcode pipeline, switching to a new encoding method to improve playback across different devices. The event also operated pop-ups for in-person attendees to view the interactive content at Stackt in Toronto - a co-op artist compound and marketplace made entirely out of shipping containers. The 10th showcase for the festival featured a world premiere by Jacquelyn Ford Morie called "When Pixels Were Precious", operating in VRChat as a guided tour in virtual reality of her early digital graphics work from the 1980s. Additionally, the festival showcased a three-part opera "Antigone's Gone" in 360 degrees filmed in India, Indonesia, North Africa and the Middle East, and new works from Wales, Japan, USA, Australia, Canada, UK, Netherlands, France, Germany, and Israel. The festival also debuted new virtual reality theater works: "Offrail", in addition to "MetaMovie: Alien Rescue," and "We Should Meet in Air" - a telephonic immersive experience reenacting a live conversation with author Sylvia Plath. 2023 - FIVARS in FALL 2023 (Hybrid: Toronto, Web3D) 11th Festival FIVARS showed for five days in-person at the IDFK gallery in Toronto. It featured over 65 selections from 25 countries, including Malawi, Netherlands, Taiwan, Ireland, and Argentina - over 24 hours of new content. Selections included "Stay Alive, My Son," and world premieres of "The Carrier," "Errances" and many more. It participated in the first public Metatr@versal Portal Crawl in Web3D - an initiative focused on interoperability. The online version of the festival picked up a nomination for Innovator of the Year at the 2024 Poly Awards, dedicated to acknowledging excellence in WebXR development.
Immersive Media Awards The festival awards prizes for People's Choice in two categories, Immersive Video and Interactive Experience, and introduced a Grand Jury prize and Impact award in 2017. In 2019 the festival added additional awards for Best Experience Design, Best Visual Design, and Best Audio Design. 2015 On Monday, September 21 the festival announced People's Choice awards for two categories at the Cadillac Lounge - a music venue and restaurant in Toronto. PEOPLE’S CHOICE: Best Interactive Experience: Apollo 11 Best Immersive Video: SONAR 2016 PEOPLE’S CHOICE: Best Interactive Experience: Pearl (Patrick Osborne) Best Immersive Video: Help (Justin Lin) JURIED: Grand Jury Award: Real (Connor Hair and Alex Meader) 2017 PEOPLE’S CHOICE: Best Interactive: Alteration Best Immersive (Passive): Guardian of the Guge Kingdom JURIED: Impact Award: Priya's Shakti / Priya's Mirror (Dan Goldman) Grand Jury Prize: Manifest 99 2018 PEOPLE’S CHOICE: Best Interactive: Museum of Symmetry (Paloma Dawkins) Best Immersive (Passive): Going Home (David Beier) JURIED: Impact Award: The Hidden (Annie Lukowski, BJ Schwartz) Grand Jury Prize: Battlescar (Nico Casavecchia, Martin Allais) 2019 PEOPLE’S CHOICE: Best Interactive: After Dan Graham (David Han/Friend Generator) Best Immersive (Passive): 2nd Step (Joerg Courtial) JURIED: Technical Achievement: tx-reverse Excellence in Experience Design: Battlescar (Nico Casavecchia, Martin Allais) Excellence in Sound Design: Unheard (Zhechuan Zhang) Excellence in Visual Design: Ex Anima (Pierre Zandrowicz) Impact Award: State Power (Jeff Stanzler) Grand Jury Prize: The Industry (Mirka Duijn) 2020 PEOPLE’S CHOICE: Best Interactive: Gravity VR (Fabito Rychter, Amir Admoni) Best Immersive (Passive): Warsaw Rising (Tomasz Dobosz) JURIED: Technical Achievement: The Cosmic Laughter of Cucci Binaca (Jonathan Sims) Excellence in Experience Design: Sleeping Eyes (Sojung Bahng, Sungeun Lee) Excellence in Sound Design: Symphony of Noise VR (Michaela Pnacekova) Excellence in Visual Design: Hominidae (Brian Andrews) Impact Award: Indirect Actions (Maranatha Hay) Grand Jury Prize: Minimum Mass (Raqi Syed, Areito Echevarria) 2021 FIVARS in FEB PEOPLE’S CHOICE: Best Interactive: CLAWS (Created by Evan Neiden & Directed by John Ertman) Best Immersive (Passive): Inside COVID 19 - (Gary Yost, Adam Loften) FIVARS in FALL PEOPLE’S CHOICE: Best Interactive: Samsara (Director: Hsin-Chien Huang) Best Immersive (Passive): The Invasion of Normandy Omaha Beach (Director: Uli Futschik) JURIED: Technical Achievement: Dark Threads (Director: Jonathon Corbiere) Excellence in Experience Design: Andy's World (Director: Liquan Liu) Excellence in Sound Design: Symphony (Director: Igor Cortadellas) Excellence in Visual Design: Mind VR Exploration (Director: Deng Zuyun) Outstanding Performance: Lori Kovachevich, Lena's Journey (Director: Wes Evans) Impact Award: Om Devi: Sheroes Revolution (Director: Claudio Casale) Grand Jury Prize: Montegelato (Director: Davide Rapp) 2022 FIVARS in FEB PEOPLE’S CHOICE: Best Interactive: Severance Theory: Welcome to Respite (Lyndsie Scoggin, United States) Best Immersive (Passive): Beescapes (Alan Nguyen, Australia) FIVARS in FALL PEOPLE’S CHOICE: Best Interactive: Namuanki (Kevin Mack, United States) Best Immersive (Passive): Reimagined Vol.
1: Nyssa (Julie Cavaliere, United States) JURIED (Whole Year): Technical Achievement: Namuanki (Kevin Mack, United States) Excellence in Experience Design: Unframed: Hand Puppets, Paul Klee (Martin Charrière, Switzerland) Excellence in Visual Design: The Last Dance (Toshiaki Hanzaki, Japan) Excellence in Sound Design: Kingdom of Plants with David Attenborough (Iona McEwan, UK and USA) Outstanding Performance: Ari Tarr, OffRail (Ari Tarr, United States) Impact Award: Tearless (Gina Kim, South Korea) Grand Jury Prize: Klaxon. My dear sweet Friend (Nikita Shokhov, United States) 2023 PEOPLE’S CHOICE: Best Interactive: PULSAR Best Immersive (Passive): Behind the Dish JURIED: Technical Achievement: VFC Excellence in Experience Design: Broken Spectre Excellence in Visual Design: Night Creatures Excellence in Sound Design: VFC Outstanding Performance: Origins Impact Award: LOU Grand Jury Prize: Stay Alive, My Son References Virtual reality organizations Digital media Film festivals in Toronto Augmented reality Film festivals in Los Angeles
Festival of International Virtual & Augmented Reality Stories
[ "Technology" ]
3,256
[ "Multimedia", "Digital media" ]
51,423,589
https://en.wikipedia.org/wiki/Syntrophus
Syntrophus is a Gram-negative bacterial genus from the family of Syntrophaceae. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). See also List of bacterial orders List of bacteria genera References Further reading Thermodesulfobacteriota Bacteria genera
Syntrophus
[ "Biology" ]
81
[ "Bacteria stubs", "Bacteria" ]
51,424,743
https://en.wikipedia.org/wiki/Dragonfly%2044
Dragonfly 44 is an ultra diffuse galaxy in the Coma Cluster. The galaxy is well known because observations of its velocity dispersion in 2016 suggested a mass of about one trillion solar masses, about the same as the Milky Way. This mass was consistent with counts of about 90 and 70 globular clusters observed around Dragonfly 44 in two different studies. Later, spatially resolved kinematics yielded a mass of about 160 billion solar masses, six times lower than the early measurements and one order of magnitude less than the Milky Way's mass. The most recent work found 20 globular clusters around the galaxy, which is consistent with the current mass measurement. The lack of X-ray emission from the galaxy and its surroundings also shows that the number of globular clusters cannot be as large as was claimed before. The galaxy emits only 1% of the light emitted by the Milky Way. The galaxy was discovered with the Dragonfly Telephoto Array. Early study To determine the amount of dark matter in this galaxy, astronomers in 2016 used the DEIMOS instrument installed on Keck II to measure the velocities of stars over 33.5 hours across six nights, from which the galaxy's mass could be derived. The scientists then used the Gemini Multi-Object Spectrograph on the 8-m Gemini North telescope to reveal a halo of spherical clusters of stars around the galaxy's core. Following this observation, in August 2016, astronomers reported that this galaxy might be made almost entirely of dark matter. See also Low Surface Brightness galaxy (LSB galaxy) NGC 1052-DF2 – a galaxy thought to contain almost no dark matter. Type-cD galaxy or c-Diffuse galaxy type Type-D galaxy or Diffuse-type galaxy References External links Galaxies Coma Cluster Coma Berenices Dark galaxies
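The link between a measured velocity dispersion and a dynamical mass can be illustrated with a simple virial-style estimator (a sketch only; the estimator and the rounded input values below are illustrative assumptions, not the published analysis, which fit full mass models):

\[
M_{1/2} \;\approx\; \frac{4\,\sigma_{\mathrm{los}}^{2}\,R_{e}}{G}
\;\approx\; \frac{4\,(47\ \mathrm{km\,s^{-1}})^{2}\,(4.6\ \mathrm{kpc})}{G}
\;\sim\; 10^{10}\,M_{\odot},
\]

where \(\sigma_{\mathrm{los}}\) is the line-of-sight velocity dispersion and \(R_{e}\) the projected half-light radius; this is only the mass enclosed within \(R_{e}\). The trillion-solar-mass figure quoted above comes from extrapolating such an enclosed mass out to the virial radius with an assumed dark-matter halo profile, which is why revised dispersion measurements lowered the total so sharply.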
Dragonfly 44
[ "Physics", "Astronomy" ]
371
[ "Dark matter", "Unsolved problems in physics", "Constellations", "Dark galaxies", "Coma Berenices" ]
51,425,733
https://en.wikipedia.org/wiki/Adian%E2%80%93Rabin%20theorem
In the mathematical subject of group theory, the Adyan–Rabin theorem is a result that states that most "reasonable" properties of finitely presentable groups are algorithmically undecidable. The theorem is due to Sergei Adyan (1955) and, independently, Michael O. Rabin (1958). Markov property A Markov property P of finitely presentable groups is one for which: P is an abstract property, that is, P is preserved under group isomorphism. There exists a finitely presentable group A₊ with property P. There exists a finitely presentable group A₋ that cannot be embedded as a subgroup in any finitely presentable group with property P. For example, being a finite group is a Markov property: we can take A₊ to be the trivial group and we can take A₋ to be the infinite cyclic group Z. Precise statement of the Adyan–Rabin theorem In modern sources, the Adyan–Rabin theorem is usually stated as follows: Let P be a Markov property of finitely presentable groups. Then there does not exist an algorithm that, given a finite presentation ⟨X | R⟩, decides whether or not the group defined by this presentation has property P. The word 'algorithm' here is used in the sense of recursion theory. More formally, the conclusion of the Adyan–Rabin theorem means that the set of all finite presentations ⟨X | R⟩ (where X is a fixed countably infinite alphabet and R is a finite set of relations in these generators and their inverses) defining groups with property P is not a recursive set. Historical notes The statement of the Adyan–Rabin theorem generalizes a similar earlier result for semigroups by Andrey Markov, Jr., proved by analogous methods. It was also in the semigroup context that Markov introduced the above notion that group theorists came to call the Markov property of finitely presented groups. This Markov, a prominent Soviet logician, is not to be confused with his father, the famous Russian probabilist Andrey Markov after whom Markov chains and Markov processes are named. According to Don Collins, the notion of a Markov property, as defined above, was introduced by William Boone in his Mathematical Reviews review of Rabin's 1958 paper containing Rabin's proof of the Adyan–Rabin theorem. Idea of the proof In modern sources, the proof of the Adyan–Rabin theorem proceeds by a reduction to the Novikov–Boone theorem via a clever use of amalgamated products and HNN extensions. Let P be a Markov property and let A₊ and A₋ be as in the definition of the Markov property above. Let G be a finitely presented group with undecidable word problem, whose existence is provided by the Novikov–Boone theorem. The proof then produces a recursive procedure that, given a word w in the generators of G, outputs a finitely presented group G_w such that if w = 1 in G then G_w is isomorphic to A₊, and if w ≠ 1 in G then G_w contains A₋ as a subgroup. Thus G_w has property P if and only if w = 1 in G (this reduction is summarized in the display below). Since it is undecidable whether w = 1 in G, it follows that it is undecidable whether a finitely presented group has property P. Applications The following properties of finitely presented groups are Markov and therefore are algorithmically undecidable by the Adyan–Rabin theorem: Being the trivial group. Being a finite group. Being an abelian group. Being a free group. Being a nilpotent group. Being a solvable group. Being an amenable group. Being a word-hyperbolic group. Being a torsion-free group. Being a polycyclic group. Being a group with a solvable word problem. Being a residually finite group. Being a group of finite cohomological dimension. Being an automatic group. Being a simple group.
(One can take A₊ to be the trivial group and A₋ to be a finitely presented group with unsolvable word problem, whose existence is provided by the Novikov–Boone theorem. Then Kuznetsov's theorem implies that A₋ does not embed into any finitely presentable simple group. Hence being a finitely presentable simple group is a Markov property.) Being a group of finite asymptotic dimension. Being a group admitting a uniform embedding into a Hilbert space. Note that the Adyan–Rabin theorem also implies that the complement of a Markov property in the class of finitely presentable groups is algorithmically undecidable. For example, the properties of being nontrivial, infinite, nonabelian, etc., for finitely presentable groups are undecidable. However, there do exist examples of interesting undecidable properties such that neither these properties nor their complements are Markov. Thus Collins (1969) proved that the property of being Hopfian is undecidable for finitely presentable groups, while neither being Hopfian nor being non-Hopfian is Markov. See also Higman's embedding theorem Bass–Serre theory References Further reading C. F. Miller, III, Decision problems for groups — survey and reflections. Algorithms and classification in combinatorial group theory (Berkeley, CA, 1989), pp. 1–59, Math. Sci. Res. Inst. Publ., 23, Springer, New York, 1992. Theorems in group theory Geometric group theory Undecidable problems
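The reduction at the heart of the proof idea above can be stated compactly (a sketch in LaTeX, using the notation A₊, A₋, G and G_w introduced there; the actual construction of G_w via amalgamated products and HNN extensions is omitted):

\[
w \;\mapsto\; G_w, \qquad
\begin{cases}
G_w \cong A_{+} & \text{if } w =_G 1,\\
A_{-} \hookrightarrow G_w & \text{if } w \neq_G 1,
\end{cases}
\qquad\Longrightarrow\qquad
G_w \text{ has } P \iff w =_G 1 .
\]

Since the map w ↦ G_w is recursive, any algorithm deciding property P would decide the word problem in G, contradicting the Novikov–Boone theorem.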
Adian–Rabin theorem
[ "Physics", "Mathematics" ]
1,099
[ "Geometric group theory", "Group actions", "Computational problems", "Undecidable problems", "Mathematical problems", "Symmetry" ]
51,425,999
https://en.wikipedia.org/wiki/DNA%20replication%20stress
DNA replication stress refers to the state of a cell whose genome is exposed to various stresses. The events that contribute to replication stress occur during DNA replication, and can result in a stalled replication fork. There are many events that contribute to replication stress, including: Misincorporation of ribonucleotides Unusual DNA structures Conflicts between replication and transcription Insufficiency of essential replication factors Common fragile sites Overexpression or constitutive activation of oncogenes Chromatin inaccessibility ATM and ATR are proteins that help to alleviate replication stress. Specifically, they are kinases that are recruited and activated by DNA damage. The stalled replication fork can collapse if these regulatory proteins fail to stabilize it. When this occurs, reassembly of the fork is initiated in order to repair the damaged DNA end. Replication fork The replication fork consists of a group of proteins that influence the activity of DNA replication. For fork stalling to elicit a checkpoint response, a threshold number of stalled forks and a sufficient length of arrest must be reached. The replication fork is specifically paused due to the stalling of helicase and polymerase activity, which are linked together. In this situation, the fork protection complex (FPC) is recruited to help maintain this linkage. In addition to stalling and maintaining the fork structure, protein phosphorylation can also create a signal cascade for replication restart. The protein Mrc1, which is part of the FPC, transmits the checkpoint signal by interacting with kinases throughout the cascade. When these kinases are lost (from replication stress), an excess of ssDNA is produced, which is necessary for the restarting of replication. Replication block removal DNA interstrand cross-links (ICLs) cause replication stress by blocking replication fork progression. This blockage leads to failure of DNA strand separation and a stalled replication fork. Repair of ICLs can be accomplished by sequential incisions and homologous recombination. In vertebrate cells, replication of an ICL-containing chromatin template triggers recruitment of more than 90 DNA repair and genome maintenance factors. Analysis of the proteins recruited to stalled replication forks revealed a specific set of DNA repair factors involved in the replication stress response. Among these proteins, SLF1 and SLF2 were found to physically link the SMC5/6 DNA repair protein complex to RAD18. The SMC5/6 complex is employed in homologous recombination, and its linkage to RAD18 likely allows recruitment of SMC5/6 to ubiquitination products at sites of DNA damage. Replication-coupled repair Mechanisms that process damaged DNA in coordination with the replisome in order to maintain replication fork progression are considered to be examples of replication-coupled repair. In addition to the repair of DNA interstrand crosslinks, indicated above, multiple DNA repair processes operating in overlapping layers can be recruited to faulty sites depending on the nature and location of the damage. These repair processes include (1) removal of misincorporated bases; (2) removal of misincorporated ribonucleotides; (3) removal of damaged bases (e.g. oxidized or methylated bases) that block the replication polymerase; (4) removal of DNA-protein crosslinks; and (5) removal of double-strand breaks.
Such repair pathways can function to protect stalled replication forks from degradation and allow restart of broken forks, but when deficient can cause replication stress. Single-strand break repair Single-strand breaks are one of the most common forms of endogenous DNA damage. Replication fork collapse at leading strand nicks generates resected single-ended double-strand breaks that can be repaired by homologous recombination. Causation Replication stress is induced by various endogenous and exogenous stresses, which are regularly introduced to the genome. These stresses include, but are not limited to, DNA damage, excessive compacting of chromatin (preventing replisome access), over-expression of oncogenes, or difficult-to-replicate genome structures. Replication stress can lead to genome instability, cancer, and ageing. Uncoordinated replication–transcription conflicts and unscheduled R-loop accumulation are significant contributors. Specific events The events that lead to genome instability occur in the cell cycle prior to mitosis, specifically in the S phase. Disturbance to this phase can generate negative effects, such as inaccurate chromosomal segregation, for the upcoming mitotic phase. The two processes that are responsible for damage to the S phase are oncogenic activation and tumor suppressor inactivation. They have both been shown to speed up the transition from the G1 phase to the S phase, leading to inadequate amounts of DNA replication components. These losses can contribute to the DNA damage response (DDR). Replication stress can be an indicative characteristic of carcinogenesis, as cancer cells typically lack functional DNA repair systems. A physiologically short duration of the G1 phase is also typical of fast replicating progenitors during early embryonic development. Applications in cancer Normal replication stress occurs at low to mild levels and induces genomic instability, which can lead to tumorigenesis and cancer progression. However, high levels of replication stress have been shown to kill cancer cells. In one study, researchers sought to determine the effects of inducing high levels of replication stress on cancer cells. The results showed that with further loss of checkpoints, replication stress increased to a higher level. With this change, the DNA replication of cancer cells may be incomplete or incorrect when entering the mitotic phase, which can eventually result in cell death through mitotic catastrophe. Another study examined how replication stress affected APOBEC3B activity. APOBEC3 (apolipoprotein B mRNA editing enzyme, catalytic polypeptide-like 3) has been seen to mutate the cancer genome in various cancer types. Results from this study show that weakening oncogenic signaling or intensifying DNA replication stress can alter carcinogenic potential, and can be manipulated therapeutically. References DNA replication Molecular genetics
DNA replication stress
[ "Chemistry", "Biology" ]
1,261
[ "DNA replication", "Molecular genetics", "Genetics techniques", "Molecular biology" ]
51,426,494
https://en.wikipedia.org/wiki/Brandolini%27s%20law
Brandolini's law, also known as the bullshit asymmetry principle, is an internet adage coined in 2013 by Alberto Brandolini, an Italian programmer, that emphasizes the effort required to debunk misinformation in comparison to the relative ease of creating it in the first place. The law states: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." The rise of easy popularization of ideas through the internet has greatly increased the relevant examples, but the asymmetry principle itself has long been recognized. Origins The adage was publicly formulated in January 2013 by Alberto Brandolini, an Italian programmer. Brandolini stated that he was inspired by reading Daniel Kahneman's Thinking, Fast and Slow right before watching an Italian political talk show with former Prime Minister Silvio Berlusconi and journalist Marco Travaglio. Examples The persistent false claim that vaccines cause autism is a prime example of Brandolini's law. This famous case involved the British doctor Andrew Wakefield, who wrote an article about a study that claimed to find a relationship between the MMR vaccine and autism. The article's findings were later shown to be false; as a result, Wakefield lost his medical license and the paper was retracted. The false claims, despite extensive investigation showing no relationship, have had a disastrous effect on public health due to vaccine avoidance. Decades of research and attempts to educate the public have failed to eradicate the misinformation, which is still widely believed. In another example, shortly after the Boston Marathon bombing, the claim that a student who had survived the Sandy Hook Elementary School shooting had been killed by the bombing began to spread across social media. Despite many attempts to debunk the rumor, including an investigation by Snopes, the false story was shared by more than 92,000 people and was covered by major news agencies. In an example of Brandolini's law during the COVID-19 pandemic, Jeff Yates, a journalist covering disinformation at Radio-Canada, said of a very popular YouTube video, "He makes all kinds of different claims. I had to check every single one of them. I had to call relevant experts and talk to them. I had to transcribe those interviews. I had to write a text that is legible and interesting to read. It's madness. It took this guy 15 minutes to make his video and it took me three days to fact-check." Due to the rapid dissemination of information on social media, the public is much more susceptible to becoming victims of pseudoscientific trends such as Dr. Mehmet Oz's weight loss supplements and Dr. Joseph Mercola's tanning beds that were meant to reduce one's risk of developing cancer. Although government agencies were able to prevent further sales of these products, millions of dollars had already been spent by consumers and fans. Another example dates back to 2016, when Iceland's football team eliminated England from the UEFA European Championship. Nine months after the victory, Icelandic doctor Ásgeir Pétur Þorvaldsson jokingly tweeted that a baby boom had occurred in Iceland due to this victory. Despite wide media coverage suggesting the statement was true, statistical analysis carried out by curious researchers debunked the notion proposed by Þorvaldsson's tweet. Brandolini's law is also accentuated in larger-scale, higher-tension situations.
Jevin West and Carl Bergstrom, in their analysis of claims that hydroxychloroquine could prevent COVID-19, discuss how, despite hydroxychloroquine being repeatedly shown to be ineffective against illnesses including COVID-19, it was extremely difficult to convince people that it would not prove effective against the highly contagious virus. Because people were so afraid of COVID-19 during its inception and so desperately wanted a cure, widespread social media coverage and the desire for hydroxychloroquine to work made it extremely difficult to disprove the misinformation being presented. Social media According to the Media Education Journal, "Media portrayal of politics has always been subject to contested claims about accuracy and veracity but this has reached a new intensity." With social media, ideas, thoughts, opinions, and beliefs can be shared at an extraordinary speed. Social media amplifies Brandolini's law due to these capabilities. Although there are advantages to social media, there are also disadvantages, especially when considering the role it has in spreading misinformation. News and research can be misinterpreted, and false beliefs can be spread farther and wider than before. Fake news has a tendency to spread faster and wider through social media than true news, peer review is almost nonexistent on social media, and the way some true research is presented through social media can make it easier to misunderstand. Combating the spread of misinformation requires scientists to establish the validity and quality of research, stories, and claims with a rating system. Further applications In 2020, researchers did a study on sensitivity to bullshit and found that "people are more receptive to bullshit, and less sensitive to detecting bullshit, under conditions in which they possess relatively few self-regulatory resources." Within the context of scientific analysis, Brandolini's law can be applied not just to the bullshit being presented but also to bring the bullshitter under scrutiny. When the lying becomes apparent on multiple occasions throughout a stretch of scientific research, the bullshitter becomes more obvious than the bullshit itself, and because the bullshitter loses credibility, the ensuing bullshit is easier to identify. In addition, the challenge of refuting bullshit does not come just from its time-consuming nature, but also from the challenge of defying and confronting one's community. According to Kieron O'Hara's research, which analyzes how bullshitters operate rather than just the bullshit itself, while it still takes substantially more energy to disprove bullshit than to create it, the overall amount of energy needed to discover a bullshitter is less than the amount of energy needed to discover the bullshit itself. Bullshit and Brandolini's law have also been involved in gender issues. The U.S. Department of State defines gendered disinformation as "a subset of misogynistic abuse and violence against women that uses false or misleading gender and sex-based narratives, often with some degree of coordination, to deter women from participating in the public sphere. Both foreign state and non-state actors strategically use gendered disinformation to silence women, discourage online political discourse, and shape perceptions toward gender and the role of women in democracies." This is a specific type of bullshit commonly found in politics where women are the victims of false claims.
Misinformation is frequently used to foster gender inequalities, especially on social platforms and in political matters. Since refuting bullshit takes far more energy than producing it, the lives and jobs of women are especially affected. Mitigating the effects of Brandolini's law Environmental researcher Phil Williamson of the University of East Anglia implored other scientists in 2016 to get online and refute falsehoods about their work whenever possible, despite the difficulty per Brandolini's law. He wrote, "the scientific process doesn't stop when results are published in a peer-reviewed journal. Wider communication is also involved, and that includes ensuring not only that information (including uncertainties) is understood, but also that misinformation and errors are corrected where necessary." Carl T. Bergstrom and Jevin West, researchers on the topic of bullshit, study how to refute the bullshit that takes so much energy to uncover. This complicated process depends on the audience the bullshit is intended to influence, the time and energy a person is willing to invest in this process, and the medium used to do the refuting. In order to refute, one needs the following: Be correct, by including all necessary information, running it by a friend, and double-checking facts. Be charitable by acknowledging the possibility of your own confusion, not attributing malice, and not assigning stupidity. Be clear and coherent about the argument you are making. Admit mistakes and faults. Other techniques for increasing the effectiveness of retracting misinformation include: preexposure warnings, repeated retractions, and providing an alternative narrative. Similar concepts The adage, "A lie can travel halfway around the world before the truth can get its boots on", has taken various forms since as early as 1710. In 1845, Frédéric Bastiat expressed an early notion of this law: "We must confess that our adversaries have a marked advantage over us in the discussion. In very few words they can announce a half-truth; and in order to demonstrate that it is incomplete, we are obliged to have recourse to long and dry dissertations." Prior to Brandolini's definition, Italian blogger Uriel Fanelli and Jonathan Koomey, researcher and creator of Koomey's law, also shared thoughts aligning with the bullshit asymmetry principle. Fanelli stated, "An idiot can create more bullshit than you could ever hope to refute", as loosely translated in Calling Bullshit: The Art of Skepticism in a Data-Driven World. Koomey states, "In fast-changing fields, like information technology, refutations lag nonsense production to a greater degree than in fields with less rapid change." See also Big lie Burden of proof False balance Gish gallop Hitchens's razor List of eponymous laws Poe's law Sealioning References Communication of falsehoods Adages Internet terminology Asymmetry Eponymous rules Eponyms 2013 neologisms
Brandolini's law
[ "Physics", "Technology" ]
1,953
[ "Computing terminology", "Internet terminology", "Symmetry", "Asymmetry" ]
51,427,319
https://en.wikipedia.org/wiki/Grete%20Kellenberger-Gujer
Grete Kellenberger-Gujer (1919–2011) was a Swiss molecular biologist known for her discoveries on genetic recombination and the restriction modification system of DNA. She was a pioneer in the genetic analysis of bacteriophages and contributed to the early development of molecular biology. Biography After earning her matura in classics at the Töchterschule in Zürich, Grete Gujer studied chemistry at the Swiss Federal Institute of Technology in Zurich. There, she met Eduard Kellenberger, a physics student. The couple married in 1945. In 1946 they moved to Geneva, where Eduard Kellenberger began his doctoral thesis under the supervision of Jean Weigle, professor of physics at the University of Geneva. Grete Kellenberger contributed to the development of new methods to prepare and analyse biological samples using an electron microscope, a new technique at the time. After Jean Weigle left for the California Institute of Technology in 1948, Grete Kellenberger took on an increasingly important role in the study of lambda phage and its mutations at the University of Geneva. Her collaboration with Jean Weigle, who returned to Geneva every summer, is demonstrated by their regular correspondence archived at Caltech and by numerous publications. It was Grete Kellenberger who gave Werner Arber, who carried out his PhD between 1954 and 1958, the conceptual and practical basis for his future studies in the genetics of bacteriophages. Grete Kellenberger published several articles with Arber between 1957 and 1966. Grete Kellenberger's major scientific contribution was the discovery that recombination is due to a physical exchange of DNA, and not to selective replication. An article on this subject authored by Grete Kellenberger, Maria Ludovica Zichichi, and Jean Weigle was published in the same issue of the Proceedings of the National Academy of Sciences (PNAS) as the article from Meselson and Weigle on the topic. However, although the data for Grete's article were obtained using a more original approach and were ready months before experiments were concluded in Meselson's laboratory, Grete's article appeared after Meselson's. Maria Ludovica Zichichi worked with Grete Kellenberger from 1960 to 1962, and their collaboration resulted in five publications. In 1965, Grete Kellenberger, her husband Eduard Kellenberger, and members of their research team left for Manhattan, Kansas for a sabbatical year. At Kansas State University, she worked closely with Ulrich Laemmli on phage T4. During this year, Eduard Kellenberger returned to Switzerland without his wife, and they divorced in 1967. Grete Kellenberger-Gujer continued to work in Kansas and later accepted a position as an independent researcher in a lab run by Lucien Caro at the Oak Ridge National Laboratory in Tennessee. In 1971, she returned to Geneva and worked in Lucien Caro's lab in the Department of Molecular Biology until her retirement in 1980. From 1971 to 1975, she worked with Douglas Berg, with whom she shared an interest in the genetic analysis of bacteriophages and the plasmid lambda dv. She published three articles with Berg. She was an atheist who respected religious believers. Awards and recognition In 1979, the Faculty of Medicine of the University of Geneva awarded Grete Kellenberger-Gujer the International Prize Nessim-Habif. An honoris causa doctorate for Grete Kellenberger was discussed at the University of Geneva, but was never awarded.
In 2009, three portraits of Grete Kellenberger-Gujer were created by the Roger Pfund studio and displayed as part of an exhibit commemorating representative individuals from the University of Geneva as part of the university's 450th anniversary. Her portrait has hung since 2010 in the university's Department of Molecular Biology seminar room, across from the portrait of Werner Arber. In September 2016, the Campus magazine of the University of Geneva published an article on the story of Grete Kellenberger-Gujer, without including any of the sociological considerations and cultural and academic gender-bias facts explored in the account on which it was based. References Molecular biologists 1919 births 2011 deaths 20th-century Swiss women scientists Women biologists Swiss atheists Scientists from Zurich 21st-century Swiss biologists 20th-century Swiss biologists 20th-century women scientists 21st-century Swiss women scientists
Grete Kellenberger-Gujer
[ "Chemistry" ]
902
[ "Biochemists", "Molecular biology", "Molecular biologists" ]
51,427,743
https://en.wikipedia.org/wiki/Necropolitics
Necropolitics is a sociopolitical theory of the use of social and political power to dictate how some people may live and how some must die. The deployment of necropolitics creates what Achille Mbembe calls deathworlds, or "new and unique forms of social existence in which vast populations are subjected to living conditions that confer upon them the status of the living dead." Mbembe, author of On the Postcolony, was the first scholar to explore the term in depth in his 2003 article, and later, his 2019 book of the same name. Mbembe identifies racism as a prime driver of necropolitics, stating that racialized people's lives are systemically cheapened and habituated to loss. Concept Necropolitics is often discussed as an extension of biopower, the Foucauldian term for the use of social and political power to control people's lives. Foucault first discusses the concepts of biopower and biopolitics in his 1976 work, The Will to Knowledge: The History of Sexuality Volume I. Foucault presents biopower as a mechanism for "protecting", but acknowledges that this protection often manifests itself as subjugation of non-normative populations. The creation and maintenance of institutions that prioritize certain populations as more valuable is, according to Foucault, how population control has been normalized. Mbembe's concept of necropolitics acknowledges that contemporary state-sponsored death cannot be explained by the theories of biopower and biopolitics, stating that "under the conditions of necropower, the lines between resistance and suicide, sacrifice and redemption, martyrdom and freedom are blurred." Jasbir Puar assumes that discussions of biopolitics and necropolitics must be intertwined, because "the latter makes its presence known at the limits and through the excess of the former; [while] the former masks the multiplicity of its relationships to death and killing in order to enable the proliferation of the latter." Mbembe was clear that necropolitics is more than simply a right to kill (Foucault's droit de glaive). While his view of necropolitics does include various forms of political violence such as the right to impose social or civil death, and the right to enslave others, it is also about the right to expose other people (including a country's own citizens) to mortal danger and death. Cultural theorist Lauren Berlant calls this gradual and persistent process of elimination slow death. According to Berlant, only specific populations are "marked out for wearing out" and the conditions of being worn out and dying are intimately linked with "the ordinary reproduction of [daily] life." Necropolitics is a theory of the walking dead, in which specific bodies are forced to remain in suspended states of being located somewhere between life and death. Mbembe provided a way of analyzing these "contemporary forms of subjugation of life to the power of death." He utilized examples of slavery, apartheid, the colonization of Palestine and the figure of the suicide bomber to illustrate differing forms of necropower over the body (statist, racialized, a state of exception, urgency, martyrdom) and how this reduces people to precarious life conditions. According to Marina Gržinić, necropolitics precisely defines the forms taken by neo-liberal global capitalist cuts in financial support for public health, social and education structures. To her, these extreme cuts present intensive neo-liberal procedures of ‘rationalization’ and ‘civilization’. 
Living death Mbembe's understanding of sovereignty, according to which the living are characterized as "free and equal men and women," informs how he expands the definition of necropolitics to include not only individuals experiencing death, but also those experiencing social or political death. An individual unable to set their own limitations due to social or political interference is then considered, by Mbembe, to not be truly alive, as they are no longer sovereign over their own body. The ability of a state to subjugate populations so much so that they do not have the liberty of autonomy over their lives is an example of necropolitics. This creates zones of existence for the living dead, those who no longer have sovereignty over their own body. R. Guy Emerson writes that necropolitics exists beyond the limits of administrative or state power being imposed on bodies, but also becomes internalized, coming to control behaviors through fear of death or fear of exposure to death worlds. Frédéric Le Marcis discusses how the contemporary African prison system acts as an example of necropolitics. Referring to the concept of living death as "stuckness", Le Marcis details life in prison as a state-sponsored creation of death; some examples he provides include malnourishment through a refusal to feed inmates, a lack of adequate healthcare, and the excusing of certain violent actions between inmates. Racism, discussed by Foucault as an integral component of wielding biopower, is also present in Le Marcis' discussion of the necropolitical prison system, specifically regarding the ways in which murder and suicide are often overlooked among inmates. Mbembe also contends that matters of homicide and suicide within state-governed institutions housing "less valuable" members of the necroeconomy are simply another example of social or political death. Ilana Feldman cites the experience of Palestinian refugees in prolonged displacement as an example. In her ethnographic work, a number of interviewees share how the combination of bad leadership, poor services in refugee camps and lack of international support resulted in a collective climate of hopelessness. Queer and trans necropolitics Jasbir Puar coined the term queer necropolitics to analyze the post-9/11 queer outrage regarding gay bashing and simultaneous queer complicity with Islamophobia. Puar utilizes the discussions of Mbembe to address the dismissal of racism within the LGBTQ+ community as a form of assimilation and distancing from the non-normative populations generally affected by necropolitics. Puar's research centers specifically on the idea that, "the homosexual other is white, the racial other is straight," leaving no room for queer people of color, and ultimately accepting their fate as a non-valuable population destined for social, political, or literal death. Puar's prime example of this lies within the Israeli-Palestinian conflict, in which Israel, considered a haven for LGBTQ individuals, is then spared criticism for its Islamophobic violence against the Palestinian people, particularly queer Palestinians. Many scholars use Puar's queer necropolitics in conjunction with Judith Butler's concept of a grievable life.
Butler's discussion of the HIV/AIDS epidemic acts as a necessary extension of the queer necropolitical field, as it addresses specifically the shortcomings of Foucault's concept of biopower for non-normative, less societally valuable populations, those populations experiencing multiple intersections of Other-ness. Butler connects the lives of queer individuals to that of, "war casualties that the United States inflicts," noting that one cannot publicly grieve these deaths because in order to do so, they must be deemed noteworthy by those who inflicted death upon them. Butler claims that the obituary is a tool for normalizing the necropolitics of queer lives, as well as the lives of people of color. In “Trans Necropolitics: A Transnational Reflection on Violence, Death, and the Trans of Color Afterlife” Snorton and Haritaworn investigate the necropolitical nature of trans people of color's lives. As they make sense of “trans of color afterlife,” Snorton and Haritaworn examine the ‘making dead’ of trans people of color, and especially trans women of color, as an intentionally violent political strategy. This reveals society's incredible failure to protect and care for trans people of color during their lives. Necroviolence Necroviolence is used both as a term among academics, stemming from the necropolitics concept, as well as in some press coverage and in United Nations documentation regarding the Israel–Palestine conflict. As defined by anthropologist Jason De León, necroviolence refers to “violence performed through the specific treatment of corpses” in ways that are offensive and enable "the powerful" to deny responsibility for the death. Israeli "necroviolence" against Palestinians In the Gaza–Israel conflict, Israeli forces were accused of necroviolence in 2020 in Gaza, including violently scooping up a corpse with a bulldozer. Ongoing Israeli necroviolence methods Aymun Moosavi, a student of the MA in International Conflict Studies at King's College London, and Randa May Wahbe, a Harvard PhD candidate in anthropology, have described Israeli necroviolence as including: ‘Ambiguous loss’; withholding Palestinian bodies in freezers, thus preventing Palestinian families from mourning their loved ones The cemeteries of numbers (cemeteries where graves are marked only with numbers and not names, thus dehumanizing the dead) Demolition of historic grave sites Against trans and gender-diverse people In the academic article Necropolitics and Trans Identities: Language Use as Structural Violence, authors Kinsey Stewart and Thomas Delgado argue that language can also harm the dead and that the use of language within medicolegal death investigation reflects and reinforces structural violence against transgender and gender diverse people. Further developments Khaled Al-Kassimi, author of International Law, Necropolitics, and Arab Lives, has recently expanded the theoretical framework of necropolitics by engaging in an epistemic inquiry deconstructing the philosophical and theological reasons as to why Western modernity necessitates deploying "necropower" for onto-epistemic coherence. In doing so, Al-Kassimi mentions that while racism is a material explanation for the exercise of necropolitics, it is the epistemic schism between both "spiritual Arabia" and "secular Europe" that demands the latter to "ban" the former from the juridical order and render them the "living-dead".
By examining Latin-European scholastics of the 15th century and the positivist juridical turn during and after the Enlightenment, which emphasized Reason over Revelation, Al-Kassimi concludes that "Arab epistemology emphasiz[ing] the spiritual rather than simply the material" requires "secular" Western modernity to consign Arab subjects to the exception, rendering them, through technologies of racism and essentialist narratives, as "bare-life", "Muselmann", or the "living-dead"; that is, objects of sovereign necropower.

See also
Moral hazard
Brinkmanship
Desecration of graves
Israeli razing of cemeteries in the Gaza Strip
List of ways people dishonor the dead

References

Further reading
Geller, P.L. (2021). "What Is Necropolitics?". In: Theorizing Bioarchaeology. Bioarchaeology and Social Theory. Springer, Cham. https://doi.org/10.1007/978-3-030-70704-0_5
Rouse, C.M. (2021). "Necropolitics versus Biopolitics: Spatialization, White Privilege, and Visibility during a Pandemic". Cultural Anthropology, Vol. 36, No. 3. https://doi.org/10.14506/ca36.3.03

2003 neologisms Sociological terminology Biopolitics Queer theory Philosophy of death
Necropolitics
[ "Engineering", "Biology" ]
2,401
[ "Biopolitics", "Genetic engineering" ]
51,428,892
https://en.wikipedia.org/wiki/C20H24O3
The molecular formula C20H24O3 (molar mass: 312.40 g/mol) may refer to:
Estrone acetate
Ethinyl estriol (EE3), or 17α-ethynylestriol
Trenbolone acetate

Molecular formulas
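The quoted molar mass can be reproduced by summing atomic weights over the formula. The following is a minimal Python sketch of that arithmetic, assuming rounded standard atomic weights (the exact figure obtained depends on which atomic-weight values are used):

# Molar mass check for C20H24O3 using rounded standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 20, "H": 24, "O": 3}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # 312.41 g/mol, matching the quoted 312.40 within rounding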
C20H24O3
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
51,428,919
https://en.wikipedia.org/wiki/Endoplasmic%20reticulum%20membrane%20protein%20complex
The endoplasmic reticulum membrane protein complex (EMC) is a putative endoplasmic reticulum-resident membrane protein (co-)chaperone. The EMC is evolutionarily conserved in eukaryotes (animals, plants, and fungi), and its initial appearance might reach back to the last eukaryotic common ancestor (LECA). Many aspects of EMC biology and molecular function remain to be studied.

Composition and structure
The EMC consists of up to 10 subunits (EMC1-EMC4, MMGT1, EMC6-EMC10), of which only two (EMC8/9) are homologous proteins. Seven of the ten subunits (EMC1, EMC3, EMC4, MMGT1, EMC6, EMC7, EMC10) are predicted to contain at least one transmembrane domain (TMD), whereas EMC2, EMC8 and EMC9 contain no predicted transmembrane domains and are therefore likely to interact with the rest of the EMC on the cytosolic face of the endoplasmic reticulum (ER). EMC proteins are thought to be present in the mature complex in a 1:1 stoichiometry.

Subunit primary structure
The majority of EMC proteins (EMC1/3/4/MMGT1/6/7/10) contain at least one predicted TMD. EMC1, EMC7 and EMC10 contain an N-terminal signal sequence.

EMC1
EMC1, also known as KIAA0090, contains a single TMD (aa 959-979) and pyrroloquinoline quinone (PQQ)-like repeats (aa 21-252), which could form a β-propeller domain. The TMD is part of a larger domain (DUF1620). The functions of the PQQ and DUF1620 domains in EMC1 remain to be determined.

EMC2
EMC2 (TTC35) harbours three tetratricopeptide repeats (TPR1/2/3). TPRs have been shown to mediate protein-protein interactions and are found in a large variety of proteins of diverse function. The function of the TPRs in EMC2 is unknown.

EMC8 and EMC9
EMC8 and EMC9 show marked sequence identity (44.72%) at the amino acid level. Both proteins are members of the UPF0172 family, members of which (e.g. TLA1) are involved in regulating chlorophyll antenna size.

Posttranslational modifications
Several subunits of the mammalian EMC (mEMC) are posttranslationally modified. EMC1 contains three predicted N-glycosylation sites at positions 370, 818, and 913. EMC10 features a predicted N-glycosylation consensus motif at position 182.

Evolutionary conservation
EMC proteins are evolutionarily conserved in eukaryotes. No homologues have been reported in prokaryotes. The EMC has therefore been suggested to have its evolutionary roots in the last eukaryotic common ancestor (LECA).

Function
Protein folding and degradation at the ER
The EMC was first identified in a genetic screen in yeast for factors involved in protein folding in the ER. Accordingly, deletion of individual EMC subunits correlates with the induction of an ER stress response in various model organisms. However, it is worth noting that in human osteosarcoma cells (U2OS cells), deletion of EMC6 does not appear to cause ER stress. When overexpressed, several subunits of the mEMC have been found to physically interact with ERAD components (UBAC2, DER1, DER2), and genetic screens in yeast have shown EMC subunits to be enriched alongside ERAD genes. Taken together, these findings imply a role for the mEMC in protein homeostasis.

Chaperone
Maturation of polytopic membrane proteins
Several lines of evidence implicate the EMC in promoting the maturation of polytopic membrane proteins. The EMC is necessary to correctly and efficiently insert the first transmembrane domain (also called the signal anchor) of G-protein coupled receptors (GPCRs) such as the beta-adrenergic receptor.
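One way to make the notion of a marginally hydrophobic signal anchor concrete, along with the "moderate hydrophobicity" criterion discussed next, is a simple hydropathy average over the TMD. The following minimal Python sketch uses the standard Kyte-Doolittle scale; the two example sequences are invented for illustration and are not real EMC substrates:

# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def mean_hydropathy(tmd):
    # Average hydropathy of a putative TMD; higher means more hydrophobic.
    return sum(KD[aa] for aa in tmd) / len(tmd)

# Invented example sequences (not real EMC clients): a charged, marginally
# hydrophobic TMD of the kind proposed to favour EMC involvement, and a
# uniformly hydrophobic TMD expected to insert without EMC assistance.
marginal = "ALSKTLVDLIGLAVSQALNET"
strong = "LLVLLAIVFLVGLLALVVIGL"
for name, tmd in [("marginal", marginal), ("strong", strong)]:
    print(name, round(mean_hydropathy(tmd), 2))  # ~0.76 vs ~3.32

Dedicated TMD predictors are used in practice rather than a bare average, but the contrast between the two scores captures the hydrophobicity feature described below.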
Features of transmembrane domains that seem to favour EMC involvement are moderate hydrophobicity and an ambiguous distribution of TMD-flanking charges. The substrate spectrum of the EMC appears to extend beyond GPCRs: unifying properties of putative EMC clients are unusually hydrophilic transmembrane domains containing charged residues. However, mechanistic detail on how the EMC assists in orienting and inserting such problematic transmembrane domains is lacking. In many cases, the evidence implicating the EMC in the biogenesis of a given protein consists of co-depletion of that protein when individual subunits of the EMC are disrupted. A number of putative EMC clients are listed below, but the manner in which the EMC engages them, and whether they depend on the EMC directly or indirectly, merits further investigation:
Loss of EMC function destabilises the enzyme sterol-O-acyltransferase 1 (SOAT1), an enzyme obligatory for cellular cholesterol storage and detoxification; by overseeing the biogenesis of SOAT1 and of squalene synthase (SQS), the EMC helps to maintain cellular cholesterol homeostasis. For SQS, the enzyme controlling the committed step in cholesterol biosynthesis, the EMC has been shown to be sufficient for integration into liposomes in vitro.
Depletion of EMC6 and additional EMC proteins reduces the cell surface expression of nicotinic acetylcholine receptors in C. elegans.
Knockdown of EMC2 has been observed to correlate with decreased CFTRΔF508 levels. EMC2 contains three tetratricopeptide repeat (TPR) domains; TPRs have been shown to mediate protein-protein interactions and are found in co-chaperones of Hsp90. A role for EMC2 in mediating interactions with cytosolic chaperones is therefore conceivable, but remains to be demonstrated.
Loss of EMC subunits in D. melanogaster correlates with strongly reduced cell surface expression of rhodopsin-1 (Rh1), an important polytopic light receptor in the plasma membrane.
In yeast, the EMC has been implicated in the maturation or trafficking of the polytopic model substrate Mrh1p-GFP.
Recently, structural and functional studies have identified a holdase function for the EMC in the assembly and maturation of the voltage-gated calcium channel CaV1.2.

Insertion of proteins into the ER
The EMC has been shown to be involved in a pathway mediating the membrane integration of tail-anchored proteins containing unusually hydrophilic or amphipathic transmembrane domains. This pathway appears to operate in parallel to the conventional Get/Trc40 targeting pathway.

Other suggested functions
Mitochondrial tethering
In S. cerevisiae, the EMC has been reported by Lahiri and colleagues to constitute a tethering complex between the ER and mitochondria. Close apposition of the two organelles is a prerequisite for the biosynthesis of phosphatidylethanolamine (PE), in which its precursor phosphatidylserine (PS) is imported from the ER into mitochondria; such lipid exchange had previously been proposed by Jean Vance as evidence for a membrane tether between these two organelles. Disruption of the EMC by genetic deletion of several of its subunits was shown to reduce ER-mitochondrial tethering and to impair transfer of PS from the ER.

Autophagosome formation
EMC6 interacts with the small GTPase RAB5A and with Beclin-1, both regulators of autophagosome formation. This observation suggests that the mEMC, and not just EMC6, might be involved in regulating RAB5A and Beclin-1.
However, the molecular mechanism underlying the proposed modulation of autophagosome formation remains to be established.

Involvement in disease
The mEMC has repeatedly been implicated in a range of pathologies, including the susceptibility of cells to viral infection, cancer, and a congenital syndrome of severe physical and mental disability. None of these pathologies appears to arise from disruption of a single molecular pathway regulated by the mEMC. Consequently, the involvement of the mEMC in these pathologies is of only limited use for defining the primary function of this complex.

As a host factor in viral infections
Large-scale genetic screens implicate several mEMC subunits in modulating the pathogenicity of flaviviruses such as West Nile virus (WNV), Zika virus (ZV), Dengue fever virus (DFV), and yellow fever virus (YFV). In particular, loss of several mEMC subunits (e.g. EMC2, EMC3) leads to inhibition of WNV-induced cell death; however, WNV was still able to infect and proliferate in cells lacking EMC subunits. The authors made a similar observation regarding the role of the mEMC in the cell-killing capacity of Saint Louis encephalitis virus. The underlying cause of the resistance of EMC2/3-deficient cells to WNV-induced cytotoxicity remains elusive.

Cancer
Dysregulation of individual mEMC subunits correlates with the severity of certain types of cancer. Expression of hHSS1, a secreted splice variant of EMC10 (HSM1), reduces the proliferation and migration of glioma cell lines. Overexpression of EMC6 has been found to reduce the proliferation of glioblastoma cells in vitro and in vivo, whereas its RNAi-mediated depletion has the opposite effect. This indicates that the mEMC assumes important functions in cancerous cells during the establishment of malignant tumours.

Pathologies
Mutations in the EMC1 gene have been associated with retinal dystrophy and with a severe systemic disease phenotype involving developmental delay, cerebellar atrophy, scoliosis and hypotonia. Similarly, a homozygous missense mutation (c.430G>A, p.Ala144Thr) in the EMC1 gene has been correlated with the development of retinal dystrophy. Even though a set of disease-causing mutations in EMC1 has been mapped, their effects on EMC1 function and structure remain to be studied.

References

Proteins
Endoplasmic reticulum membrane protein complex
[ "Chemistry" ]
2,332
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]