| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
9,629,714 | https://en.wikipedia.org/wiki/Adenine%20nucleotide%20translocator | Adenine nucleotide translocator (ANT), also known as ADP/ATP translocase, ADP/ATP carrier protein (AAC) or mitochondrial ADP/ATP carrier, exchanges free ATP with free ADP across the inner mitochondrial membrane. ANT is the most abundant protein in the inner mitochondrial membrane and belongs to the mitochondrial carrier family.
Free ADP is transported from the cytoplasm to the mitochondrial matrix, while ATP produced from oxidative phosphorylation is transported from the mitochondrial matrix to the cytoplasm, thus providing the cell with its main energy currency. ADP/ATP translocases are exclusive to eukaryotes and are thought to have evolved during eukaryogenesis. Human cells express four ADP/ATP translocases: SLC25A4, SLC25A5, SLC25A6 and SLC25A31, which constitute more than 10% of the protein in the inner mitochondrial membrane. These proteins are classified under the mitochondrial carrier superfamily.
Types
In humans, there exist three paralogous ANT isoforms:
SLC25A4 – found primarily in heart and skeletal muscle
SLC25A5 – primarily expressed in fibroblasts
SLC25A6 – primarily expressed in liver
Structure
ANT has long been thought to function as a homodimer, but this concept was challenged by the projection structure of the yeast Aac3p solved by electron crystallography, which showed that the protein was three-fold symmetric and monomeric, with the translocation pathway for the substrate through the centre. The atomic structure of the bovine ANT confirmed this notion, and provided the first structural fold of a mitochondrial carrier. Further work has demonstrated that ANT is a monomer in detergents and functions as a monomer in mitochondrial membranes.
ADP/ATP translocase 1 is the major AAC in human cells and the archetypal protein of this family. It has a mass of approximately 30 kDa and consists of 297 residues. Its six transmembrane α-helices form a barrel with a deep cone-shaped depression, accessible from the outside, where the substrate binds. The binding pocket, conserved throughout most isoforms, mostly consists of basic residues that allow for strong binding to ATP or ADP, and has a maximal diameter of 20 Å and a depth of 30 Å. Indeed, arginine residues 96, 204, 252, 253, and 294, as well as lysine 38, have been shown to be essential for transporter activity.
Function
ADP/ATP translocase transports ATP synthesized from oxidative phosphorylation into the cytoplasm, where it can be used as the principal energy currency of the cell to power thermodynamically unfavorable reactions. After the consequent hydrolysis of ATP into ADP, ADP is transported back into the mitochondrial matrix, where it can be rephosphorylated to ATP. Because a human typically exchanges the equivalent of their own mass of ATP on a daily basis, ADP/ATP translocase is an important transporter protein with major metabolic implications.
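To make the scale of this turnover concrete, here is a back-of-the-envelope sketch in Python. The body mass, ATP pool size, and molar mass are assumed round numbers for illustration, not figures from the article.

```python
# Rough daily ATP turnover, assuming a 70 kg adult recycles roughly their
# own body mass in ATP per day and carries a total ATP pool of ~0.1 mol.
ATP_MOLAR_MASS = 507.2    # g/mol, free-acid ATP
BODY_MASS_G = 70_000.0    # assumed adult body mass
ATP_POOL_MOL = 0.1        # assumed whole-body ATP pool (~50 g)

moles_recycled_per_day = BODY_MASS_G / ATP_MOLAR_MASS
recycles_per_molecule = moles_recycled_per_day / ATP_POOL_MOL

print(f"ATP recycled: ~{moles_recycled_per_day:.0f} mol/day")
print(f"Each adenine nucleotide is recycled ~{recycles_per_molecule:.0f} times/day")
```

On these assumptions each adenine nucleotide is rephosphorylated on the order of a thousand times per day, which is why ANT throughput matters metabolically.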
ANT transports the free, i.e. deprotonated, non-magnesium- and non-calcium-bound forms of ADP and ATP, in a 1:1 ratio. Transport is fully reversible, and its directionality is governed by the concentrations of its substrates (ADP and ATP inside and outside mitochondria), the chelators of the adenine nucleotides, and the mitochondrial membrane potential. The relationship of these parameters can be expressed by an equation solving for the "reversal potential of the ANT" (Erev_ANT), the value of the mitochondrial membrane potential at which no net transport of adenine nucleotides takes place by the ANT. The ANT and the F0-F1 ATP synthase are not necessarily in directional synchrony.
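As a hedged illustration of such a relation, the sketch below assumes an idealized equilibrium condition for the electrogenic 1:1 exchange (ADP³⁻ in, ATP⁴⁻ out, i.e. one net negative charge leaving the matrix per cycle). The functional form and the concentrations are simplifying assumptions for illustration, not the published Erev_ANT equation, which also accounts for nucleotide chelation.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # absolute temperature, K (37 degC)

def erev_ant(atp_m, adp_m, atp_c, adp_c):
    """Membrane potential (matrix minus cytosol, volts) at which the net
    1:1 ADP3-/ATP4- exchange stops, assuming ideal free-nucleotide
    activities and one net charge translocated per cycle."""
    return -(R * T / F) * math.log((atp_c * adp_m) / (atp_m * adp_c))

# Hypothetical free-nucleotide concentrations in mM (purely illustrative):
erev = erev_ant(atp_m=2.0, adp_m=1.0, atp_c=2.5, adp_c=0.05)
print(f"Erev_ANT ~ {1000 * erev:.0f} mV")  # ~ -86 mV for these numbers
```

At membrane potentials more negative than this Erev_ANT the carrier runs forward (ATP export); at less negative potentials it reverses.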
Apart from exchange of ADP and ATP across the inner mitochondrial membrane, the ANT also exhibits an intrinsic uncoupling activity.
ANT is an important modulatory and possible structural component of the Mitochondrial Permeability Transition Pore, a channel involved in various pathologies whose function still remains elusive. Karch et al. propose a "multi-pore model" in which ANT is at least one of the molecular components of the pore.
Translocase mechanism
Under normal conditions, ATP and ADP cannot cross the inner mitochondrial membrane due to their high negative charges, but ADP/ATP translocase, an antiporter, couples the transport of the two molecules. The depression in ADP/ATP translocase alternatively faces the matrix and the cytoplasmic sides of the membrane. ADP in the intermembrane space, coming from the cytoplasm, binds the translocase and induces its eversion, resulting in the release of ADP into the matrix. Binding of ATP from the matrix induces eversion and results in the release of ATP into the intermembrane space, subsequently diffusing to the cytoplasm, and concomitantly brings the translocase back to its original conformation. ATP and ADP are the only natural nucleotides recognized by the translocase.
The net process is denoted by:
ADP³⁻(cytoplasm) + ATP⁴⁻(matrix) → ADP³⁻(matrix) + ATP⁴⁻(cytoplasm)
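The alternating-access cycle just described can be summarized as a toy two-state machine. This is a deliberately minimal sketch of the conformational logic only; it ignores binding affinities, reversibility, and the electrogenicity captured in the net equation above.

```python
# Toy alternating-access model of the ADP/ATP antiporter (illustrative only).
class Translocase:
    def __init__(self):
        self.state = "cytoplasmic-open"  # depression faces the intermembrane space

    def exchange_cycle(self):
        """One forward cycle: one ADP imported, one ATP exported."""
        assert self.state == "cytoplasmic-open"
        # ADP from the intermembrane space binds and triggers eversion:
        self.state = "matrix-open"       # ADP is released into the matrix
        # ATP from the matrix binds and triggers eversion back:
        self.state = "cytoplasmic-open"  # ATP is released toward the cytoplasm
        return {"ADP_to_matrix": 1, "ATP_to_cytoplasm": 1}

carrier = Translocase()
print(carrier.exchange_cycle())  # {'ADP_to_matrix': 1, 'ATP_to_cytoplasm': 1}
```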
ADP/ATP exchange is energetically expensive: about 25% of the energy yielded from electron transfer by aerobic respiration, or one hydrogen ion, is consumed to regenerate the membrane potential that is tapped by ADP/ATP translocase.
The translocator cycles between two states, called the cytoplasmic and matrix state, opening up to these compartments in an alternating way. There are structures available that show the translocator locked in a cytoplasmic state by the inhibitor carboxyatractyloside, or in the matrix state by the inhibitor bongkrekic acid.
Alterations
Rare but severe diseases such as mitochondrial myopathies are associated with dysfunctional human ADP/ATP translocase. Mitochondrial myopathies (MM) refer to a group of clinically and biochemically heterogeneous disorders that share common features of major mitochondrial structural abnormalities in skeletal muscle. The major morphological hallmark of MM is ragged-red fibers containing peripheral and intermyofibrillar accumulations of abnormal mitochondria. In particular, autosomal dominant progressive external ophthalmoplegia (adPEO) is a common disorder associated with dysfunctional ADP/ATP translocase and can induce paralysis of muscles responsible for eye movements. General symptoms are not limited to the eyes and can include exercise intolerance, muscle weakness, hearing deficit, and more. adPEO shows Mendelian inheritance patterns but is characterized by large-scale mitochondrial DNA (mtDNA) deletions. mtDNA contains few introns, or non-coding regions of DNA, which increases the likelihood of deleterious mutations. Thus, any modification of ADP/ATP translocase mtDNA can lead to a dysfunctional transporter, particularly at residues involved in the binding pocket, which will compromise translocase efficacy. MM is commonly associated with dysfunctional ADP/ATP translocase, but MM can be induced through many different mitochondrial abnormalities.
Inhibition
ADP/ATP translocase is very specifically inhibited by two families of compounds. The first family, which includes atractyloside (ATR) and carboxyatractyloside (CATR), binds to the ADP/ATP translocase from the cytoplasmic side, locking it in a cytoplasmic-side open conformation. In contrast, the second family, which includes bongkrekic acid (BA) and isobongkrekic acid (isoBA), binds the translocase from the matrix side, locking it in a matrix-side open conformation. The negatively charged groups of the inhibitors bind strongly to the positively charged residues deep within the binding pocket. The high affinity (Kd in the nanomolar range) makes each inhibitor a deadly poison by obstructing cellular respiration/energy transfer to the rest of the cell.
History
In 1955, Siekevitz and Potter demonstrated that adenine nucleotides were distributed in cells in two pools located in the mitochondrial and cytosolic compartments. Shortly thereafter, Pressman hypothesized that the two pools could exchange nucleotides. However, the existence of an ADP/ATP transporter was not postulated until 1964 when Bruni et al. uncovered an inhibitory effect of atractyloside on the energy-transfer system (oxidative phosphorylation) and ADP binding sites of rat liver mitochondria.
Soon after, a great deal of research went into proving the existence of ADP/ATP translocase and elucidating its link to energy transport. The cDNA of ADP/ATP translocase was sequenced for bovine in 1982 and for the yeast species Saccharomyces cerevisiae in 1986, before Battini et al. finally sequenced a cDNA clone of the human transporter in 1989. The homology in the coding sequences between human and yeast ADP/ATP translocase was 47%, while the bovine and human sequences were remarkably similar, matching at 266 out of 297 residues, or 89.6%. In both cases, the most conserved residues lie in the ADP/ATP substrate binding pocket.
See also
Mitochondrial carrier
Cellular respiration
Oxidative phosphorylation
References
External links
Solute carrier family
Cellular respiration | Adenine nucleotide translocator | [
"Chemistry",
"Biology"
] | 2,057 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
9,629,827 | https://en.wikipedia.org/wiki/Battle%20of%20Locus%20Castorum | The Battle of Locus Castorum took place during the Year of the Four Emperors between the armies of the rival Roman Emperors Otho and Vitellius. Locus Castorum was a village that existed in the 1st century Roman Empire roughly 15 kilometers from Cremona. It was also referred to as "the Castors" and "at Castor's." The village may have been the location of a temple to the Gemini twins, Castor and Pollux.
The forces of Otho met the forces of Vitellius there. It was one of three early victories for Otho (the first being in the Alps and the second being near Placentia), but Vitellius was eventually the victor at Betriacum.
Locus Castorum is mentioned in Suetonius' The Lives of the Twelve Caesars (Life of Otho, 9) and Tacitus' Histories (II.24).
References
Locus Castorum
69
Year of the Four Emperors
Castor and Pollux | Battle of Locus Castorum | [
"Astronomy"
] | 199 | [
"Castor and Pollux",
"Astronomical myths"
] |
9,629,917 | https://en.wikipedia.org/wiki/Building-integrated%20photovoltaics | Building-integrated photovoltaics (BIPV) are photovoltaic materials that are used to replace conventional building materials in parts of the building envelope such as the roof, skylights, or façades. They are increasingly being incorporated into the construction of new buildings as a principal or ancillary source of electrical power, although existing buildings may be retrofitted with similar technology. The advantage of integrated photovoltaics over more common non-integrated systems is that the initial cost can be offset by reducing the amount spent on building materials and labor that would normally be used to construct the part of the building that the BIPV modules replace. In addition, BIPV allows for more widespread solar adoption when the building's aesthetics matter and traditional rack-mounted solar panels would disrupt the intended look of the building.
The term building-applied photovoltaics (BAPV) is sometimes used to refer to photovoltaics that are retrofit – integrated into the building after construction is complete. Most building-integrated installations are actually BAPV. Some manufacturers and builders differentiate new construction BIPV from BAPV.
History
PV applications for buildings began appearing in the 1970s. Aluminum-framed photovoltaic modules were connected to, or mounted on, buildings that were usually in remote areas without access to an electric power grid. In the 1980s photovoltaic module add-ons to roofs began being demonstrated. These PV systems were usually installed on utility-grid-connected buildings in areas with centralized power stations. In the 1990s BIPV construction products specially designed to be integrated into a building envelope became commercially available. A 1998 doctoral thesis by Patrina Eiffert, entitled An Economic Assessment of BIPV, hypothesized that one day there would be an economic value for trading Renewable Energy Credits (RECs). A 2011 economic assessment and brief overview of the history of BIPV by the U.S. National Renewable Energy Laboratory suggests that there may be significant technical challenges to overcome before the installed cost of BIPV is competitive with photovoltaic panels. However, there is a growing consensus that through their widespread commercialization, BIPV systems will become the backbone of the zero energy building (ZEB) European target for 2020. Despite the technical promise, social barriers to widespread use have also been identified, such as the conservative culture of the building industry and integration with high-density urban design. These authors suggest that enabling long-term use will likely depend on effective public policy decisions as much as on technological development.
Forms
The majority of BIPV products use one of two technologies: crystalline silicon solar cells (c-Si) or thin-film solar cells. c-Si technologies comprise wafers of crystalline silicon, which generally operate at a higher efficiency than thin-film cells but are more expensive to produce. The applications of these two technologies can be categorized into five main types of BIPV products:
Standard in-roof systems. These generally take the form of applicable strips of photovoltaic cells.
Semi-transparent systems. These products are typically used in greenhouse or cold-weather applications where solar energy must simultaneously be captured and allowed into the building.
Cladding systems. There are a broad range of these systems; their commonality being their vertical application on a building façade.
Solar Tiles and Shingles. These are the most common BIPV systems as they can easily be swapped out for conventional shingle roof finishes.
Flexible Laminates. Commonly procured in thin-sheet form, these products can be adhered to a variety of forms, primarily roof forms.
With the exception of flexible laminates, each of the above categories can utilize either c-Si or thin-film technologies; flexible laminates can use only thin-film technology, which renders thin-film BIPV products ideal for advanced design applications that have a kinetic aspect.
Between the five categories, BIPV products can be applied in a variety of scenarios: pitched roofs, flat roofs, curved roofs, semi-transparent façades, skylights, shading systems, external walls, and curtain walls, with flat roofs and pitched roofs being the most ideal for solar energy capture. The ranges of roofing and shading system BIPV products are most commonly used in residential applications whereas the wall and cladding systems are most commonly used in commercial settings. Overall, roofing BIPV systems currently have more of the market share and are generally more efficient than façade and cladding BIPV systems due to their orientation to the sun.
Building-integrated photovoltaic modules are available in several forms:
Flat roofs
The most widely installed to date is an amorphous thin-film solar cell integrated into a flexible polymer module, attached to the roofing membrane using an adhesive sheet between the solar module backsheet and the roofing membrane. Copper indium gallium selenide (CIGS) technology from a US-based company can now deliver cell efficiencies of 17%, and a UK-based company has achieved comparable building-integrated module efficiencies by fusing these cells into TPO single-ply membranes.
Pitched roofs
Solar roof tiles are (ceramic) roof tiles with integrated solar modules. The ceramic solar roof tile was developed and patented by a Dutch company in 2013.
Modules shaped like multiple roof tiles.
Solar shingles are modules designed to look and act like regular shingles, while incorporating a flexible thin film cell.
It extends normal roof life by protecting insulation and membranes from ultraviolet rays and water degradation. It does this by eliminating condensation because the dew point is kept above the roofing membrane.
Metal pitched roofs (both structural and architectural) are now being integrated with PV functionality, either by bonding a free-standing flexible module or by heat and vacuum sealing of the CIGS cells directly onto the substrate.
Façade
Façades can be installed on existing buildings, giving old buildings a whole new look. These modules are mounted on the façade of the building, over the existing structure, which can increase the appeal of the building and its resale value.
Glazing
Photovoltaic windows are (semi)transparent modules that can be used to replace a number of architectural elements commonly made with glass or similar materials, such as windows and skylights. In addition to producing electric energy, these can create further energy savings due to superior thermal insulation properties and solar radiation control.
Photovoltaic Stained Glass: The integration of energy harvesting technologies into homes and commercial buildings has opened up additional areas of research which place greater considerations on the end product's overall aesthetics. While the goal is still to maintain high levels of efficiency, new developments in photovoltaic windows also aim to offer consumers optimal levels of glass transparency and/or the opportunity to select from a range of colors. Different colored 'stained glass' solar panels can be optimally designed to absorb specific ranges of wavelengths from the broader spectrum. Colored photovoltaic glass has been successfully developed using semi transparent, perovskite, and dye sensitized solar cells.
Plasmonic solar cells that absorb and reflect colored light have been created with Fabry–Pérot etalon technology. These cells are composed of "two parallel reflecting metal films and a dielectric cavity film between them." The two electrodes are made from Ag and the cavity between them is Sb2O3-based. Modifying the thickness and refractive index of the dielectric cavity changes which wavelength will be most optimally absorbed. Matching the color of the absorption-layer glass to the specific portion of the spectrum that the cell's thickness and refractive index are best tuned to transmit both enhances the aesthetic of the cell by intensifying its color and helps to minimize photocurrent losses. Transmittances of 34.7% and 24.6% were achieved in red and blue light devices, respectively. Blue devices can convert 13.3% of the light absorbed into power, making them the most efficient across all colored devices developed and tested.
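The wavelength selectivity described above follows from the standard Fabry–Pérot resonance condition m·λ = 2·n·d at normal incidence for an idealized lossless cavity. The sketch below shows how thickening the cavity shifts the visible resonance toward the red; the refractive index and thicknesses are assumed illustrative values, not parameters of the cited devices.

```python
# Idealized Fabry-Perot resonances: peaks where an integer number of
# half-wavelengths fits the optical path, m * lambda = 2 * n * d.
def visible_resonances_nm(n_cavity, thickness_nm, band=(380.0, 750.0)):
    """Return (order, wavelength) pairs falling in the visible band."""
    peaks, m = [], 1
    while True:
        lam = 2.0 * n_cavity * thickness_nm / m
        if lam < band[0]:
            break
        if lam <= band[1]:
            peaks.append((m, round(lam, 1)))
        m += 1
    return peaks

# Assumed cavity index ~2.1 (Sb2O3-like); thicker cavity -> redder peak.
for d_nm in (100, 120, 140):
    print(d_nm, "nm ->", visible_resonances_nm(2.1, d_nm))
```

For these assumed values the first-order peak moves from about 420 nm (blue) to 588 nm (orange) as the cavity thickens, mirroring the thickness-to-color tuning described above.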
Perovskite solar cell technology can be tuned to red, green and blue by changing the metallic nanowire thickness to 8, 20 and 45 nm respectively. Maximum power efficiencies of 10.12%, 8.17% and 7.72% were achieved by matching glass reflectance to the wavelength that the specific cell is designed to most optimally transmit.
Dye-sensitized solar cells employ liquid electrolytes to capture light and convert it into usable energy; this is achieved in a similar way to how natural pigments facilitate photosynthesis in plants. While chlorophyll is the specific pigment responsible for producing the green color in leaves, other dyes found in nature, such as carotenoids and anthocyanins, produce variations of orange and purple. Researchers from the University of Concepcion have proved the viability of dye-sensitized colored solar cells that both appear colored and selectively absorb specific wavelengths of light. This low-cost solution uses natural pigments extracted from maqui fruit, black myrtle and spinach as sensitizers. These natural sensitizers are then placed between two layers of transparent glass. While the efficiency levels of these particularly low-cost cells remain unclear, past research on organic dye cells has achieved a "high power conversion efficiency of 9.8%."
Transparent and translucent photovoltaics
Transparent solar panels use a tin oxide coating on the inner surface of the glass panes to conduct current out of the cell. The cell contains titanium oxide that is coated with a photoelectric dye.
Most conventional solar cells use visible and infrared light to generate electricity. In contrast, the innovative new solar cell also uses ultraviolet radiation. Used to replace conventional window glass, or placed over the glass, the installation surface area could be large, leading to potential uses that take advantage of the combined functions of power generation, lighting and temperature control.
Another name for transparent photovoltaics is "translucent photovoltaics" (they transmit half the light that falls on them). Similar to inorganic photovoltaics, organic photovoltaics are also capable of being translucent.
Types of transparent and translucent photovoltaics
Non-wavelength-selective
Some non-wavelength-selective photovoltaics achieve semi-transparency by spatial segmentation of opaque solar cells. This method uses any type of opaque photovoltaic cell and spaces several small cells out on a transparent substrate. Spacing them out in this way reduces power conversion efficiencies dramatically while increasing transmission.
Another branch of non-wavelength-selective photovoltaics utilize visibly absorbing thin-film semi-conductors with small thicknesses or large enough band gaps that allow light to pass through. This results in semi-transparent photovoltaics with a similar direct trade off between efficiency and transmission as spatially segmented opaque solar cells.
Wavelength-selective
Wavelength-selective photovoltaics achieve transparency by utilizing materials that only absorb UV and/or NIR light and were first demonstrated in 2011. Despite their higher transmissions, lower power conversion efficiencies have resulted due to a variety of challenges. These include small exciton diffusion lengths, scaling of transparent electrodes without jeopardizing efficiency, and general lifetime due to the volatility of organic materials used in TPVs in general.
Innovations in transparent and translucent photovoltaics
Early attempts at developing non-wavelength-selective semi-transparent organic photovoltaics using very thin active layers that absorbed in the visible spectrum were only able to achieve efficiencies below 1%. However, in 2011, transparent organic photovoltaics that utilized an organic chloroaluminum phthalocyanine (ClAlPc) donor and a fullerene acceptor exhibited absorption in the ultraviolet and near-infrared (NIR) spectrum with efficiencies around 1.3% and visible light transmission of over 65%. In 2017, MIT researchers developed a process to successfully deposit transparent graphene electrodes onto organic solar cells, resulting in 61% transmission of visible light and improved efficiencies ranging from 2.8% to 4.1%.
Perovskite solar cells, popular due to their promise as next-generation photovoltaics with efficiencies over 25%, have also shown promise as translucent photovoltaics. In 2015, a semitransparent perovskite solar cell using a methylammonium lead triiodide perovskite and a silver nanowire mesh top electrode demonstrated 79% transmission at an 800 nm wavelength and efficiencies at around 12.7%.
Government subsidies
In some countries, additional incentives, or subsidies, are offered for building-integrated photovoltaics in addition to the existing feed-in tariffs for stand-alone solar systems. Since July 2006, France has offered the highest incentive for BIPV, equal to an extra premium of €0.25/kWh paid in addition to the 30 euro cents per kWh for PV systems. These incentives are offered in the form of a rate paid for electricity fed to the grid (see the illustrative calculation after the European Union rates below).
European Union
France €0.25/kWh
Germany €0.05/kWh façade bonus expired in 2009
Italy €0.04–€0.09/kWh
United Kingdom 4.18 p/kWh
Spain, compared with a non-building installation that receives €0.28/kWh (RD 1578/2008):
≤20 kW: €0.34/kWh
>20 kW: €0.31/kWh
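To get a rough sense of what these rates mean, the sketch below multiplies an assumed residential system size and annual yield by the French premium quoted above; every system parameter here is hypothetical.

```python
# Illustrative annual feed-in revenue under the French BIPV premium
# (EUR 0.30/kWh base + EUR 0.25/kWh BIPV premium, as quoted above).
BASE_TARIFF = 0.30        # EUR/kWh
BIPV_PREMIUM = 0.25       # EUR/kWh
SYSTEM_KWP = 3.0          # assumed residential BIPV array size
YIELD_KWH_PER_KWP = 1100  # assumed annual specific yield

annual_kwh = SYSTEM_KWP * YIELD_KWH_PER_KWP
revenue_eur = annual_kwh * (BASE_TARIFF + BIPV_PREMIUM)
print(f"{annual_kwh:.0f} kWh/yr -> EUR {revenue_eur:.0f}/yr")  # ~EUR 1815/yr
```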
United States
United States – Varies by state. Check Database of State Incentives for Renewables & Efficiency for more details.
China
Further to the announcement of a subsidy program for BIPV projects in March 2009 offering RMB20 per watt for BIPV systems and RMB15 per watt for rooftop systems, the Chinese government unveiled a photovoltaic energy subsidy program, "the Golden Sun Demonstration Project". The subsidy program aims at supporting the development of photovoltaic electricity generation ventures and the commercialization of PV technology. The Ministry of Finance, the Ministry of Science and Technology and the National Energy Bureau jointly announced the details of the program in July 2009. Qualified on-grid photovoltaic electricity generation projects, including rooftop, BIPV, and ground-mounted systems, are entitled to receive a subsidy equal to 50% of the total investment of each project, including associated transmission infrastructure. Qualified off-grid independent projects in remote areas will be eligible for subsidies of up to 70% of the total investment. In mid-November, China's finance ministry selected 294 projects totaling 642 megawatts, amounting to roughly RMB 20 billion ($3 billion) in costs, for its subsidy plan to dramatically boost the country's solar energy production.
Other integrated photovoltaics
Vehicle-integrated photovoltaics (ViPV) are similar for vehicles. Solar cells could be embedded into panels exposed to sunlight such as the hood, roof and possibly the trunk depending on a car's design.
Challenges
Performance
Because BIPV systems generate on-site power and are integrated into the building envelope, the system’s output power and thermal properties are the two primary performance indicators. Conventional BIPV systems have a lower heat dissipation capability than rack-mounted PV, which results in BIPV modules experiencing higher operating temperatures. Higher temperatures may degrade the module's semiconducting material, decreasing the output efficiency and precipitating early failure. In addition, the efficiency of BIPV systems is sensitive to weather conditions, and the use of inappropriate BIPV systems may also reduce their energy output efficiency. In terms of thermal performance, BIPV windows can reduce the cooling load compared to conventional clear glass windows, but may increase the heating load of the building.
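A common way to quantify the temperature penalty described above is the linear temperature-coefficient model used for crystalline-silicon modules. The coefficient and the operating temperatures below are typical assumed values for illustration, not measurements from the studies discussed here.

```python
# Linear temperature-coefficient model for PV output power.
def pv_power(p_stc_w, cell_temp_c, gamma_per_degc=-0.004):
    """Output power relative to standard test conditions (25 degC cell)."""
    return p_stc_w * (1.0 + gamma_per_degc * (cell_temp_c - 25.0))

P_STC = 300.0  # W, nameplate module rating (assumed)
for label, temp_c in (("rack-mounted, well ventilated", 45.0),
                      ("BIPV, poorly ventilated", 65.0)):
    print(f"{label}: {pv_power(P_STC, temp_c):.0f} W at {temp_c:.0f} degC")
# The ~20 degC hotter BIPV module loses a further ~8% of output here.
```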
Cost
The high upfront investment in BIPV systems is one of the biggest barriers to implementation. In addition to the upfront cost of purchasing BIPV components, the highly integrated nature of BIPV systems increases the complexity of the building design, which in turn leads to increased design and construction costs. Also, a shortage of experienced practitioners leads to higher employment costs in the development of BIPV projects.
Policy and regulation
Although many countries have support policies for PV, most do not have additional benefits for BIPV systems. Typically, BIPV systems need to comply with both building and PV industry standards, which places higher demands on implementing BIPV systems. In addition, government policies that keep conventional energy prices low reduce the benefits of BIPV systems; this is particularly evident in countries where the price of conventional electricity is very low or subsidized by the government, such as the GCC countries.
Public understanding
Studies show that public awareness of BIPV is limited and the cost is generally considered too high. Deepening public understanding of BIPV through various public channels (e.g., policy, community engagement, and demonstration buildings) is likely to be beneficial to its long-term development.
See also
Distributed generation
List of pioneering solar buildings
Microgeneration
Nanoinverter
Passive solar building design
Perovskite solar cell
Solar panel
Rooftop solar power
Roof tile
Smart glass, a type of window blind capable of conserving energy for cooling
Solar cell
Solar power
Solar thermal
Zero-energy building
References
Further reading
External links
Building integrated photovoltaics an overview of the existing products and their fields of application
Canadian Solar Buildings Research Network
Building Integrated Photovoltaics
EURAC Research Building Integrated Photovoltaic on-line platform
PV UP-SCALE, a European-funded project (contract EIE/05/171/SI2.420208) related to the large-scale implementation of photovoltaics (PV) in European cities.
Applications of photovoltaics
Building materials | Building-integrated photovoltaics | [
"Physics",
"Engineering"
] | 3,619 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
9,630,312 | https://en.wikipedia.org/wiki/De%20Sitter%20effect | In astrophysics, the term de Sitter effect (named after the Dutch physicist Willem de Sitter) has been applied to two unrelated phenomena:
De Sitter double star experiment
De Sitter precession – also known as geodetic precession or the geodetic effect
Astrophysics | De Sitter effect | [
"Physics",
"Astronomy"
] | 61 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
9,630,832 | https://en.wikipedia.org/wiki/Sialoglycoprotein | A sialoglycoprotein is a combination of sialic acid and glycoprotein, which is, itself, a combination of sugar and protein. These proteins often contain one or more sialyl oligosaccharides that are covalently bound to the rest of the molecule.
Glycophorin C is one common sialoglycoprotein.
Podocalyxin is another sialoglycoprotein found in the foot processes of the podocyte cells of the glomerulus in kidneys. Podocalyxin is negatively charged and therefore repels other negatively charged molecules, thus contributing to the minimal filtration of negatively charged molecules by the kidney. Its molecular weight is 46 kDa.
References
External links
Glycoproteins | Sialoglycoprotein | [
"Chemistry"
] | 162 | [
"Biochemistry stubs",
"Glycobiology",
"Glycoproteins",
"Protein stubs"
] |
9,630,847 | https://en.wikipedia.org/wiki/Project%20Oxygen | Project Oxygen is a research project at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory to develop pervasive, human-centered computing. The Oxygen architecture is to consist of handheld terminals, computers embedded in the environment, and dynamically configured networks which connect these devices. A Project Oxygen device, the H21, exhibits similarities to the iPhone. As of 2021, Project Oxygen devices have never been officially used in wider society.
References
External links
MIT Project Oxygen
Massachusetts Institute of Technology
Usability | Project Oxygen | [
"Technology"
] | 102 | [
"Computing stubs"
] |
9,631,011 | https://en.wikipedia.org/wiki/Mannosamine | D-Mannosamine (2-amino-2-deoxymannose) is a hexosamine derivative of mannose.
See also
Neuraminic acid
References
Hexosamines | Mannosamine | [
"Chemistry"
] | 44 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
9,631,094 | https://en.wikipedia.org/wiki/Small%20temporal%20RNA | Small temporal RNA (abbreviated stRNA) regulates gene expression during roundworm development by preventing the mRNAs they bind from being translated. In contrast to siRNA, stRNAs downregulate expression of target RNAs after translation initiation without affecting mRNA stability. Nowadays, stRNAs are better known as miRNAs.
stRNAs exert negative post-transcriptional regulation by binding to complementary sequences in the 3' untranslated regions of their target genes. stRNAs are transcribed as longer precursor RNAs that are processed by the RNase Dicer/DCR-1 and members of the RDE-1/AGO1 family of proteins, which are better known for their roles in RNA interference (RNAi). stRNAs may function to control temporal identity during development in C. elegans and other organisms.
References
RNA
RNA interference | Small temporal RNA | [
"Chemistry"
] | 175 | [
"Molecular biology stubs",
"Molecular biology"
] |
9,631,095 | https://en.wikipedia.org/wiki/OCCAID | The Open Contributors Corporation for Advanced Internet Development (OCCAID) was a non-profit consortium that operated one of the largest IPv6 research networks in the world. It maintained both resale and facilities-based networks spanning 15,000 miles, with a presence in over 52 cities across 6 countries. The organisation no longer operates; what became of it is unclear, as very little information about it remains available apart from its official website.
OCCAID facilitated collaboration between research communities and the carrier industry, serving as a testbed and proving ground for advanced Internet protocols. Most of its participants connected to the network using Ethernet connections in areas where OCCAID has last-mile network connections.
OCCAID's primary collaboration activities involved IPv6 and multicast protocols.
See also
China Next Generation Internet
External links
Official site
IPv6
Computer network organizations | OCCAID | [
"Technology"
] | 177 | [
"Computing stubs",
"Computer network stubs"
] |
9,631,618 | https://en.wikipedia.org/wiki/Integrin-linked%20kinase | Integrin-linked kinase is an enzyme that in humans is encoded by the ILK gene and is involved in integrin-mediated signal transduction. Mutations in ILK are associated with cardiomyopathies. It is a 59 kDa protein originally identified in a yeast two-hybrid screen with integrin β1 as the bait protein. Since its discovery, ILK has been associated with multiple cellular functions including cell migration, proliferation, and adhesion.
Integrin-linked kinases (ILKs) are a subfamily of Raf-like kinases (RAF). The structure of ILK consists of three features: five ankyrin repeats at the N-terminus, a phosphoinositide-binding motif, and a kinase catalytic domain at the C-terminus. Integrins lack enzymatic activity and depend on adapters to signal to proteins; ILK, which binds the beta-1 and beta-3 integrin cytoplasmic domains, is one of the best described of these integrin adapters. Although first described as a serine/threonine kinase by Hannigan, important motifs of ILK kinases are still uncharacterized. ILK is thought to have a role in developmental regulation and tissue homeostasis; however, it was found that in flies, worms and mice ILK kinase activity isn't required to regulate these processes.
Animal ILKs have been linked to the PINCH–parvin complex, which controls muscle development. Mice lacking ILK die as embryos due to a lack of organized muscle cell development. In mammals, ILK lacks catalytic activity but serves a scaffolding function at focal adhesions. In plants, ILKs direct signalling complexes to focal adhesion sites, and plant genomes contain multiple ILK genes, unlike animal genomes, which contain few. ILKs have been found to possess oncogenic properties, and they control the activity of serine/threonine phosphatases.
Principal Features
Transduction of extracellular matrix signals through integrins influences intracellular and extracellular functions, and appears to require interaction of integrin cytoplasmic domains with cellular proteins. Integrin-linked kinase (ILK), interacts with the cytoplasmic domain of beta-1 integrin. Multiple alternatively spliced transcript variants encoding the same protein have been found for this gene. Recent results showed that the C-terminal kinase domain is actually a pseudo-kinase with adaptor function.
In 2008, ILK was found to localize to the centrosome and regulate mitotic spindle organization.
Integrin-linked kinase has been shown to interact with:
ACP6,
AKT1,
ILKAP, and
LIMS1.
Function of Plant ILK1
ILKs function by interacting with many transmembrane receptors to regulate different signaling cascades. ILK1 has been found in the root system of most plants, where it is co-localized on the plasma membrane and endoplasmic reticulum and transports ions across the plasma membrane. ILK1 is responsible for the control of osmotic and salt stress, control of the uptake of nutrients based on availability, and pathogen detection.
Osmotic and salt stress
ILK1 is linked to hyperosmotic stress sensitivity. ILK1 reduced salt stress in seedlings placed in solutions with increased concentrations of salt. ILK1 concentrations remain fairly constant throughout development, regardless of high salt exposure. Previously, it was believed that K+ accumulation was reduced at increased salt concentrations; in fact, K+ homeostasis is not affected by high salt concentrations. During periods of high salt stress, K+ concentrations in the presence of ILK1 were maintained at the existing level. Potassium transport is required for flg22-induced root growth inhibition, and potassium transport was affected by flg22.
Potassium levels modulate the activation of flg22, a flagellin peptide composed of 22 amino acids that triggers pathogen-associated molecular patterns (PAMPs). PAMPs functions by activating regulators of bacterial pathogen alert system. Ion concentration levels of Mn2+, Mg2+, S and Ca2+ were also affected after PAMP regulators were mobilized.
Nutrient uptake
Potassium (K+) is responsible for osmoregulation, membrane potential maintenance and turgor pressure of plant cells, which in turn mediate stomatal movement and growth of tubules within the plant. Photosynthesis and other metabolic pathways are controlled by potassium. When sufficient K+ uptake is not met, PAMP responses are activated. Calmodulin-like proteins, specifically CML9, have emerged as important ILK1 interactors that regulate potassium levels within the cell. While CML9 primarily regulates Ca2+, it is linked to an as-yet unidentified K+/Ca2+ influx channel. While interactions are known to occur between CML9 and ILK1, ILK1 is not a direct phosphorylation target of CML9. With the addition of CML9, autophosphorylation of ILK1 is diminished, an effect present irrespective of the calcium available for uptake.
ILK1 is also affected by the presence or absence of manganese (Mn2+). Autophosphorylation and substrate phosphorylation occurred when exposed to both Mn2+ and Mg2+; the response to Mn2+ was dose-dependent, whereas the response to Mg2+ was not. Specific ILK autophosphorylation sites were found in the presence of Mn2+ but not in the presence of Mg2+, which supports the ILK1-dependent phosphorylation suggested above. Mass spectrometry revealed no other kinases were present to trigger this response.
Pathogen detection
ILK1 has been found to promote resistance to bacterial pathogens. ILK1 is required for flg22 sensitivity in seedlings. A catalytically inactive version of ILK1 was compared with catalytically active versions of ILK1 to gauge the level of resistance when challenged with bacterial pathogens. Plants carrying inactive ILK1 were more susceptible to bacterial infection than those with active ILK1, suggesting that ILK1 is needed for bacterial pathogen detection. While ILK1 is involved in bacterial pathogen detection, it is not used for effector-induced defenses.
ILK1 increases PAMP response and basal immunity through phosphorylation of MPK3 and MPK6, and operates independently of reactive oxygen species (ROS) production. High-affinity potassium uptake mediators such as HAK5 have also been found to be integral to the signaling of flg22. HAK5 functions when potassium levels are low. Flg22 has been shown to depolarize the cell's plasma membrane, with HAK5 and ILK1 working together to mediate ion homeostasis to assist with both short- and long-term actions such as growth and its suppression.
References
Further reading
Proteins | Integrin-linked kinase | [
"Chemistry"
] | 1,407 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,631,749 | https://en.wikipedia.org/wiki/Horseradish%20peroxidase | The enzyme horseradish peroxidase (HRP), found in the roots of horseradish, is used extensively in biochemistry applications. It is a metalloenzyme with many isoforms, of which the most studied type is C. It catalyzes the oxidation of various organic substrates by hydrogen peroxide.
Structure
The structure of the enzyme was first solved by X-ray crystallography in 1997 and has since been solved several times with various substrates. It is a large alpha-helical glycoprotein which binds heme as a redox cofactor.
Substrates
Alone, the HRP enzyme, or conjugates thereof, is of little value; its presence must be made visible using a substrate that, when oxidized by HRP using hydrogen peroxide as the oxidizing agent, yields a characteristic color change that is detectable by spectrophotometric methods.
Numerous substrates for horseradish peroxidase have been described and commercialized to exploit the desirable features of HRP. These substrates fall into several distinct categories. HRP catalyzes the conversion of chromogenic substrates (e.g., TMB, DAB, ABTS) into colored products, and produces light when acting on chemiluminescent substrates (e.g. enhanced chemiluminescence by luminol).
Applications
Horseradish peroxidase is a 44,173.9-dalton glycoprotein with six lysine residues which can be conjugated to a labeled molecule. It produces a coloured, fluorimetric or luminescent derivative of the labeled molecule when incubated with a proper substrate, allowing it to be detected and quantified.
HRP is often used in conjugates (molecules that have been joined genetically or chemically) to determine the presence of a molecular target. For example, an antibody conjugated to HRP may be used to detect a small amount of a specific protein in a western blot. Here, the antibody provides the specificity to locate the protein of interest, and the HRP enzyme, in the presence of a substrate, produces a detectable signal. Horseradish peroxidase is also commonly used in techniques such as ELISA and Immunohistochemistry due to its monomeric nature and the ease with which it produces coloured products. Peroxidase, a heme-containing oxidoreductase, is a commercially important enzyme which catalyses the reductive cleavage of hydrogen peroxide by an electron donor.
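As a concrete example of the detection-and-quantification step, ELISA absorbances read from an HRP reaction are commonly converted to analyte concentrations by fitting a four-parameter logistic (4PL) standard curve and inverting it. The sketch below uses synthetic data and SciPy; it is a generic illustration of the data analysis, not a prescribed assay protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = low asymptote, d = high asymptote,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def invert_4pl(y, a, b, c, d):
    """Back-calculate concentration from a signal on the fitted curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # ng/mL standards
od450 = np.array([0.08, 0.15, 0.42, 1.05, 1.80, 2.20])  # synthetic readings

params, _ = curve_fit(four_pl, conc, od450, p0=[0.05, 1.0, 2.0, 2.4])
print(f"Sample at OD450 = 0.90 -> {invert_4pl(0.90, *params):.2f} ng/mL")
```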
Horseradish peroxidase is ideal in many respects for these applications because it is smaller, more stable, and less expensive than other popular alternatives such as alkaline phosphatase. It also has a high turnover rate that allows generation of strong signals in a relatively short time span. High concentrations of phosphate severely decrease stability of horseradish peroxidase. In addition to biomedical applications, horseradish peroxidase is one of the enzymes with important environmental applications. This enzyme is suitable for the removal of hydroxylated aromatic compounds (HACs) that are considered to be primary pollutants in a wide variety of industrial wastewater.
Moreover, "In recent years the technique of marking neurons with the enzyme horseradish peroxidase has become a major tool. In its brief history, this method has probably been used by more neurobiologists than have used the Golgi stain since its discovery in 1870."
Enhanced chemiluminescence
Horseradish peroxidase catalyses the oxidation of luminol to 3-aminophthalate via several intermediates. The reaction is accompanied by emission of low-intensity light at 428 nm. In the presence of certain chemicals, the light emitted is enhanced up to 1000-fold, making the light easier to detect and increasing the sensitivity of the reaction. The enhancement of light emission is called enhanced chemiluminescence (ECL). Several enhancers can be used such as the commonly known modified phenols (mainly iodo-phenol). Several substrates on the market use other enhancers which result in luminescence signals up to 13 times greater than phenol-enhanced substrates. The intensity of light is a measure of the number of enzyme molecules reacting and thus of the amount of hybrid.
ECL is simple to set up and is sensitive, detecting about 0.5 pg nucleic acid in Southern blots and in northern blots. Detection by chemiluminescent substrates has several advantages over chromogenic substrates. The sensitivity is 10- to 100-fold greater, and quantifying of light emission is possible over a wide dynamic range, whereas that for coloured precipitates is much more limited, about one order of magnitude less. Stripping filters are much easier when chemiluminescent substrates are used.
Polymer synthesis
Horseradish peroxidase can be used for various polymerization reactions; the most extensively studied is the polymerization of phenol derivatives. However, horseradish peroxidase can also be used as a catalyst for atom transfer radical polymerization (ATRP) reactions and can create polymers in the absence of any hydrogen peroxide. In this case, the substrate for HRP is an alkyl halide or alkyl nitrile, which are initiators of ATRP reactions. HRP reacts with such compounds, creating radicals that start polymerization. HRP-catalysed ATRP provides a level of control over polymerization comparable to that obtained in metal-catalysed reactions.
HRP mimics
Many materials have been explored to mimic natural HRP. For example, iron oxide nanoparticles and hemin-containing complexes have been used to mimic HRP. These HRP-like artificial enzymes have been used for many applications, ranging from biomarker detection and tumor immunostaining to antibiofouling.
See also
Artificial enzyme
References
External links
Biochemistry detection reactions
EC 1.11.1
Armoracia | Horseradish peroxidase | [
"Chemistry",
"Biology"
] | 1,242 | [
"Microbiology techniques",
"Biochemistry detection reactions",
"Biochemical reactions"
] |
9,632,098 | https://en.wikipedia.org/wiki/Cache%20Discovery%20Protocol | The Cache Discovery Protocol (CDP) is an extension to the BitTorrent file-distribution system. It is designed to support the discovery and utilisation of local data caches by BitTorrent peers, typically set up by ISPs wishing to minimise the impact of BitTorrent traffic on their network.
The Cache Discovery Protocol was originally developed jointly by BitTorrent, Inc. and CacheLogic and first implemented in version 4.20 of the official BitTorrent client, released June 22, 2006. However, despite claims that the details of the protocol would be published, to date no specification has been made publicly available.
See also
Web Cache Communication Protocol
External links
BitTorrent Local Tracker Discovery Protocol
Slyck.com coverage of the 4.20 release
BitTorrent
Service discovery protocols | Cache Discovery Protocol | [
"Technology"
] | 161 | [
"Computing stubs",
"Computer network stubs"
] |
9,632,150 | https://en.wikipedia.org/wiki/Einstein%E2%80%93de%20Haas%20effect | The Einstein–de Haas effect is a physical phenomenon in which a change in the magnetic moment of a free body causes this body to rotate. The effect is a consequence of the conservation of angular momentum. It is strong enough to be observable in ferromagnetic materials. The experimental observation and accurate measurement of the effect demonstrated that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. These measurements also allow the separation of the two contributions to the magnetization: that which is associated with the spin and with the orbital motion of the electrons.
The effect also demonstrated the close relation between the notions of angular momentum in classical and in quantum physics.
The effect was predicted by O. W. Richardson in 1908. It is named after Albert Einstein and Wander Johannes de Haas, who published two papers in 1915 claiming the first experimental observation of the effect.
Description
The orbital motion of an electron (or any charged particle) around a certain axis produces a magnetic dipole with the magnetic moment of μ = (q/2m)·J, where q and m are the charge and the mass of the particle, while J is the angular momentum of the motion (SI units are used). In contrast, the intrinsic magnetic moment of the electron is related to its intrinsic angular momentum (spin) as μ ≈ (q/m)·S (see Landé g-factor and anomalous magnetic dipole moment).
If a number of electrons in a unit volume of the material have a total orbital angular momentum of ⟨J⟩ with respect to a certain axis, their magnetic moments would produce the magnetization M = (q/2m)·⟨J⟩. For the spin contribution the relation would be M ≈ (q/m)·⟨S⟩. A change in magnetization, ΔM, implies a proportional change, ΔJ, in the angular momentum of the electrons involved. Provided that there is no external torque along the magnetization axis applied to the body in the process, the rest of the body (practically all its mass) should acquire an angular momentum −ΔJ due to the law of conservation of angular momentum.
Experimental setup
The experiments involve a cylinder of a ferromagnetic material suspended with the aid of a thin string inside a cylindrical coil which is used to provide an axial magnetic field that magnetizes the cylinder along its axis. A change in the electric current in the coil changes the magnetic field the coil produces, which changes the magnetization of the ferromagnetic cylinder and, due to the effect described, its angular momentum. A change in the angular momentum causes a change in the rotational speed of the cylinder, monitored using optical devices. An external field B interacting with a magnetic dipole m cannot produce any torque (τ = m × B) along the field direction. In these experiments the magnetization happens along the direction of the field produced by the magnetizing coil; therefore, in the absence of other external fields, the angular momentum along this axis must be conserved.
In spite of the simplicity of such a layout, the experiments are not easy. The magnetization can be measured accurately with the help of a pickup coil around the cylinder, but the associated change in the angular momentum is small. Furthermore, ambient magnetic fields, such as the Earth's field, can produce a 10⁷–10⁸ times larger mechanical impact on the magnetized cylinder. The later, accurate experiments were done in a specially constructed demagnetized environment with active compensation of the ambient fields. The measurement methods typically use the properties of the torsion pendulum, providing periodic current to the magnetization coil at frequencies close to the pendulum's resonance. The experiments measure directly the ratio γ = ΔM/ΔJ and derive the dimensionless gyromagnetic factor g′ of the material from the definition g′ = (2mₑ/e)·γ. The quantity γ is called the gyromagnetic ratio.
History
The expected effect and a possible experimental approach were first described by Owen Willans Richardson in a paper published in 1908. The electron spin was discovered in 1925; therefore only the orbital motion of electrons was considered before that. Richardson derived the expected relation of M = (q/2m)·J. The paper mentioned the ongoing attempts to observe the effect at Princeton University.
In that historical context the idea of the orbital motion of electrons in atoms contradicted classical physics. This contradiction was addressed in the Bohr model in 1913, and later was removed with the development of quantum mechanics.
Samuel Jackson Barnett, motivated by Richardson's paper, realized that the opposite effect should also happen – a change in rotation should cause a magnetization (the Barnett effect). He published the idea in 1909, after which he pursued experimental studies of the effect.
Einstein and de Haas published two papers in April 1915 containing a description of the expected effect and the experimental results. In the paper "Experimental proof of the existence of Ampere's molecular currents" they described in detail the experimental apparatus and the measurements performed. Their result for the ratio of the angular momentum of the sample to its magnetic moment (the authors called it λ) was very close (within 3%) to the expected value of 2mₑ/e, corresponding to g′ = 1. It was realized later that their result with the quoted uncertainty of 10% was not consistent with the correct value, which is close to mₑ/e (g′ ≈ 2). Apparently, the authors underestimated the experimental uncertainties.
Barnett reported the results of his measurements at several scientific conferences in 1914. In October 1915 he published the first observation of the Barnett effect in a paper titled "Magnetization by Rotation". His result for λ was close to the right value of mₑ/e, which was unexpected at that time.
In 1918 John Quincy Stewart published the results of his measurements confirming Barnett's result. In his paper he called the phenomenon the 'Richardson effect'.
The following experiments demonstrated that the gyromagnetic ratio for iron is indeed close to mₑ/e rather than 2mₑ/e, i.e. g′ ≈ 2 rather than 1. This phenomenon, dubbed the "gyromagnetic anomaly", was finally explained after the discovery of the spin and the introduction of the Dirac equation in 1928.
The experimental equipment was later donated by Geertruida de Haas-Lorentz, wife of de Haas and daughter of Lorentz, to the Ampère Museum in Lyon, France, in 1961. It was lost and later rediscovered in 2023.
Literature about the effect and its discovery
Detailed accounts of the historical context and the explanations of the effect can be found in the literature. Commenting on the papers by Einstein, Calaprice in The Einstein Almanac writes:
52. "Experimental Proof of Ampère's Molecular Currents" (Experimenteller Nachweis der Ampereschen Molekularströme) (with Wander J. de Hass). Deutsche Physikalische Gesellschaft, Verhandlungen 17 (1915): 152–170.
Considering [André-Marie] Ampère's hypothesis that magnetism is caused by the microscopic circular motions of electric charges, the authors proposed a design to test [Hendrik] Lorentz's theory that the rotating particles are electrons. The aim of the experiment was to measure the torque generated by a reversal of the magnetisation of an iron cylinder.
Calaprice further writes:
53. "Experimental Proof of the Existence of Ampère's Molecular Currents" (with Wander J. de Haas) (in English). Koninklijke Akademie van Wetenschappen te Amsterdam, Proceedings 18 (1915–16).
Einstein wrote three papers with Wander J. de Haas on experimental work they did together on Ampère's molecular currents, known as the Einstein–De Haas effect. He immediately wrote a correction to paper 52 (above) when Dutch physicist H. A. Lorentz pointed out an error. In addition to the two papers above [that is 52 and 53] Einstein and de Haas cowrote a "Comment" on paper 53 later in the year for the same journal. This topic was only indirectly related to Einstein's interest in physics, but, as he wrote to his friend Michele Besso, "In my old age I am developing a passion for experimentation."
The second paper by Einstein and de Haas was communicated to the "Proceedings of the Royal Netherlands Academy of Arts and Sciences" by Hendrik Lorentz who was the father-in-law of de Haas. According to Viktor Frenkel, Einstein wrote in a report to the German Physical Society: "In the past three months I have performed experiments jointly with de Haas–Lorentz in the Imperial Physicotechnical Institute that have firmly established the existence of Ampère molecular currents." Probably, he attributed the hyphenated name to de Haas, not meaning both de Haas and H. A. Lorentz.
Later measurements and applications
The effect was used to measure the properties of various ferromagnetic elements and alloys. The key to more accurate measurements was better magnetic shielding, while the methods were essentially similar to those of the first experiments. The experiments measure the value of the g-factor g′ (here we use the projections of the pseudovectors M and J onto the magnetization axis and omit the sign). The magnetization and the angular momentum consist of the contributions from the spin and the orbital angular momentum: M = M_S + M_L, J = J_S + J_L.
Using the known relations M_L = (e/2mₑ)·J_L and M_S = g_e·(e/2mₑ)·J_S, where g_e ≈ 2.002 is the g-factor for the anomalous magnetic moment of the electron, one can derive the relative spin contribution to the magnetization as: M_S/M = g_e·(g′ − 1)/(g′·(g_e − 1)).
For pure iron the measured value is g′ = 1.919(2), which gives M_S/M ≈ 0.96. Therefore, in pure iron 96% of the magnetization is provided by the polarization of the electrons' spins, while the remaining 4% is provided by the polarization of their orbital angular momenta.
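The 96% figure quoted above follows directly from the relation for M_S/M; a minimal check in Python:

```python
# Spin fraction of the magnetization from the measured g-factor, using
# M_S/M = g_e * (g' - 1) / (g' * (g_e - 1)), as derived above.
G_E = 2.0023        # electron spin g-factor
g_measured = 1.919  # Einstein-de Haas-type value for pure iron (see text)

spin_fraction = G_E * (g_measured - 1.0) / (g_measured * (G_E - 1.0))
print(f"Spin contribution to magnetization: {spin_fraction:.1%}")  # ~95.7%
```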
See also
Barnett effect
References
External links
"Einsteins's only experiment" (links to a directory of the Home Page of Physikalisch-Technische Bundesanstalt (PTB), Germany ). Here is a replica to be seen of the original apparatus on which the Einstein–de Haas experiment was carried out.
Experimental physics
Magnetism
Quantum magnetism
Albert Einstein | Einstein–de Haas effect | [
"Physics",
"Materials_science"
] | 1,967 | [
"Condensed matter physics",
"Quantum magnetism",
"Experimental physics",
"Quantum mechanics"
] |
9,632,204 | https://en.wikipedia.org/wiki/De%20Vaucouleurs%27s%20law | de Vaucouleurs's law, also known as the de Vaucouleurs profile or de Vaucouleurs model, describes how the surface brightness of an elliptical galaxy varies as a function of apparent distance R from the center of the galaxy:
$\ln I(R) = \ln I_0 - k R^{1/4}.$
By defining Re as the radius of the isophote containing half of the total luminosity of the galaxy, the half-light radius, de Vaucouleurs profile may be expressed as:
$\ln I(R) = \ln I_e + 7.669\,\left[1 - \left(R/R_e\right)^{1/4}\right]$
or
$I(R) = I_e\, e^{-7.669\,\left[(R/R_e)^{1/4} - 1\right]}$
where Ie is the surface brightness at Re. This can be confirmed by noting
$\int_0^{R_e} I(R)\, 2\pi R\, dR = \tfrac{1}{2} \int_0^{\infty} I(R)\, 2\pi R\, dR.$
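The half-light property just stated can be checked numerically. Below is a minimal Python sketch, assuming the conventional constant 7.669 and arbitrary units for I_e and R_e:

```python
# Numerical check that R_e encloses half of the total luminosity of the
# de Vaucouleurs profile, with the conventional constant b = 7.669.
import numpy as np
from scipy.integrate import quad

I_e, R_e, b = 1.0, 1.0, 7.669

def I(R):  # surface brightness profile
    return I_e * np.exp(-b * ((R / R_e) ** 0.25 - 1.0))

def integrand(R):  # luminosity in an annulus of radius R
    return I(R) * 2.0 * np.pi * R

L_inner, _ = quad(integrand, 0.0, R_e)
L_total, _ = quad(integrand, 0.0, np.inf)
print(L_inner / L_total)  # ~0.5, confirming R_e is the half-light radius
```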
de Vaucouleurs model is a special case of Sersic's model, with a Sersic index of $n = 4$. A number of (internal) density profiles that approximately reproduce de Vaucouleurs's law after projection onto the plane of the sky include Jaffe's model and Dehnen's model.
The model is named after Gérard de Vaucouleurs who first formulated it in 1948. Although an empirical model rather than a law of physics, it was so entrenched in astronomy during the 20th century that it was referred to as a "law".
References
External links
Eric Weisstein's World of Astronomy entry
Astrophysics
Equations of astronomy | De Vaucouleurs's law | [
"Physics",
"Astronomy"
] | 249 | [
"Concepts in astronomy",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Equations of astronomy",
"Astronomical sub-disciplines"
] |
9,632,448 | https://en.wikipedia.org/wiki/Fermi%20coordinates | In the mathematical theory of Riemannian geometry, there are two uses of the term Fermi coordinates. In one use they are local coordinates that are adapted to a geodesic. In a second, more general one, they are local coordinates that are adapted to any world line, even not geodesical.
Take a future-directed timelike curve $\gamma = \gamma(\tau)$,
$\tau$ being the proper time along $\gamma$ in the spacetime $M$.
Assume that $p = \gamma(0)$ is the initial point of $\gamma$. Fermi coordinates adapted to $\gamma$ are constructed this way. Consider an orthonormal basis of $T_pM$ with $e_0$ parallel to $\dot{\gamma}$. Transport the basis $\{e_a\}_{a=0,1,2,3}$ along $\gamma(\tau)$ making use of Fermi–Walker's transport. The basis $\{e_a(\tau)\}_{a=0,1,2,3}$ at each point $\gamma(\tau)$ is still orthonormal with
$e_0(\tau)$ parallel to $\dot{\gamma}(\tau)$ and is non-rotated (in a precise sense related to the decomposition of Lorentz transformations into pure transformations and rotations) with respect to the initial basis; this is the physical meaning of Fermi–Walker's transport.
Finally construct a coordinate system in an open tube $T$, a neighbourhood of $\gamma$, emitting all spacelike geodesics through $\gamma(\tau)$ with initial tangent vector $\sum_{i=1}^{3} v^i e_i(\tau)$, for every $\tau$. A point $q \in T$ has coordinates $\tau(q), v^1(q), v^2(q), v^3(q)$, where $\sum_i v^i(q)\, e_i(\tau(q))$ is the only vector whose associated geodesic reaches $q$ for the value $s = 1$ of its parameter, and $\tau(q)$ is the only time along $\gamma$ for which this geodesic reaching $q$ exists.
If $\gamma$ itself is a geodesic, then Fermi–Walker's transport becomes the standard parallel transport and Fermi's coordinates become standard Riemannian coordinates adapted to $\gamma$. In this case, using these coordinates in a neighbourhood $U$ of $\gamma$, we have $\Gamma^{a}_{bc} = 0$: all Christoffel symbols vanish exactly on $\gamma$. This property is not valid for Fermi's coordinates, however, when $\gamma$ is not a geodesic. Such coordinates are called Fermi coordinates and are named after the Italian physicist Enrico Fermi. The above properties are only valid on the geodesic. Fermi coordinates adapted to a null geodesic are provided by Mattias Blau, Denis Frank, and Sebastian Weiss. Notice that, if all Christoffel symbols vanish near $p$, then the manifold is flat near $p$.
In the Riemannian case at least, Fermi coordinates can be generalized to an arbitrary submanifold.
See also
Proper reference frame (flat spacetime)#Proper coordinates or Fermi coordinates
Geodesic normal coordinates
Fermi–Walker transport
Christoffel symbols
Isothermal coordinates
References
Riemannian geometry
Coordinate systems in differential geometry | Fermi coordinates | [
"Mathematics"
] | 480 | [
"Coordinate systems in differential geometry",
"Coordinate systems"
] |
9,632,792 | https://en.wikipedia.org/wiki/Pedophile%20Group | The Pedophile Group, Pedophile Group Association or Danish Pedophile Association was a Danish organisation which was disbanded on 21 March 2004. A website is still running, operated by a group of active members of the former association. It was founded in 1985.
On 23 July 1996, the group had eighty registered members and participated in an International Congress in Denmark. It was also connected with the pedophile advocacy organisation Ipce (formerly the International Pedophile and Child Emancipation). A 2004 newspaper article identified DeFillip as the organization's spokesman.
In 2000, a Danish TV documentary team went undercover to investigate the group. Members were shown exchanging child porn and giving advice on how to contact children in internet chatrooms. A man was arrested by police in connection with the investigation.
In 2000, the group asked its members to provide misleading information to authorities to help Eric Franklin Rosser evade prosecution. Rosser was a former member of John Mellencamp's band who had been charged with producing and distributing child pornography. He was convicted in 2001, however, and was added to the U.S. Federal Bureau of Investigation's most wanted list.
In 2004, the Danish newspaper Dagbladet Information ran a front-page article by the journalist Kristian Ditlev Jensen calling for the organisation's home page to be taken down. Similar criticism of the groups came from papers such as Berlingske, Jyllands-Posten and Politiken.
In 2004, the Danish Parliament seated after the 2001 Danish general election voted against dissolving the association.
Notes
1985 establishments in Denmark
2004 disestablishments in Denmark
Organizations established in 1985
Organizations disestablished in 2004
Pedophile advocacy
Clubs and societies in Denmark | Pedophile Group | [
"Biology"
] | 353 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
3,132,156 | https://en.wikipedia.org/wiki/Betti%20reaction | The Betti reaction is a chemical addition reaction of aldehydes, primary aromatic amines and phenols producing α-aminobenzylphenols.
The Betti reaction is a special case of the Mannich reaction.
History
The reaction is named after the Italian chemist Mario Betti (1857-1942). Betti worked at many universities in Italy, including Florence, Cagliari, Siena, Genoa and Bologna, where he was the successor of Giacomo Ciamician. Betti's main research was focused on stereochemistry and the resolution of racemic compounds, the relationship between molecular constitution and optical rotation, as well as asymmetric synthesis using chiral auxiliaries or in the presence of polarized light.
In 1939 Mario Betti was appointed the Senator of the Kingdom of Italy.
In 1900 Betti hypothesized that 2-naphthol would be a good carbon nucleophile to the imine produced from the reaction of benzaldehyde and aniline. This led to the Betti reaction.
Today, the name has grown to refer to any reaction of aldehydes, primary aromatic amines and phenols producing α-aminobenzylphenols.
Mechanism
The reaction mechanism begins with an imine condensation of a primary aromatic amine and formaldehyde.
Once the imine is produced, it reacts with phenol in the presence of water to yield an α-aminobenzylphenol.
First, the lone pair on the nitrogen of the imine deprotonates the phenol, pushing the bonding electrons onto the oxygen. The carbonyl is then reformed and a double bond in the benzene ring attacks the carbon atom in the protonated imine cation. Water then acts as a base and deprotonates the α-carbon, reforming the aromatic ring and pushing electrons onto oxygen. The oxygen, which now has a negative formal charge, then attacks a hydrogen on the hydronium, resulting in an α-aminobenzylphenol, with water as the only byproduct.
Betti Base
The product of the Betti reaction is called the Betti base. The stereochemistry of the base was resolved into two isomers by using tartaric acid.
Uses for the Betti base and its derivatives include:
Enantioselective addition of diethylzinc to aryl aldehydes.
Enantioselective alkenylation of aldehydes.
Preparation of stable boronate complexes, which can be alkylated to yield amino acid precursors.
Separation of enantiomers.
References
Further reading
Betti, M. Gazz. Chim. Ital. 1900, 30 II, 301.
Betti, M. Gazz. Chim. Ital. 1903, 33 II, 2.
Organic Syntheses, Coll. Vol. 1, p.381 (1941); Vol. 9, p.60 (1929). (Article)
Pirrone, F Gazz. Chim. Ital. 1936, 66, 518.
Pirrone, F Gazz. Chim. Ital. 1937, 67, 529.
Phillips, J. P. Chem. Rev. 1956, 56, 286.
Phillips, J. P.; Barrall, E. M. J. Org. Chem. 1956, 21, 692.
Kumar, A.; Kumar, M.; Gupta, M. K. Tetrahedron Lett. 2010, 12, 1582-1584.
Addition reactions
Multiple component reactions
Name reactions | Betti reaction | [
"Chemistry"
] | 743 | [
"Name reactions",
"Coupling reactions",
"Organic reactions"
] |
3,132,530 | https://en.wikipedia.org/wiki/Nakai%20conjecture | In mathematics, the Nakai conjecture is an unproven characterization of smooth algebraic varieties, conjectured by Japanese mathematician Yoshikazu Nakai in 1961.
It states that if V is a complex algebraic variety, such that its ring of differential operators is generated by the derivations it contains, then V is a smooth variety. The converse statement, that smooth algebraic varieties have rings of differential operators that are generated by their derivations, is a result of Alexander Grothendieck.
The Nakai conjecture is known to be true for algebraic curves and Stanley–Reisner rings. A proof of the conjecture would also establish the Zariski–Lipman conjecture, for a complex variety V with coordinate ring R. This conjecture states that if the derivations of R are a free module over R, then V is smooth.
References
Algebraic geometry
Singularity theory
Conjectures
Unsolved problems in geometry | Nakai conjecture | [
"Mathematics"
] | 182 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Fields of abstract algebra",
"Conjectures",
"Algebraic geometry",
"Mathematical problems"
] |
3,132,697 | https://en.wikipedia.org/wiki/Mordell%20curve | In algebra, a Mordell curve is an elliptic curve of the form y2 = x3 + n, where n is a fixed non-zero integer.
These curves were closely studied by Louis Mordell, from the point of view of determining their integer points. He showed that every Mordell curve contains only finitely many integer points (x, y). In other words, the nonzero differences between perfect squares and perfect cubes tend to infinity; the question of how fast was dealt with in principle by Baker's method. Hypothetically this issue is dealt with by Marshall Hall's conjecture.
Properties
If (x, y) is an integer point on a Mordell curve, then so is (x, −y).
If (x, y) is a rational point on a Mordell curve with y ≠ 0, then so is $\left( \frac{x^4 - 8nx}{4y^2},\ \frac{-x^6 - 20nx^3 + 8n^2}{8y^3} \right)$. Moreover, if xy ≠ 0 and n is not 1 or −432, an infinite number of rational solutions can be generated this way. This formula is known as Bachet's duplication formula (a numerical check appears after this list).
When n ≠ 0, the Mordell curve only has finitely many integer solutions (see Siegel's theorem on integral points).
There are certain values of n for which the corresponding Mordell curve has no integer solutions; these values are:
For n > 0: 6, 7, 11, 13, 14, 20, 21, 23, 29, 32, 34, 39, 42, ...
For n < 0: −3, −5, −6, −9, −10, −12, −14, −16, −17, −21, −22, ...
The specific case where n = −2 is also known as Fermat's Sandwich Theorem.
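As referenced in the properties list above, Bachet's duplication formula is easy to verify with exact rational arithmetic. The sketch below starts from the point (3, 5) on y² = x³ − 2 (the Fermat sandwich case), chosen only for illustration:

```python
# Check Bachet's duplication formula on the Mordell curve y^2 = x^3 + n,
# using exact rational arithmetic so there is no rounding error.
from fractions import Fraction

def on_curve(x, y, n):
    return y * y == x**3 + n

def bachet_duplicate(x, y, n):
    # (x, y) -> ((x^4 - 8nx) / 4y^2, (-x^6 - 20nx^3 + 8n^2) / 8y^3)
    x, y = Fraction(x), Fraction(y)
    return ((x**4 - 8 * n * x) / (4 * y**2),
            (-x**6 - 20 * n * x**3 + 8 * n**2) / (8 * y**3))

n = -2
p = (Fraction(3), Fraction(5))      # 5^2 = 3^3 - 2
for _ in range(3):
    p = bachet_duplicate(*p, n)
    assert on_curve(*p, n)          # each duplicate stays on the curve
    print(p)                        # first duplicate: (129/100, 383/1000)
```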
List of solutions
The following is a list of solutions to the Mordell curve y2 = x3 + n for |n| ≤ 25. Only solutions with y ≥ 0 are shown.
In 1998, J. Gebel, A. Pethö and H. G. Zimmer found all integer points for 0 < |n| ≤ 10^4.
In 2015, M. A. Bennett and A. Ghadermarzi computed integer points for 0 < |n| ≤ 10^7.
References
External links
J. Gebel, Data on Mordell's curves for –10000 ≤ n ≤ 10000
M. Bennett, Data on Mordell curves for –10^7 ≤ n ≤ 10^7
Algebraic curves
Diophantine equations
Elliptic curves | Mordell curve | [
"Mathematics"
] | 499 | [
"Diophantine equations",
"Mathematical objects",
"Equations",
"Number theory"
] |
3,132,756 | https://en.wikipedia.org/wiki/Troxler%27s%20fading | Troxler's fading, also called Troxler fading or the Troxler effect, is an optical illusion affecting visual perception. When one fixates on a particular point for even a short period of time, an unchanging stimulus away from the fixation point will fade away and disappear. Research suggests that at least some portion of the perceptual phenomena associated with Troxler's fading occurs in the brain.
Discovery
Troxler's fading was first identified by Swiss physician Ignaz Paul Vital Troxler in 1804, who was practicing in Vienna at the time.
Process
Neural adaptation
Troxler's fading has been attributed to the adaptation of neurons vital for perceiving stimuli in the visual system. It is part of the general principle in sensory systems that unvarying stimuli soon disappear from our awareness. For example, if a small piece of paper is dropped on the inside of one's forearm, it is felt for a short period of time. Soon, however, the sensation fades away. This is because the tactile neurons have adapted and start to ignore the unimportant stimulus. But if one jiggles one's arm up and down, giving varying stimulation, one will continue to feel the paper.
Visual parallels
A similar 'sensory fading,' or filling-in, can be seen of a fixated stimulus when its retinal image is made stationary on the retina (a stabilized retinal image). Stabilization can be done in at least three ways.
First, one can mount a tiny projector on a contact lens. The projector shines an image into the eye. As the eye moves, the contact lens moves with it, so the image is always projected onto the same part of the retina;
Second, one can monitor eye movements and move the stimulus to cancel the eye movements;
Third, one can induce an afterimage, usually by an intense, brief flash, such as when one is photographed using a photographic flash (a form of stabilized retinal image that most people have experienced). This causes an image to be bleached onto the retina by the strong response of the rods and cones. In all these cases, the stimulus fades away after a short time and disappears.
The Troxler effect is enhanced if the stimulus is small, is of low contrast (or "equiluminant"), or is blurred. The effect is enhanced the further the stimulus is away from the fixation point.
Explanation of effect
Troxler's fading can occur without any extraordinary stabilization of the retinal image in peripheral vision because the neurons in the visual system beyond the rods and cones have large receptive fields. This means that the small, involuntary eye movements made when fixating on something fail to move the stimulus onto a new cell's receptive field, in effect giving unvarying stimulation. Further experimentation this century by Hsieh and Tse showed that at least some portion of the perceptual fading occurred in the brain, not in the eyes.
See also
Cognitive science
Lilac chaser – An illusion that involves Troxler fading
Bloody Mary – one of the best-known examples of this effect
References
External links
Troxler project: a research project on Troxler's fading
Optical illusions
Visual perception | Troxler's fading | [
"Physics"
] | 675 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
3,132,886 | https://en.wikipedia.org/wiki/Universal%20remote | A universal remote is a remote control that can be programmed to operate various brands of one or more types of consumer electronics devices. Low-end universal remotes can only control a set number of devices determined by their manufacturer, while mid- and high-end universal remotes allow the user to program in new control codes to the remote. Many remotes sold with various electronics include universal remote capabilities for other types of devices, which allows the remote to control other devices beyond the device it came with. For example, a VCR remote may be programmed to operate various brands of televisions.
History
On May 30, 1985, Philips introduced the first universal remote (U.S. Pat. #4774511) under the Magnavox brand name.
In 1985, Robin Rumbolt, William "Russ" McIntyre, and Larry Goodson with North American Philips Consumer Electronics (Magnavox, Sylvania, and Philco) developed the first universal remote control.
In 1987, the first programmable universal remote control was released. It was called the "CORE" and was created by CL 9, a startup founded by Steve Wozniak, the inventor of the Apple I and Apple II computers.
In March 1987, Steve Ciarcia published an article in Byte magazine entitled "Build a Trainable Infrared Master Controller", describing a universal remote with the ability to upload the settings to a computer. This device had macro capabilities.
Layout and features
Most universal remotes share a number of basic design elements:
A power button, as well as a switch or series of buttons to select which device the remote is controlling at the moment. A typical selection includes TV, VCR, DVD, and CBL/SAT, along with other devices that sometimes include DVRs, audio equipment or home automation devices.
Channel and volume up/down selectors (sometimes marked with + and - signs).
A numeric keypad for entering channel numbers and some other purposes such as time and date entry.
A set button (sometimes recessed to avoid accidental pressing) to allow selection of a particular set of codes (usually entered on the keypad). Most remotes also allow the user to cycle through the list of available codes to find one that matches the device to be controlled.
Most but not all universal remotes include one or more D-pads for navigating menus on DVD players and cable/satellite boxes.
Certain highly reduced designs such as the TV-B-Gone or keychain-sized remotes include only a few buttons, such as power and channel/volume selectors.
Higher-end remotes have numerous other features:
Macro programming, allowing the user to program command sequences to be sent with one button press
LCD to display status information.
Programmable soft keys, allowing user-defined functions and macros
Aliases or "punchthroughs", which allow multiple devices to be accessed without changing device modes (for example, using the TV's volume control while the remote is still in DVD-player mode.)
IR code learning, allowing the remote to be programmed to control new devices not already in its code list
PC configuration, allowing the remote to be connected to a computer for easy setup
Some universal remotes also have the ability to make phone calls, replacing the home phone in that room.
Repeaters are available that can extend the range of a remote control; some remotes are designed to communicate with a dedicated repeater over RF, removing the line-of-sight requirement of IR repeaters, while others accept infrared signals from any remote and transmit them to the device being controlled. (The latter are sometimes built as hobby projects and are widely available in kit form.)
Upgradable and learning remotes
Some universal remotes allow the code lists programmed into the remote to be updated to support new brands or models of devices not currently supported by the remote. Some higher end universal learning remotes require a computer to be connected. The connection is typically done via USB from the computer to mini-USB on the remote or the remotes base station.
In 2000, a group of enthusiasts discovered that universal remotes made by UEI and sold under the One For All, RadioShack, and other brands can be reprogrammed by means of an interface called JP1.
IR learning remotes can learn the code for any button on many other IR remote controls. This functionality allows the remote to learn functions not supported by default for a particular device, making it sometimes possible to control devices that the remote was not originally designed to control. A drawback of this approach is that the learning remote needs a functioning teaching remote. Also, some entertainment equipment manufacturers use pulse frequencies that are higher than what the learning remote can detect and store in its memory.
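To make the record-and-replay idea concrete, here is a minimal, hypothetical Python sketch. The pulse timings are NEC-style example values, and the transmit side is stubbed out; a real remote would capture these durations from an IR receiver and drive an IR LED:

```python
# A minimal sketch of how an IR "learning" remote can store a button: it
# records the mark/space pulse durations (in microseconds) reported by a
# receiver and replays them later. No protocol decoding is needed, which
# is why learning works even for devices the remote was not designed for.
from typing import Dict, List

class LearningRemote:
    def __init__(self) -> None:
        self.codes: Dict[str, List[int]] = {}  # button name -> pulse timings

    def learn(self, button: str, pulses: List[int]) -> None:
        self.codes[button] = list(pulses)      # store the raw timing pattern

    def send(self, button: str, transmit) -> None:
        transmit(self.codes[button])           # replay the pattern verbatim

remote = LearningRemote()
remote.learn("power", [9000, 4500, 560, 560, 560, 1690])  # NEC-style header + bits
remote.send("power", transmit=print)  # print here instead of driving an IR LED
```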
Touch-screen remotes
These remotes feature an LCD screen that can be either monochrome or full color. The "buttons" are actually images on the screen, which, when touched, will send IR signals to controlled devices. Some models have multiple screens that are accessed through virtual buttons on the touch-screen and other models have a combination of the touch-screen and physical buttons.
Some models of the touch-screen remotes are programmed using a graphical interface program on a PC, which allows the user to customize the screens, backgrounds, buttons and even the "actions" the buttons perform. The "project" that is created is then downloaded into the remote through a USB cable or, in the most recent models, wirelessly by Bluetooth or Wi-Fi.
The newest touch-screen remotes, such as the Logitech 900 and 1100, include an RF transmitter to allow signals to reach locations much farther than the usual range of IR (approximately 6 meters). RF also does not require line of sight.
Some touch-screen remote controls, such as the Ray Super Remote, now have content recommendations built directly in to the universal remote control.
Smartphone and tablet universal remotes
Smartphones and tablets such as those running Nokia's Maemo (N900), Apple's iOS and Google's Android operating system can also be used as universal remote controls.
A number of devices from vendors such as Samsung, LG and Nokia include a built-in IR port that can be used as a remote, while others require a physical attachment, or 'dongle', be connected on to the phone when used as a remote. The dongle is required to convert the electrical control signals from the phone into infra red signals that are required by most home audio visual components for remote control. However it is also possible to implement a system that does not require a dongle. Such systems use a stand-alone piece of hardware called a 'gateway', which receives the electrical control signals from the smartphone in Bluetooth or wi-fi form and forward them on in infra red form to the components to be controlled.
See also
JP1 remote - Universal Electronics/One For All range of programmable remotes
Logitech Harmony Remote - Logitech's range of programmable remote controls.
Ray Super Remote - Touchscreen Universal Remote Control that recommends what to watch.
TV-B-Gone - A remote control device for turning off any television set
References
Audiovisual introductions in 1985
Assistive technology
Consumer electronics
Television technology
Remote control | Universal remote | [
"Technology"
] | 1,476 | [
"Information and communications technology",
"Television technology"
] |
3,132,981 | https://en.wikipedia.org/wiki/Ehud%20Hrushovski | Ehud Hrushovski (; born 30 September 1959) is a mathematical logician. He is a Merton Professor of Mathematical Logic at the University of Oxford and a Fellow of Merton College, Oxford. He was also Professor of Mathematics at the Hebrew University of Jerusalem.
Early life and education
Hrushovski's father, Benjamin Harshav (Hebrew: בנימין הרשב, né Hruszowski; 1928–2015), was a literary theorist, a Yiddish and Hebrew poet and a translator, professor at Yale University and Tel Aviv University in comparative literature. Ehud Hrushovski earned his PhD from the University of California, Berkeley in 1986 under Leo Harrington; his dissertation was titled Contributions to Stable Model Theory. He was a professor of mathematics at the Massachusetts Institute of Technology until 1994, when he became a professor at the Hebrew University of Jerusalem. Hrushovski moved in 2017 to the University of Oxford, where he is the Merton Professor of Mathematical Logic.
Career
Hrushovski is well known for several fundamental contributions to model theory, in particular in the branch that has become known as geometric model theory, and its applications. His PhD thesis revolutionized stable model theory (a part of model theory arising from the stability theory introduced by Saharon Shelah). Shortly afterwards he found counterexamples to the Trichotomy Conjecture of Boris Zilber and his method of proof has become well known as Hrushovski constructions and found many other applications since.
One of his most famous results is his proof of the geometric Mordell–Lang conjecture in all characteristics using model theory in 1996. This deep proof was a landmark in logic and geometry. He has had many other famous and notable results in model theory and its applications to geometry, algebra, and combinatorics.
Honours and awards
He was an invited speaker at the 1990 International Congress of Mathematicians and a plenary speaker at the 1998 ICM. He is a recipient of the Erdős Prize of the Israel Mathematical Union in 1994, the Rothschild Prize in 1998, and the Karp Prize of the Association for Symbolic Logic in 1993 (jointly with Alex Wilkie) and again in 1998. In 2007, he was honored with holding the Gödel Lecture. In his absence, a lecture on his work titled Algebraic Model Theory was given by Thomas Scanlon. In 2019 he was awarded the Heinz Hopf Prize and in 2022 the Shaw Prize in Mathematical Sciences.
Hrushovski is a fellow of the American Academy of Arts and Sciences (2007), and Israel Academy of Sciences and Humanities (2008). He was elected a Fellow of the Royal Society in 2020.
References
External links
Homepage Prof. Ehud Hrushovski
Ehud Hrushovski at the Hebrew University of Jerusalem
1959 births
Living people
Israeli mathematicians
Mathematical logicians
Fellows of Merton College, Oxford
Academic staff of the Hebrew University of Jerusalem
Members of the Israel Academy of Sciences and Humanities
20th-century American Jews
Israeli Ashkenazi Jews
Model theorists
Fellows of the Royal Society
21st-century American Jews
American Ashkenazi Jews
Erdős Prize recipients | Ehud Hrushovski | [
"Mathematics"
] | 627 | [
"Model theorists",
"Mathematical logic",
"Model theory",
"Mathematical logicians"
] |
3,132,996 | https://en.wikipedia.org/wiki/Oxyhydrogen | Oxyhydrogen is a mixture of hydrogen (H2) and oxygen (O2) gases. This gaseous mixture is used for torches to process refractory materials and was the first
gaseous mixture used for welding. Theoretically, a ratio of 2:1 hydrogen:oxygen is enough to achieve maximum efficiency; in practice a ratio 4:1 or 5:1 is needed to avoid an oxidizing flame.
This mixture may also be referred to as knallgas (Scandinavian and German Knallgas, "bang gas"), although some authors define knallgas to be a generic term for the mixture of fuel with the precise amount of oxygen required for complete combustion, thus 2:1 oxyhydrogen would be called "hydrogen-knallgas".
"Brown's gas" and HHO are terms for oxyhydrogen originating in pseudoscience, although is preferred due to meaning .
Properties
Oxyhydrogen will combust when brought to its autoignition temperature. For the stoichiometric mixture in air, at normal atmospheric pressure, autoignition occurs at about 570 °C (1065 °F). The minimum energy required to ignite such a mixture, at lower temperatures, with a spark is about 20 microjoules. At standard temperature and pressure, oxyhydrogen can burn when it is between about 4% and 95% hydrogen by volume.
When ignited, the gas mixture converts to water vapor and releases energy, which sustains the reaction: 241.8 kJ of energy (LHV) for every mole of H2 burned. The amount of heat energy released is independent of the mode of combustion, but the temperature of the flame varies. The maximum temperature of about 2,800 °C is achieved with an exact stoichiometric mixture, about 700 °C hotter than a hydrogen flame in air.
When either of the gases are mixed in excess of this ratio, or when mixed with an inert gas like nitrogen, the heat must spread throughout a greater quantity of matter, reducing flame temperature.
Oxyhydrogen is explosive and can detonate when ignited, releasing a large amount of energy. This is often demonstrated in classroom environments in which teachers fill a balloon with the gas, due to the easy access of hydrogen and oxygen.
Production by electrolysis
A pure stoichiometric mixture may be obtained by water electrolysis, which uses an electric current to dissociate the water molecules:
Electrolysis: 2 H2O → 2 H2 + O2
Combustion: 2 H2 + O2 → 2 H2O
William Nicholson was the first to decompose water in this manner in 1800. In theory, the input energy of a closed system always equals the output energy, as the first law of thermodynamics states. However, in practice no systems are perfectly closed, and the energy required to generate the oxyhydrogen always exceeds the energy released by combusting it, even at maximum practical efficiency, as the second law of thermodynamics implies (see Electrolysis of water#Efficiency).
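The energy imbalance can be made concrete with the figures involved. The sketch below uses the 241.8 kJ/mol (LHV) combustion figure from the text together with the standard 285.8 kJ/mol (HHV) minimum needed to split liquid water; the latter is a standard thermodynamic value, not taken from this article:

```python
# Back-of-the-envelope energy balance for the electrolysis/combustion
# round trip of one mole of hydrogen.
LHV_COMBUSTION_KJ_PER_MOL = 241.8    # released when burning 1 mol H2 to vapor
HHV_ELECTROLYSIS_KJ_PER_MOL = 285.8  # minimum input to split 1 mol liquid H2O

round_trip = LHV_COMBUSTION_KJ_PER_MOL / HHV_ELECTROLYSIS_KJ_PER_MOL
print(f"Ideal round-trip efficiency: {round_trip:.1%}")  # ~84.6%, below 100%
# Real electrolyzers need more than the thermodynamic minimum, so the
# practical round trip is lower still, consistent with the second law.
```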
Applications
Lighting
Many forms of oxyhydrogen lamps have been described, such as the limelight, which used an oxyhydrogen flame to heat a piece of quicklime to white hot incandescence. Because of the explosiveness of the oxyhydrogen, limelights have been replaced by electric lighting.
Oxyhydrogen blowpipe
The foundations of the oxy-hydrogen blowpipe were laid down by Carl Wilhelm Scheele and Joseph Priestley around the last quarter of the eighteenth century. The oxy-hydrogen blowpipe itself was developed by the Frenchman Bochard-de-Saron, the English mineralogist Edward Daniel Clarke and the American chemist Robert Hare in the late 18th and early 19th centuries. It produced a flame hot enough to melt such refractory materials as platinum, porcelain, fire brick, and corundum, and was a valuable tool in several fields of science. It is used in the Verneuil process to produce synthetic corundum.
Oxyhydrogen torch
An oxyhydrogen torch (also known as hydrogen torch) is an oxy-gas torch that burns hydrogen (the fuel) with oxygen (the oxidizer). It is used for cutting and welding metals, glasses, and thermoplastics.
Due to competition from arc welding and other oxy-fuel torches such as the acetylene-fueled cutting torch, the oxyhydrogen torch is seldom used today, but it remains the preferred cutting tool in some niche applications.
Oxyhydrogen was once used in working platinum, because at the time, only it could burn hot enough to melt the metal (platinum melts at about 1,768 °C). These techniques have been superseded by the electric arc furnace.
Pseudoscientific claims
Oxyhydrogen is associated with various exaggerated claims. It is often called "Brown's gas" or "HHO gas", a term popularized by fringe physicist Ruggero Santilli, who claimed that his HHO gas, produced by a special apparatus, is "a new form of water", with new properties, based on his fringe theory of "magnecules".
Many other pseudoscientific claims have been made about oxyhydrogen, like an ability to neutralize radioactive waste, help plants to germinate, and more.
Oxyhydrogen is often mentioned in conjunction with vehicles that claim to use water as a fuel. The most common and decisive counter-argument against producing this gas on board to use as a fuel or fuel additive is that more energy is always needed to split water molecules than is recouped by burning the resulting gas. Additionally, the volume of gas that can be produced for on-demand consumption through electrolysis is very small in comparison to the volume consumed by an internal combustion engine.
An article in Popular Mechanics in 2008 reported that oxyhydrogen does not increase the fuel economy in automobiles.
"Water-fueled" cars should not be confused with hydrogen-fueled cars, where the hydrogen is produced elsewhere and used as fuel or where it is used as fuel enhancement.
References
Fire
Chemical mixtures
Electrolysis
Oxygen
Hydrogen technologies
Hydrogen production
Fuels
Water fuel
Industrial gases | Oxyhydrogen | [
"Chemistry"
] | 1,244 | [
"Chemical energy sources",
"Electrochemistry",
"Combustion",
"Industrial gases",
"Chemical mixtures",
"Fuels",
"Electrolysis",
"Chemical process engineering",
"nan",
"Fire"
] |
3,132,998 | https://en.wikipedia.org/wiki/Lilac%20chaser | The lilac chaser is a visual illusion, also known as the Pac-Man illusion. It consists of 12 lilac (or pink, rose, or magenta), blurred discs arranged in a circle (like the numbers on a clock), around a small black, central cross on a grey background. One of the discs disappears briefly (for about 0.1 seconds), then the next (about 0.125 seconds later), and the next, and so on, in a clockwise direction. When one stares at the cross for at least 30 seconds, one sees three illusions:
A gap running around the circle of lilac discs;
A green disc running around the circle of lilac discs in place of the gap; and
The green disc running around on the grey background, with the lilac discs having disappeared in sequence.
The illusion was created by Jeremy Hinton some time before 2005. It then spread widely over the internet. It is a visual illusion that demonstrates color adaptation or human visual perception.
The chaser effect results from the phi phenomenon illusion, combined with an afterimage effect in which an opposite color, or complementary color – green – appears when each lilac spot disappears (if the discs were blue, one would see yellow), and Troxler's fading of the lilac discs.
History
The illusion was created by Jeremy Hinton sometime before 2005. He stumbled across the configuration while devising stimuli for visual motion experiments. In one version of a program to move a disc around a central point, he mistakenly neglected to erase the preceding disc, which created the appearance of a moving gap. On noticing the moving green-disc afterimage, he adjusted foreground and background colors, number of discs, and timing to optimize the effect.
In 2005 Hinton blurred the discs, allowing them to disappear when a viewer looks steadily at the central cross. Hinton entered the illusion in the European Conference on Visual Perception's Visual Illusion Contest, but was disqualified for not being registered for that year's conference. Hinton approached Michael Bach, who placed an animated GIF of the illusion on his web page of illusions, naming it the "Lilac Chaser", and later presenting a configurable Java version. The illusion became popular on the Internet in 2005.
Explanation
The lilac chaser illusion combines three simple, well-known effects, as described, for example, by Bertamini.
The phi phenomenon is the optical illusion of perceiving continuous motion between separate objects viewed rapidly in succession. The phenomenon was defined by Max Wertheimer in the Gestalt psychology in 1912 and along with persistence of vision formed a part of the base of the theory of cinema, applied by Hugo Münsterberg in 1916. The visual events in the lilac chaser initially are the disappearances of the lilac discs. The visual events then become the appearances of green afterimages (see next).
When a lilac stimulus that is presented to a particular region of the visual field for a long time (say 10 seconds or so) disappears, a green afterimage will appear. The afterimage lasts only a short time, and in this case is effaced by the reappearance of the lilac stimulus. The afterimage is a consequence of neural adaptation of the cells that carry signals from the retina of the eye to the rest of the brain, the retinal ganglion cells. According to opponent process theory, the human visual system interprets color information by processing signals from the retinal ganglion cells in three opponent channels: red versus green, blue versus yellow, and black versus white. Responses to one color of an opponent channel are antagonistic to those of the other color. Therefore, a lilac image (a combination of red and blue) will produce a green afterimage from adaptation of the red and the blue channels, so they produce weaker signals. Anything resulting in less lilac is interpreted as a combination of the other primary colors, which are green and yellow.
When a blurry stimulus is presented to a region of the visual field, and we keep our eyes still, that stimulus will disappear even though it is still physically presented. This is called Troxler fading.
These effects combine to yield the sight of a green spot running around in a circle on a grey background when only stationary, flashing lilac spots have been presented.
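As a toy illustration of the opponent-process account above, the sketch below approximates the afterimage as the per-channel complement of the stimulus color; the RGB triple used for "lilac" is an illustrative assumption, not the actual stimulus value:

```python
# Toy model of the afterimage hue: adaptation weakens the adapted response,
# so the rebound appears as the opponent (complementary) color. Here we
# approximate opponency by complementing each 8-bit RGB channel.
def afterimage_rgb(r, g, b):
    return (255 - r, 255 - g, 255 - b)

lilac = (230, 150, 230)        # a red+blue (lilac/magenta-like) stimulus
print(afterimage_rgb(*lilac))  # (25, 105, 25): a green, as the text predicts
```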
Psychophysics
Psychophysical research has used lilac chaser's properties. Hinton optimized the conditions for all three aspects of the illusion before releasing it. He also noted that the color of the green disc could be outside the color gamut of the monitor on which it was created (because the monitor never displays the green disc, only lilac ones). Michael Bach's version of the illusion allows viewers to adjust some aspects of the illusion. It is simple to confirm that the illusion occurs with other colors.
See also
Checker shadow illusion
References
External links
Michael Bach's Java simulation and explanation
"Electroneurobiology article". The ontological nature of the color afterimages have been analyzed in this article, "A visual yet non-optical subjective intonation", by Mariela Szirko
Optical illusions | Lilac chaser | [
"Physics"
] | 1,064 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
3,133,081 | https://en.wikipedia.org/wiki/Cray%20MTA-2 | The Cray MTA-2 is a shared-memory MIMD computer marketed by Cray Inc. It is an unusual design based on the Tera computer designed by Tera Computer Company. The original Tera computer (also known as the MTA) turned out to be nearly unmanufacturable due to its aggressive packaging and circuit technology. The MTA-2 was an attempt to correct these problems while maintaining essentially the same processor architecture respun in one silicon ASIC, down from some 26 gallium arsenide ASICs in the original MTA; and while regressing the network design from a 4-D torus topology to a less efficient but more scalable Cayley graph topology. The name Cray was added to the second version after Tera Computer Company bought the remains of the Cray Research division of Silicon Graphics in 2000 and renamed itself Cray Inc.
The MTA-2 was not a commercial success, with only one moderately-sized 40-processor system ("Boomer") being sold to the United States Naval Research Laboratory in 2002, and one 4-processor system sold to the Electronic Navigation Research Institute (ENRI) in Japan.
The MTA computers pioneered several technologies, presumably to be used in future Cray Inc. products:
A simple, whole-machine-oriented programming model.
Hardware-based multithreading.
Low-overhead thread synchronization.
See also
Cray MTA
Heterogeneous Element Processor
References
External links
Utrecht University HPCG - Cray MTA-2 page
Mta-2
Supercomputers | Cray MTA-2 | [
"Technology"
] | 327 | [
"Supercomputers",
"Supercomputing"
] |
3,133,115 | https://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson%20problem | In geometric graph theory, the Hadwiger–Nelson problem, named after Hugo Hadwiger and Edward Nelson, asks for the minimum number of colors required to color the plane such that no two points at distance 1 from each other have the same color. The answer is unknown, but has been narrowed down to one of the numbers 5, 6 or 7. The correct value may depend on the choice of axioms for set theory.
Relation to finite graphs
The question can be phrased in graph theoretic terms as follows. Let G be the unit distance graph of the plane: an infinite graph with all points of the plane as vertices and with an edge between two vertices if and only if the distance between the two points is 1. The Hadwiger–Nelson problem is to find the chromatic number of G. As a consequence, the problem is often called "finding the chromatic number of the plane". By the de Bruijn–Erdős theorem, a result of de Bruijn and Erdős (1951), the problem is equivalent (under the assumption of the axiom of choice) to that of finding the largest possible chromatic number of a finite unit distance graph.
History
According to Jensen and Toft (1995), the problem was first formulated by Nelson in 1950, and first published by Gardner (1960). Hadwiger (1945) had earlier published a related result, showing that any cover of the plane by five congruent closed sets contains a unit distance in one of the sets, and he also mentioned the problem in a later paper (Hadwiger 1961). Soifer (2008) discusses the problem and its history extensively.
One application of the problem connects it to the Beckman–Quarles theorem, according to which any mapping of the Euclidean plane (or any higher dimensional space) to itself that preserves unit distances must be an isometry, preserving all distances. Finite colorings of these spaces can be used to construct mappings from them to higher-dimensional spaces that preserve distances but are not isometries. For instance, the Euclidean plane can be mapped to a six-dimensional space by coloring it with seven colors so that no two points at distance one have the same color, and then mapping the points by their colors to the seven vertices of a six-dimensional regular simplex with unit-length edges. This maps any two points at unit distance to distinct colors, and from there to distinct vertices of the simplex, at unit distance apart from each other. However, it maps all other distances to zero or one, so it is not an isometry. If the number of colors needed to color the plane could be reduced from seven to a lower number, the same reduction would apply to the dimension of the target space in this construction.
Lower and upper bounds
The fact that the chromatic number of the plane must be at least four follows from the existence of a seven-vertex unit distance graph with chromatic number four, named the Moser spindle after its discovery in 1961 by the brothers William and Leo Moser. This graph consists of two unit equilateral triangles joined at a common vertex, x. Each of these triangles is joined along another edge to another equilateral triangle; the vertices y and z of these joined triangles are at unit distance from each other. If the plane could be three-colored, the coloring within the triangles would force y and z to both have the same color as x, but then, since y and z are at unit distance from each other, we would not have a proper coloring of the unit distance graph of the plane. Therefore, at least four colors are needed to color this graph and the plane containing it. An alternative lower bound in the form of a ten-vertex four-chromatic unit distance graph, the Golomb graph, was discovered at around the same time by Solomon W. Golomb.
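The lower-bound argument above can be verified by brute force. The sketch below encodes the Moser spindle as an abstract graph, with our own vertex labels 0–6 standing for x, a, b, y, c, d, z in the construction just described:

```python
# Brute-force check that the Moser spindle needs four colors.
# Rhombus 1 = x,a,b,y; rhombus 2 = x,c,d,z; plus the unit-distance edge y-z.
from itertools import product

EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # triangles x-a-b and a-b-y
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),   # triangles x-c-d and c-d-z
         (3, 6)]                                   # y-z at unit distance

def proper(coloring):
    return all(coloring[u] != coloring[v] for u, v in EDGES)

for k in (3, 4):
    ok = any(proper(c) for c in product(range(k), repeat=7))
    print(f"{k}-colorable: {ok}")   # 3: False, 4: True
```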
The lower bound was raised to five in 2018, when computer scientist and biogerontologist Aubrey de Grey found a 1581-vertex, non-4-colourable unit-distance graph. The proof is computer assisted. Mathematician Gil Kalai and computer scientist Scott Aaronson posted discussion of de Grey's finding, with Aaronson reporting independent verifications of de Grey's result using SAT solvers. Kalai linked additional posts by Jordan Ellenberg and Noam Elkies, with Elkies and (separately) de Grey proposing a Polymath project to find non-4-colorable unit distance graphs with fewer vertices than the one in de Grey's construction. As of 2021, the smallest known unit distance graph with chromatic number 5 has 509 vertices. The page of the Polymath project, Polymath16, contains further research, media citations and verification data.
The upper bound of seven on the chromatic number follows from the existence of a tessellation of the plane by regular hexagons, with diameter slightly less than one, that can be assigned seven colors in a repeating pattern to form a 7-coloring of the plane. According to Soifer (2008), this upper bound was first observed by John R. Isbell.
Variations
The problem can easily be extended to higher dimensions. Finding the chromatic number of 3-space is a particularly interesting problem. As with the version on the plane, the answer is not known, but has been shown to be at least 6 and at most 15.
In the n-dimensional case of the problem, an easy upper bound on the number of required colorings found from tiling n-dimensional cubes is $\lfloor 2 + \sqrt{n} \rfloor^n$. A lower bound from simplexes is $n + 1$. For $n > 1$, a lower bound of $n + 2$ is available using a generalization of the Moser spindle: a pair of the objects (each two simplexes glued together on a facet) which are joined on one side by a point and the other side by a line. An exponential lower bound was proved by Frankl and Wilson in 1981.
One can also consider colorings of the plane in which the sets of points of each color are restricted to sets of some particular type. Such restrictions may cause the required number of colors to increase, as they prevent certain colorings from being considered acceptable. For instance, if a coloring of the plane consists of regions bounded by Jordan curves, then at least six colors are required.
See also
Four color theorem
Notes
References
External links
Unsolved problems in graph theory
Geometric graph theory
Graph coloring
Infinite graphs
Mathematical problems | Hadwiger–Nelson problem | [
"Mathematics"
] | 1,266 | [
"Unsolved problems in mathematics",
"Graph coloring",
"Mathematical objects",
"Graph theory",
"Infinity",
"Infinite graphs",
"Unsolved problems in graph theory",
"Mathematical relations",
"Geometric graph theory",
"Mathematical problems"
] |
3,133,127 | https://en.wikipedia.org/wiki/F%C3%A1ry%E2%80%93Milnor%20theorem | In the mathematical theory of knots, the Fáry–Milnor theorem, named after István Fáry and John Milnor, states that three-dimensional smooth curves with small total curvature must be unknotted. The theorem was proved independently by Fáry in 1949 and Milnor in 1950. It was later shown to follow from the existence of quadrisecants.
Statement
If K is any closed curve in Euclidean space that is sufficiently smooth to define the curvature $\kappa$ at each of its points, and if the total absolute curvature is less than or equal to 4π, then K is an unknot, i.e.:
$\oint_K \kappa(s)\, ds \le 4\pi \implies K \text{ is an unknot.}$
The contrapositive tells us that if K is not an unknot, i.e. K is not isotopic to the circle, then the total curvature will be strictly greater than 4π. Notice that having the total curvature less than or equal to 4π is merely a sufficient condition for K to be an unknot; it is not a necessary condition. In other words, although all knots with total curvature less than or equal to 4π are the unknot, there exist unknots with curvature strictly greater than 4π.
Generalizations to non-smooth curves
For closed polygonal chains the same result holds with the integral of curvature replaced by the sum of angles between adjacent segments of the chain. By approximating arbitrary curves by polygonal chains, one may extend the definition of total curvature to larger classes of curves, within which the Fáry–Milnor theorem also holds (, ).
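For polygonal chains, the total curvature in the sense above is simply the sum of turning angles, which is straightforward to compute. In the minimal sketch below, a planar square attains the value 2π, the minimum for any closed curve, while a knotted chain would have to exceed 4π:

```python
# Total curvature of a closed polygonal chain: the sum over vertices of
# the angle between the incoming and outgoing segments.
import numpy as np

def total_curvature(points):
    pts = np.asarray(points, dtype=float)
    total = 0.0
    n = len(pts)
    for i in range(n):
        e_in = pts[i] - pts[i - 1]           # incoming edge (wraps around)
        e_out = pts[(i + 1) % n] - pts[i]    # outgoing edge
        cosang = np.dot(e_in, e_out) / (np.linalg.norm(e_in) * np.linalg.norm(e_out))
        total += np.arccos(np.clip(cosang, -1.0, 1.0))
    return total

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(total_curvature(square) / np.pi)  # 2.0 -> total curvature 2*pi
```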
References
External links
Fenner describes a geometric proof of the theorem, and of the related theorem that any smooth closed curve has total curvature at least 2π.
Knot theory
Theorems in topology | Fáry–Milnor theorem | [
"Mathematics"
] | 358 | [
"Mathematical theorems",
"Mathematical problems",
"Topology",
"Theorems in topology"
] |
3,133,250 | https://en.wikipedia.org/wiki/Zariski%20geometry | In mathematics, a Zariski geometry consists of an abstract structure introduced by Ehud Hrushovski and Boris Zilber, in order to give a characterisation of the Zariski topology on an algebraic curve, and all its powers. The Zariski topology on a product of algebraic varieties is very rarely the product topology, but richer in closed sets defined by equations that mix two sets of variables. The result described gives that a very definite meaning, applying to projective curves and compact Riemann surfaces in particular.
Definition
A Zariski geometry consists of a set X and a topological structure on each of the sets
X, X², X³, ...
satisfying certain axioms.
(N) Each of the Xⁿ is a Noetherian topological space, of dimension at most n.
Some standard terminology for Noetherian spaces will now be assumed.
(A) In each Xⁿ, the subsets defined by equality in an n-tuple are closed. The mappings
Xᵐ → Xⁿ
defined by projecting out certain coordinates and setting others as constants are all continuous.
(B) For a projection
p: Xᵐ → Xⁿ
and an irreducible closed subset Y of Xᵐ, p(Y) lies between its closure Z and Z \ Z′, where Z′ is a proper closed subset of Z. (This is quantifier elimination, at an abstract level.)
(C) X is irreducible.
(D) There is a uniform bound on the number of elements of a fiber in a projection of any closed set in Xᵐ, other than the cases where the fiber is X.
(E) A closed irreducible subset of Xᵐ, of dimension r, when intersected with a diagonal subset in which s coordinates are set equal, has all components of dimension at least r − s + 1.
The further condition required is called very ample (cf. very ample line bundle). It is assumed there is an irreducible closed subset P of some Xᵐ, and an irreducible closed subset Q of P × X², with the following properties:
(I) Given pairs (x, y), (x′, y′) in X², for some t in P, the set of (t, u, v) in Q includes (t, x, y) but not (t, x′, y′)
(J) For t outside a proper closed subset of P, the set of (x, y) in X² with (t, x, y) in Q is an irreducible closed set of dimension 1.
(K) For all pairs (x, y), (x′, y′) in X², selected from outside a proper closed subset, there is some t in P such that the set of (t, u, v) in Q includes (t, x, y) and (t, x′, y′).
Geometrically this says there are enough curves to separate points (I), and to connect points (K); and that such curves can be taken from a single parametric family.
Then Hrushovski and Zilber prove that under these conditions there is an algebraically closed field K, and a non-singular algebraic curve C, such that its Zariski geometry of powers and their Zariski topology is isomorphic to the given one. In short, the geometry can be algebraized.
References
Model theory
Algebraic curves
Vector bundles | Zariski geometry | [
"Mathematics"
] | 697 | [
"Mathematical logic",
"Model theory"
] |
3,133,272 | https://en.wikipedia.org/wiki/Retinite | Retinite is a resin, particularly one from beds of brown coal, that is near amber in appearance but contains little or no succinic acid. It may conveniently serve as a generic name, since no two independent occurrences prove to be alike, and the indefinite multiplication of names, none of them properly specific, is not to be desired.
Retinite resins contain no succinic acid and 6% to 15% oxygen.
References
Resins | Retinite | [
"Physics"
] | 94 | [
"Amorphous solids",
"Unsolved problems in physics",
"Resins"
] |
3,133,314 | https://en.wikipedia.org/wiki/Steve%20Ciarcia | Steve Ciarcia is an American embedded control systems engineer. He became popular through his Ciarcia's Circuit Cellar column in BYTE magazine, and later through the Circuit Cellar magazine that he published. He is also the author of Build Your Own Z80 Computer, published in 1981, and Take My Computer...Please!, published in 1978. He has also compiled seven volumes of his hardware project articles that appeared in BYTE magazine.
In 1982 and 1983, he published a series of articles on building the MPX-16, a 16-bit single-board computer that was hardware-compatible with the IBM PC.
In December 2009, Steve Ciarcia announced that for the American market a strategic cooperation would be entered between Elektor and his Circuit Cellar magazine. In November 2012, Steve Ciarcia announced that he was quitting Circuit Cellar and Elektor would take it over.
In October 2014, Ciarcia purchased Circuit Cellar, audioXpress, Voice Coil, Loudspeaker Industry Sourcebook, and their respective websites, newsletters, and products from Netherlands-based Elektor International Media. The aforementioned magazines will continue to be published by Ciarcia's US-based team.
In July 2016, Steve Ciarcia sold the company to long time employee KC Prescott operating under the company name KCK Media Corp.
References
External links
Circuit Cellar magazine
Index on Steve Ciarcia's articles in BYTE
American magazine editors
American technology writers
Control theorists
Living people
Year of birth missing (living people) | Steve Ciarcia | [
"Engineering"
] | 308 | [
"Control engineering",
"Control theorists"
] |
3,133,347 | https://en.wikipedia.org/wiki/Propiolic%20acid | Propiolic acid is the organic compound with the formula HC2CO2H. It is the simplest acetylenic carboxylic acid. It is a colourless liquid that crystallises to give silky crystals. Near its boiling point, it decomposes.
It is soluble in water and possesses an odor like that of acetic acid.
Preparation
It is prepared commercially by oxidizing propargyl alcohol at a lead electrode. It can also be prepared by decarboxylation of acetylenedicarboxylic acid.
Reactions and applications
Exposure to sunlight converts it into trimesic acid (benzene-1,3,5-tricarboxylic acid). It undergoes bromination to give dibromoacrylic acid. With hydrogen chloride it forms chloroacrylic acid. Its ethyl ester condenses with hydrazine to form pyrazolone.
It forms a characteristic explosive solid upon treatment of its aqueous solution with ammoniacal silver nitrate. An amorphous explosive precipitate forms with ammoniacal cuprous chloride.
Propiolates
Propiolates are esters or salts of propiolic acid. Common examples include methyl propiolate and ethyl propiolate.
See also
Propargyl
Propargyl alcohol
References
Carboxylic acids
Alkyne derivatives | Propiolic acid | [
"Chemistry"
] | 280 | [
"Carboxylic acids",
"Functional groups"
] |
3,133,356 | https://en.wikipedia.org/wiki/Switch%20virtual%20interface | A switch virtual interface (SVI) represents a logical layer-3 interface on a switch.
VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN need to communicate with hosts in another VLAN, the traffic must be routed between them. This is known as inter-VLAN routing. On layer-3 switches it is accomplished by the creation of layer-3 interfaces (SVIs). Inter VLAN routing, in other words routing between VLANs, can be achieved using SVIs.
An SVI, or VLAN interface, is a virtual routed interface that connects a VLAN on the device to the Layer 3 router engine on the same device. Only one VLAN interface can be associated with a VLAN, and you need to configure a VLAN interface for a VLAN only when you want to route between VLANs or to provide IP host connectivity to the device through a virtual routing and forwarding (VRF) instance that is not the management VRF. When VLAN interface creation is enabled, a switch creates a VLAN interface for the default VLAN (VLAN 1) to permit remote switch administration.
SVIs are generally configured for a VLAN for the following reasons:
Allow traffic to be routed between VLANs by providing a default gateway for the VLAN.
Provide fallback bridging (if required for non-routable protocols).
Provide Layer 3 IP connectivity to the switch.
Support bridging configurations and routing protocol.
Access Layer - 'Routed Access' Configuration (in lieu of Spanning Tree)
SVIs advantages include:
Much faster than router-on-a-stick, because everything is hardware-switched and routed.
No need for external links from the switch to the router for routing.
Not limited to one link. Layer 2 EtherChannels can be used between the switches to get more bandwidth.
Latency is much lower, because it does not need to leave the switch
An SVI can also be known as a Routed VLAN Interface (RVI) by some vendors.
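For illustration, a minimal Cisco IOS-style configuration for routing between two VLANs through SVIs might look like the sketch below; the VLAN IDs and IP addresses are hypothetical, and exact commands vary by platform and software version:

```
! Minimal IOS-style sketch: inter-VLAN routing with two SVIs.
! VLAN IDs and addresses below are hypothetical examples.
ip routing                                 ! enable the Layer 3 engine
!
interface Vlan10
 ip address 192.168.10.1 255.255.255.0     ! default gateway for VLAN 10 hosts
 no shutdown
!
interface Vlan20
 ip address 192.168.20.1 255.255.255.0     ! default gateway for VLAN 20 hosts
 no shutdown
```

Hosts in each VLAN point their default gateway at the local SVI address, and the switch routes between the two subnets in hardware.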
References
Cisco Systems, Configure InterVLAN Routing on Layer 3 Switches
Cisco Systems, Configuring SVI
Cisco Systems, 2006, "Building Cisco Multilayer Switched Networks" (Version 3.0), Cisco Systems Inc.
Switch Virtual Interfaces (SVI) configuration
Data Centre Networking Module (COMH9003) | Cork Institute of Technology
Computer networking | Switch virtual interface | [
"Technology",
"Engineering"
] | 496 | [
"Computer networking",
"Computer engineering",
"Computer network stubs",
"Computer science",
"Computing stubs"
] |
3,133,405 | https://en.wikipedia.org/wiki/Flow%20battery | A flow battery, or redox flow battery (after reduction–oxidation), is a type of electrochemical cell where chemical energy is provided by two chemical components dissolved in liquids that are pumped through the system on separate sides of a membrane. Ion transfer inside the cell (accompanied by current flow through an external circuit) occurs across the membrane while the liquids circulate in their respective spaces.
Various flow batteries have been demonstrated, including inorganic and organic forms. Flow battery design can be further classified into full flow, semi-flow, and membraneless.
The fundamental difference between conventional and flow batteries is that energy is stored in the electrode material in conventional batteries, while in flow batteries it is stored in the electrolyte.
A flow battery may be used like a fuel cell (where new charged negolyte (a.k.a. reducer or fuel) and charged posolyte (a.k.a. oxidant) are added to the system) or like a rechargeable battery (where an electric power source drives regeneration of the reducer and oxidant).
Flow batteries have certain technical advantages over conventional rechargeable batteries with solid electroactive materials, such as independent scaling of power (determined by the size of the stack) and of energy (determined by the size of the tanks), long cycle and calendar life, and potentially lower total cost of ownership. However, flow batteries suffer from low cycle energy efficiency (50–80%). This drawback stems from the need to operate flow batteries at high (≥ 100 mA/cm²) current densities to reduce the effect of internal crossover (through the membrane/separator) and to reduce the cost of power (size of stacks). Also, most flow batteries (Zn-Cl2, Zn-Br2 and H2-LiBrO3 are exceptions) have lower specific energy (heavier weight) than lithium-ion batteries. The heavier weight results mostly from the need to use a solvent (usually water) to maintain the redox active species in the liquid phase.
Patent Classifications for flow batteries had not been fully developed as of 2021. Cooperative Patent Classification considers flow batteries as a subclass of regenerative fuel cell (H01M8/18), even though it is more appropriate to consider fuel cells as a subclass of flow batteries.
Cell voltage is chemically determined by the Nernst equation and ranges, in practical applications, from 1.0 to 2.43 volts. The energy capacity is a function of the electrolyte volume and the power is a function of the surface area of the electrodes.
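For reference, a general form of the Nernst equation is sketched below; E° is the standard cell potential, R the gas constant, T the absolute temperature, n the number of electrons transferred, F the Faraday constant, and Q the reaction quotient of the cell reaction (this is the textbook relation, not a formula specific to any one flow chemistry):

$$E_{cell} = E^{\circ} - \frac{RT}{nF}\ln Q$$

As the electrolytes charge or discharge, Q changes, which is why practical cell voltages drift within the range quoted above.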
History
The zinc–bromine flow battery (Zn-Br2) was the original flow battery. John Doyle filed a patent for it on September 29, 1879. Zn-Br2 batteries have relatively high specific energy, and were demonstrated in electric cars in the 1970s.
Walther Kangro, an Estonian chemist working in Germany in the 1950s, was the first to demonstrate flow batteries based on dissolved transition metal ions: Ti–Fe and Cr–Fe. After initial experimentation with Ti–Fe redox flow battery (RFB) chemistry, NASA and groups in Japan and elsewhere selected Cr–Fe chemistry for further development. Mixed solutions (i.e. comprising both chromium and iron species in the negolyte and in the posolyte) were used in order to reduce the effect of time-varying concentration during cycling.
In the late 1980s, Sum, Rychcik and Skyllas-Kazacos at the University of New South Wales (UNSW) in Australia demonstrated vanadium RFB chemistry. UNSW filed several patents related to VRFBs, which were later licensed to Japanese, Thai and Canadian companies that tried to commercialize this technology with varying success.
Organic redox flow batteries emerged in 2009.
In 2022, Dalian, China began operating a 400 MWh, 100 MW vanadium flow battery, then the largest of its type.
Sumitomo Electric has built flow batteries for use in Taiwan, Belgium, Australia, Morocco and California. Hokkaido's flow battery farm was the biggest in the world when it opened in April 2022, until China deployed one eight times larger that can match the output of a natural gas plant.
Design
A flow battery is a rechargeable fuel cell in which an electrolyte containing one or more dissolved electroactive elements flows through an electrochemical cell that reversibly converts chemical energy to electrical energy. Electroactive elements are "elements in solution that can take part in an electrode reaction or that can be adsorbed on the electrode."
Electrolyte is stored externally, generally in tanks, and is typically pumped through the cell (or cells) of the reactor. Flow batteries can be rapidly "recharged" by replacing discharged electrolyte liquid (analogous to refueling internal combustion engines) while recovering the spent material for recharging. They can also be recharged in situ. Many flow batteries use carbon felt electrodes due to their low cost and adequate electrical conductivity, despite the limited power density that results from their low inherent activity toward many redox couples. The amount of electricity that can be generated depends on the volume of electrolyte.
Flow batteries are governed by the design principles of electrochemical engineering.
Evaluation
Redox flow batteries, and to a lesser extent hybrid flow batteries, have the advantages of:
Independent scaling of energy (tanks) and power (stack), which allows for a cost/weight/etc. optimization for each application
Long cycle and calendar lives (because there are no solid-to-solid phase transitions, which degrade lithium-ion and related batteries)
Quick response times
No need for "equalisation" charging (the overcharging of a battery to ensure all cells have an equal charge)
No harmful emissions
Little/no self-discharge during idle periods
Recycling of electroactive materials
Some types offer easy state-of-charge determination (through voltage dependence on charge), low maintenance and tolerance to overcharge/overdischarge.
They are safe because they typically do not contain flammable electrolytes, and electrolytes can be stored away from the power stack.
The main disadvantages are:
Low energy density (large tanks are required to store useful amounts of energy)
Low charge and discharge rates. This implies large electrodes and membrane separators, increasing cost.
Lower energy efficiency, because they operate at higher current densities to minimize the effects of cross-over (internal self-discharge) and to reduce cost.
Flow batteries typically have a higher energy efficiency than fuel cells, but lower than lithium-ion batteries.
Traditional flow battery chemistries have both low specific energy (which makes them too heavy for fully electric vehicles) and low specific power (which makes them too expensive for stationary energy storage). However, a high power of 1.4 W/cm2 was demonstrated for hydrogen–bromine flow batteries, and a high specific energy (530 Wh/kg at the tank level) was shown for hydrogen–bromate flow batteries.
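The decoupling of energy (tank size) and power (stack size) discussed in this section can be made concrete with a small sizing sketch. The Python snippet below uses idealized formulas and purely illustrative numbers (the concentration, voltage, and current density are assumptions, and all losses are ignored):

```python
F = 96485.0  # Faraday constant, C/mol

def tank_energy_wh(volume_l, conc_mol_per_l, n_electrons, cell_voltage_v):
    """Ideal energy stored in one electrolyte tank: total charge times voltage."""
    charge_c = volume_l * conc_mol_per_l * n_electrons * F
    return charge_c * cell_voltage_v / 3600.0  # joules -> watt-hours

def stack_power_w(electrode_area_cm2, current_density_ma_cm2, cell_voltage_v):
    """Stack power: total electrode area times current density times voltage."""
    return electrode_area_cm2 * (current_density_ma_cm2 / 1000.0) * cell_voltage_v

# Doubling the tank volume doubles energy without touching the stack, and
# doubling the electrode area doubles power without touching the tanks:
print(tank_energy_wh(1000, 1.6, 1, 1.4))  # ~60 kWh from a 1000 L tank
print(stack_power_w(40000, 100, 1.4))     # ~5.6 kW from 4 m2 of electrodes
```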
Traditional flow batteries
The redox cell uses redox-active species in fluid (liquid or gas) media. Redox flow batteries are rechargeable (secondary) cells. Because they employ heterogeneous electron transfer rather than solid-state diffusion or intercalation, they are more similar to fuel cells than to conventional batteries. The main reason fuel cells are not considered to be batteries is that originally (in the 1800s) fuel cells emerged as a means to produce electricity directly from fuels (and air) via a non-combustion electrochemical process. Later, particularly in the 1960s and 1990s, rechargeable fuel cells (i.e. H2/O2 cells, such as unitized regenerative fuel cells in NASA's Helios Prototype) were developed.
Cr–Fe chemistry has disadvantages, including hydrate isomerism (i.e. the equilibrium between electrochemically active Cr3+ chloro-complexes and the inactive hexa-aqua complex) and hydrogen evolution on the negative electrode. Hydrate isomerism can be alleviated by adding chelating amino-ligands, while hydrogen evolution can be mitigated by adding Pb salts to increase the H2 overvoltage and Au salts to catalyze the chromium electrode reaction.
Traditional redox flow battery chemistries include iron-chromium, vanadium, polysulfide–bromide (Regenesys), and uranium. Redox fuel cells are less common commercially although many have been proposed.
Vanadium
Vanadium redox flow batteries are the commercial leaders. They use vanadium at both electrodes, so they do not suffer from cross-contamination. The limited solubility of vanadium salts, however, offsets this advantage in practice. This chemistry's advantages include four oxidation states within the electrochemical voltage window of the graphite–aqueous acid interface, and thus the elimination of the mixing dilution that is detrimental in Cr–Fe RFBs. More important for commercial success is the near-perfect match of the voltage window of the carbon/aqueous acid interface with that of the vanadium redox couples. This extends the life of the low-cost carbon electrodes and reduces the impact of side reactions such as H2 and O2 evolution, resulting in multi-year durability and long cycle life (15,000–20,000 cycles), which in turn results in a record low levelized cost of energy (LCOE, system cost divided by usable energy, cycle life, and round-trip efficiency). These long lifetimes allow for the amortization of the relatively high capital cost (driven by vanadium, carbon felts, bipolar plates, and membranes). The LCOE is on the order of a few tens of cents per kWh, much lower than that of solid-state batteries and approaching the 5-cent targets stated by US and EC government agencies. Major challenges include: low abundance and high cost of V2O5 (> $30/kg); parasitic reactions, including hydrogen and oxygen evolution; and precipitation of V2O5 during cycling.
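The parenthetical LCOE definition above can be written out directly; the numbers in this sketch are hypothetical placeholders, not measured values for any product:

```python
def lcoe_usd_per_kwh(system_cost_usd, usable_kwh_per_cycle, cycle_life, round_trip_eff):
    """Levelized cost of energy: system cost divided by usable energy,
    cycle life, and round-trip efficiency, per the definition above."""
    return system_cost_usd / (usable_kwh_per_cycle * cycle_life * round_trip_eff)

# Hypothetical vanadium RFB: $500 per kWh of capacity, 15,000 cycles, 75% efficiency
print(round(lcoe_usd_per_kwh(500.0, 1.0, 15_000, 0.75), 3))  # ~0.044 $/kWh
```

The long cycle life appears in the denominator, which is why durability dominates the LCOE of vanadium systems.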
Hybrid
The hybrid flow battery (HFB) uses one or more electroactive components deposited as a solid layer. The major disadvantage is that this reduces the decoupling of energy and power. The cell contains one battery electrode and one fuel cell electrode. This type is limited in energy by the electrode surface area.
HFBs include zinc–bromine, zinc–cerium, soluble lead–acid, and all-iron flow batteries. Weng et al. reported a vanadium–metal hydride hybrid flow battery with an experimental OCV of 1.93 V and operating voltage of 1.70 V, relatively high values. It consists of a graphite felt positive electrode operating in a mixed acidic vanadium solution, and a metal hydride negative electrode in aqueous KOH solution. The two electrolytes of different pH are separated by a bipolar membrane. The system demonstrated good reversibility and high coulombic (95%), energy (84%), and voltage (88%) efficiencies. They reported improvements with increased current density, inclusion of larger 100 cm2 electrodes, and series operation. Preliminary data using a fluctuating simulated power input tested the viability toward kWh-scale storage. In 2016, a high energy density Mn(VI)/Mn(VII)-Zn hybrid flow battery was proposed.
Zinc-polyiodide
A prototype zinc–polyiodide flow battery demonstrated an energy density of 167 Wh/L. Older zinc–bromide cells reach 70 Wh/L. For comparison, lithium iron phosphate batteries store 325 Wh/L. The zinc–polyiodide battery is claimed to be safer than other flow batteries given its absence of acidic electrolytes, its nonflammability, and an operating temperature range that does not require extensive cooling circuitry, which would add weight and occupy space. One unresolved issue is zinc buildup on the negative electrode that can permeate the membrane, reducing efficiency. Because of Zn dendrite formation, Zn-halide batteries cannot operate at high current density (> 20 mA/cm2) and thus have limited power density. Adding alcohol to the electrolyte of the ZnI battery can help. The drawbacks of the Zn/I RFB are the high cost of iodide salts (> $20/kg); the limited area capacity of Zn deposition, which reduces the decoupling of energy and power; and Zn dendrite formation.
When the battery is fully discharged, both tanks hold the same electrolyte solution: a mixture of positively charged zinc ions (Zn2+) and negatively charged iodide ions (I−). When charged, one tank holds another negative ion, polyiodide (I3−). The battery produces power by pumping liquid across the stack, where the liquids mix. Inside the stack, zinc ions pass through a selective membrane and change into metallic zinc on the stack's negative side. To increase energy density, bromide ions (Br−) are used as the complexing agent to stabilize the free iodine, forming iodine–bromide ions (I2Br−) as a means to free up iodide ions for charge storage.
Proton flow
Proton flow batteries (PFB) integrate a metal hydride storage electrode into a reversible proton exchange membrane (PEM) fuel cell. During charging, PFB combines hydrogen ions produced from splitting water with electrons and metal particles in one electrode of a fuel cell. The energy is stored in the form of a metal hydride solid. Discharge produces electricity and water when the process is reversed and the protons are combined with ambient oxygen. Metals less expensive than lithium can be used and provide greater energy density than lithium cells.
Organic
Compared to inorganic redox flow batteries, such as vanadium and Zn-Br2 batteries, the advantage of organic redox flow batteries is the tunable redox properties of their active components. As of 2021, organic RFBs exhibited low durability (i.e. calendar or cycle life, or both) and had not been demonstrated at commercial scale.
Organic redox flow batteries can be further classified into aqueous (AORFBs) and non-aqueous (NAORFBs). AORFBs use water as solvent for electrolyte materials while NAORFBs employ organic solvents. AORFBs and NAORFBs can be further divided into total and hybrid systems. The former use only organic electrode materials, while the latter use inorganic materials for either anode or cathode. In larger-scale energy storage, lower solvent cost and higher conductivity give AORFBs greater commercial potential, as well as offering the safety advantages of water-based electrolytes. NAORFBs instead provide a much larger voltage window and occupy less space.
pH neutral AORFBs
pH-neutral AORFBs are operated at pH 7, typically using NaCl as a supporting electrolyte. At pH-neutral conditions, organic and organometallic molecules are more stable than under corrosive acidic and alkaline conditions. For example, K4[Fe(CN)6], a common catholyte used in AORFBs, is not stable in alkaline solutions but is stable at pH-neutral conditions.
One AORFB used methyl viologen (MV) as an anolyte and 4-hydroxy-2,2,6,6-tetramethylpiperidin-1-oxyl (TEMPO) as a catholyte at pH-neutral conditions, plus NaCl and a low-cost anion exchange membrane. This MV/TEMPO system has the highest cell voltage, 1.25 V, and possibly the lowest capital cost ($180/kWh) reported for AORFBs as of 2015. The aqueous liquid electrolytes were designed as a drop-in replacement without replacing existing infrastructure. A 600-milliwatt test battery was stable for 100 cycles with nearly 100 percent efficiency at current densities ranging from 20 to 100 mA/cm2, with optimal performance at 40–50 mA/cm2, at which about 70% of the battery's original voltage was retained. Neutral AORFBs can be more environmentally friendly than acid or alkaline alternatives, while showing electrochemical performance comparable to corrosive RFBs. The MV/TEMPO AORFB has an energy density of 8.4 Wh/L, limited by the TEMPO side. In 2019, an ultralight sulfonate–viologen/ferrocyanide AORFB was reported to be stable for 1,000 cycles at an energy density of 10 Wh/L, the most stable, energy-dense AORFB to that date.
Acidic AORFBs
Quinones and their derivatives are the basis of many organic redox systems. In one study, 1,2-dihydrobenzoquinone-3,5-disulfonic acid (BQDS) and 1,4-dihydrobenzoquinone-2-sulfonic acid (BQS) were employed as catholytes, and a conventional Pb/PbSO4 negative electrode was used in a hybrid acid AORFB. Quinones accept two units of electrical charge, compared with one in conventional catholytes, implying that a given volume can store twice as much energy.
Another quinone, 9,10-anthraquinone-2,7-disulfonic acid (AQDS), was evaluated. AQDS undergoes rapid, reversible two-electron/two-proton reduction on a glassy carbon electrode in sulfuric acid. An aqueous flow battery with inexpensive carbon electrodes, combining the quinone/hydroquinone couple with the Br2/Br− redox couple, yields a peak galvanic power density exceeding 6,000 W/m2 at 13,000 A/m2. Cycling showed > 99% storage capacity retention per cycle. Volumetric energy density was over 20 Wh/L. Using anthraquinone-2-sulfonic acid and anthraquinone-2,6-disulfonic acid on the negative side and 1,2-dihydrobenzoquinone-3,5-disulfonic acid on the positive side avoids the use of hazardous Br2. That battery was claimed to last 1,000 cycles without degradation, but it has a low cell voltage (ca. 0.55 V) and a low energy density (< 4 Wh/L).
Replacing hydrobromic acid with a less toxic alkaline solution (1 M KOH) and ferrocyanide made the system less corrosive, allowing the use of inexpensive polymer tanks. The increased electrical resistance in the membrane was compensated by an increased cell voltage of 1.2 V. Cell efficiency exceeded 99%, while round-trip efficiency measured 84%. The battery offered an expected lifetime of at least 1,000 cycles. Its theoretical energy density was 19 Wh/L. Ferrocyanide's chemical stability in high-pH KOH solution was not verified.
Integrating both anolyte and catholyte in the same molecule (bifunctional electrolytes, or combi-molecules) allows the same material to be used in both tanks. In one tank it is an electron donor, while in the other it is an electron recipient. This has advantages such as diminished crossover effects. Thus, the quinone diaminoanthraquinone, indigo-based molecules, and TEMPO/phenazine pairs are potential electrolytes for such symmetric redox-flow batteries (SRFBs).
Another approach adopted a Blatter radical as the donor/recipient. It endured 275 charge and discharge cycles in tests, although it was not water-soluble.
Alkaline
Quinone and fluorenone molecules can be reengineered to increase water solubility. In 2021, a reversible ketone (de)hydrogenation demonstration cell operated continuously for 120 days over 1,111 charging cycles at room temperature without a catalyst, retaining 97% of its capacity. The cell offered more than double the energy density of vanadium-based systems. The major challenge was the lack of a stable catholyte, holding energy densities below 5 Wh/L. Alkaline AORFBs use excess potassium ferrocyanide catholyte because of the stability issue of ferrocyanide in alkaline solutions.
Metal-organic flow batteries use organic ligands to improve redox properties. The ligands can be chelates such as EDTA, and can enable the electrolyte to be in neutral or alkaline conditions under which metal aquo complexes would otherwise precipitate. By blocking the coordination of water to the metal, organic ligands can inhibit metal-catalyzed water-splitting reactions, resulting in higher voltage aqueous systems. For example, the use of chromium coordinated to 1,3-propanediaminetetraacetate (PDTA), gave cell potentials of 1.62 V vs. ferrocyanide and a record 2.13 V vs. bromine. Metal-organic flow batteries may be known as coordination chemistry flow batteries, such as Lockheed Martin's Gridstar Flow technology.
Oligomer
Oligomer redox-species were proposed to reduce crossover, while allowing low-cost membranes. Such redox-active oligomers are known as redoxymers. One system uses organic polymers and a saline solution with a cellulose membrane. A prototype underwent 10,000 charging cycles while retaining substantial capacity. The energy density was 10 Wh/L. Current density reached 0.1 amperes/cm2.
Another oligomer RFB employed viologen and TEMPO redoxymers in combination with low-cost dialysis membranes. Functionalized macromolecules (similar to acrylic glass or styrofoam) dissolved in water were the active electrode material. The size-selective nanoporous membrane works like a strainer and is produced much more easily and at lower cost than conventional ion-selective membranes. It blocks the large, "spaghetti"-like polymer molecules while allowing small counterions to pass. The concept may address the high cost of traditional Nafion membranes. RFBs with oligomer redox-species have not demonstrated competitive area-specific power. Low operating current density may be an intrinsic feature of large redox-molecules.
Other types
Other flow-type batteries include the zinc–cerium battery, the zinc–bromine battery, and the hydrogen–bromine battery.
Membraneless
A membraneless battery relies on laminar flow in which two liquids are pumped through a channel, where they undergo electrochemical reactions to store or release energy. The solutions pass in parallel, with little mixing. The flow naturally separates the liquids, without requiring a membrane.
Membranes are often the most costly and least reliable battery components, as they are subject to corrosion by repeated exposure to certain reactants. The absence of a membrane enables the use of a liquid bromine solution and hydrogen: this combination is problematic when membranes are used, because they form hydrobromic acid that can destroy the membrane. Both materials are available at low cost. The design uses a small channel between two electrodes. Liquid bromine flows through the channel over a graphite cathode and hydrobromic acid flows under a porous anode. At the same time, hydrogen gas flows across the anode. The chemical reaction can be reversed to recharge the battery, a first for a membraneless design. One such membraneless flow battery announced in August 2013 produced a maximum power density of 795 mW/cm2, three times more than other membraneless systems and an order of magnitude higher than lithium-ion batteries.
In 2018, a macroscale membraneless RFB capable of recharging and recirculation of the electrolyte streams was demonstrated. The battery was based on immiscible organic catholyte and aqueous anolyte liquids, which exhibited high capacity retention and Coulombic efficiency during cycling.
Suspension-based
A lithium–sulfur system arranged in a network of nanoparticles eliminates the requirement that charge moves in and out of particles that are in direct contact with a conducting plate. Instead, the nanoparticle network allows electricity to flow throughout the liquid. This allows more energy to be extracted.
In a semi-solid flow battery, positive and negative electrode particles are suspended in a carrier liquid. The suspensions flow through a stack of reaction chambers, separated by a barrier such as a thin, porous membrane. The approach combines the basic structure of aqueous-flow batteries, which use electrode material suspended in a liquid electrolyte, with the chemistry of lithium-ion batteries in both carbon-free suspensions and slurries with a conductive carbon network. The carbon-free semi-solid RFB is also referred to as a solid dispersion redox flow battery. Dissolving a material changes its chemical behavior significantly. However, suspending bits of solid material preserves the solid's characteristics. The result is a viscous suspension.
In 2022, Influit Energy announced a flow battery electrolyte consisting of a metal oxide suspended in an aqueous solution.
In flow batteries with redox-targeted solids (ROTS), also known as solid energy boosters (SEBs), the posolyte, the negolyte, or both (a.k.a. redox fluids) come in contact with one or more solid electroactive materials (SEMs). The fluids comprise one or more redox couples, with redox potentials flanking the redox potential of the SEM. Such SEB/RFBs combine the high specific energy advantage of conventional batteries (such as lithium-ion) with the decoupled energy-power advantage of flow batteries. SEB(ROTS) RFBs have advantages compared to semi-solid RFBs, such as no need to pump viscous slurries, no precipitation/clogging, higher area-specific power, longer durability, and a wider chemical design space. However, because of double energy losses (one in the stack and another in the tank between the SEB(ROTS) and a mediator), such batteries suffer from poor energy efficiency. On a system level, the practical specific energy of traditional lithium-ion batteries is larger than that of SEB(ROTS)-flow versions of lithium-ion batteries.
Comparison
Applications
Technical merits make redox flow batteries well-suited for large-scale energy storage. Flow batteries are normally considered for relatively large (1 kWh – 10 MWh) stationary applications with multi-hour charge-discharge cycles. Flow batteries are not cost-efficient for shorter charge/discharge times. Market niches include:
Grid storage - short and/or long-term energy storage for use by the grid
Load balancing – the battery is attached to the grid to store power during off-peak hours and release it during peak demand periods. The common problem limiting this use of most flow battery chemistries is their low areal power (operating current density) which translates into high cost.
Shifting energy from intermittent sources such as wind or solar for use during periods of peak demand.
Peak shaving, where demand spikes are met by the battery.
UPS, where the battery provides an uninterrupted supply if the main power fails.
Power conversion – Because all cells share the same electrolyte(s), the electrolytes may be charged using a given number of cells and discharged with a different number. As battery voltage is proportional to the number of cells used, the battery can act as a powerful DC–DC converter. In addition, if the number of cells is continuously changed (on the input and/or output side) power conversion can also be AC/DC, AC/AC, or DC–AC with the frequency limited by that of the switching gear.
Electric vehicles – Because flow batteries can be rapidly "recharged" by replacing the electrolyte, they can be used for applications where the vehicle needs to take on energy as fast as a gas vehicle. A common problem with most RFB chemistries in EV applications is their low energy density, which translates into a short driving range. Zinc-chlorine batteries and batteries with highly soluble halates are notable exceptions.
Stand-alone power system – An example of this is in cellphone base stations where no grid power is available. The battery can be used alongside solar or wind power sources to compensate for their fluctuating power levels and alongside a generator to save fuel.
See also
Glossary of fuel cell terms
Hydrogen technologies
List of battery types
Redox electrode
Microtubular membrane
References
External links
Electropaedia on Flow Batteries
Research on the uranium redox flow battery
South Australian Flow Battery Project
Electrochemistry
Fuel cells
Battery types | Flow battery | [
"Chemistry"
] | 5,963 | [
"Electrochemistry"
] |
3,133,430 | https://en.wikipedia.org/wiki/Durable%20water%20repellent | Durable water repellent, or DWR, is a coating added to fabrics at the factory to make them water-resistant (hydrophobic). Most factory-applied treatments are fluoropolymer based; these applications are quite thin and not always effective. Durable water repellents are commonly used in conjunction with waterproof breathable fabrics such as Gore-Tex to prevent the outer layer of fabric from becoming saturated with water. This saturation, called 'wetting out,' can reduce the garment's breathability (moisture transport through the breathable membrane) and let water through. As the DWR wears off over time, re-treatment is recommended when necessary. Many spray-on and wash-in products for treatment of non-waterproof garments and re-treatment of proofed garments losing their water-repellency are available.
Methods for factory application of DWR treatments involve applying a solution of a chemical onto the surface of the fabric by spraying or dipping, or chemical vapor deposition (CVD). The advantages of CVD include reducing the use of environmentally harmful solvents; requiring less DWR; and an extremely thin waterproof layer that has less effect on the natural look and feel of the fabric.
Some researchers have suggested that the use of PFAS in water-repellent clothing is over-engineering, and comparable performance can be achieved using specific silicon- and hydrocarbon-based finishes.
Re-treating garments
Certain types of fabrics need to be re-treated to maintain water-repellency, as fluoropolymers decompose over time when exposed to water and chemicals. Washing the garment with harsh detergents usually accelerates DWR loss; in addition, soaps often leave a residue which attracts water and dirt. On the other hand, rain water or salt water affects DWRs less significantly. Affected garments can be treated with a 'spray-on' or 'wash-in' treatment to improve water-repellency. In some cases heat treatment can reactivate the factory applied repellent finish and aid in the repelling of water, and other liquids such as oils. On the other hand, some DWR products do not require heat treatment to be activated, and sometimes DWR treatments can be revitalized simply by washing the fabric with a suitable cleaner.
Cravenette
Cravenette was an early process for making cloth water-repellent, a performance finish that repelled water. Various U.S.-based suppliers, such as A. Murphy, W.G. Hitchcock, and H. Herrmann, were offering Cravenette-treated cloths in the early 20th century.
See also
Finishing (textiles)
Nikwax Analogy
P2i
Perfluorobutanesulfonic acid
Perfluorooctanoic acid
Scotchgard
References
Technical fabrics
Textile chemistry
Textile treatments | Durable water repellent | [
"Chemistry"
] | 582 | [
"nan"
] |
3,133,571 | https://en.wikipedia.org/wiki/Bromoacetone | Bromoacetone is an organic compound with the formula CH3COCH2Br. It is a colorless liquid although impure samples appear yellow or even brown. It is a lachrymatory agent and a precursor to other organic compounds.
Occurrence in nature
Bromoacetone is present (less than 1%) in the essential oil of a seaweed (Asparagopsis taxiformis) from the vicinity of the Hawaiian Islands.
Synthesis
Bromoacetone is available commercially, sometimes stabilized with magnesium oxide. It was first described in the 19th century, attributed to N. Sokolowsky.
Bromoacetone is prepared by combining bromine and acetone, with catalytic acid. As with all ketones, acetone enolizes in the presence of acids or bases. The alpha carbon then undergoes electrophilic substitution with bromine. The main difficulty with this method is over-bromination, resulting in di- and tribrominated products. If a base is present, bromoform is obtained instead, by the haloform reaction.
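The acid-catalyzed monobromination described above can be summarized by the overall equation:

$$\mathrm{CH_3COCH_3 + Br_2 \ \xrightarrow{H^+}\ CH_3COCH_2Br + HBr}$$

Over-bromination corresponds to further substitution of the remaining α-hydrogens by the same enolization mechanism.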
Applications
It was used in World War I as a chemical weapon, called BA by the British and B-Stoff (Weisskreuz) by the Germans. Due to its toxicity, it is no longer used as a riot control agent. Bromoacetone is a versatile reagent in organic synthesis. It is, for example, the precursor to hydroxyacetone by reaction with aqueous sodium hydroxide.
See also
Use of poison gas in World War I
Chloroacetone
Fluoroacetone
Iodoacetone
Thioacetone
References
Organobromides
Lachrymatory agents
World War I chemical weapons
Ketones | Bromoacetone | [
"Chemistry"
] | 339 | [
"Chemical weapons",
"Ketones",
"Functional groups",
"Lachrymatory agents",
"World War I chemical weapons"
] |
3,133,744 | https://en.wikipedia.org/wiki/Parallel%20optical%20interface | A parallel optical interface is a form of fiber-optic technology aimed primarily at communications and networking over relatively short distances (less than 300 meters), and at high bandwidths.
Parallel optic interfaces differ from traditional fiber-optic communication in that data is simultaneously transmitted and received over multiple fibers. Different methods exist for splitting the data over this high-bandwidth link. In the simplest form, the parallel optic link is a replacement for many serial-data communication links. In the more typical application, one byte of information is split up into bits and each bit is coded and sent across the individual fibers. There are many ways to perform this multiplexing, provided the fundamental coding at the fiber level meets the channel requirements.
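As an illustrative sketch of the bit-level striping described above (not a real transceiver API; the lane count and framing are assumptions), the following Python snippet distributes a byte stream bit-by-bit across the fibers of a parallel link:

```python
def stripe_bits(data: bytes, lanes: int = 12):
    """Distribute the bit stream across `lanes` fibers, round-robin."""
    out = [[] for _ in range(lanes)]
    bit_index = 0
    for byte in data:
        for k in range(8):  # most-significant bit first
            out[bit_index % lanes].append((byte >> (7 - k)) & 1)
            bit_index += 1
    return out

# One byte across an 8-fiber link: each fiber carries exactly one bit
print(stripe_bits(b"\xa5", lanes=8))  # [[1], [0], [1], [0], [0], [1], [0], [1]]
```

A real link would add lane-level line coding and deskew logic, which this sketch omits.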
The main applications for parallel optical interfaces are in telecommunications and supercomputers, and the technology is also being introduced in consumer applications. It displaces the copper backplanes commonly used in large switching equipment designs.
There are two forms of commercially available products for parallel optic interfaces. The first is a twelve-channel system consisting of an optical transmitter and an optical receiver. The second is a four-channel transceiver capable of transmitting four channels and receiving four channels in one product.
Parallel optics is often the most cost-effective solution for transmitting data at 40 gigabits per second over distances exceeding 100 meters. 100GE optical transceivers carry 100 gigabits per second; with 100GE, data is delivered using both duplex and parallel mechanisms.
See also
Fiber-optic cable
Interconnect bottleneck
Optical communication
Thunderbolt (interface)
References
Fiber-optic communications | Parallel optical interface | [
"Technology"
] | 326 | [
"Computing stubs",
"Computer network stubs"
] |
3,133,750 | https://en.wikipedia.org/wiki/Open%20Vulnerability%20and%20Assessment%20Language | Open Vulnerability and Assessment Language (OVAL) is an international, information security, community standard to promote open and publicly available security content, and to standardize the transfer of this information across the entire spectrum of security tools and services. OVAL includes a language used to encode system details, and an assortment of content repositories held throughout the community. The language standardizes the three main steps of the assessment process:
representing configuration information of systems for testing;
analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and
reporting the results of this assessment.
The repositories are collections of publicly available and open content that utilize the language.
The OVAL community has developed three schemas written in Extensible Markup Language (XML) to serve as the framework and vocabulary of the OVAL Language. These schemas correspond to the three steps of the assessment process: an OVAL System Characteristics schema for representing system information, an OVAL Definition schema for expressing a specific machine state, and an OVAL Results schema for reporting the results of an assessment.
Content written in the OVAL Language is located in one of the many repositories found within the community. One such repository, known as the OVAL Repository, is hosted by The MITRE Corporation. It is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Each definition in the OVAL Repository determines whether a specified software vulnerability, configuration issue, program, or patch is present on a system.
The information security community contributes to the development of OVAL by participating in the creation of the OVAL Language on the OVAL Developers Forum and by writing definitions for the OVAL Repository through the OVAL Community Forum. An OVAL Board consisting of representatives from a broad spectrum of industry, academia, and government organizations from around the world oversees and approves the OVAL Language and monitors the posting of the definitions hosted on the OVAL Web site. This means that the OVAL, which is funded by US-CERT at the U.S. Department of Homeland Security for the benefit of the community, reflects the insights and combined expertise of the broadest possible collection of security and system administration professionals worldwide.
OVAL is used by the Security Content Automation Protocol (SCAP).
OVAL Language
The OVAL Language standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of this assessment.
OVAL Interpreter
The OVAL Interpreter is a freely available reference implementation created to show how data can be collected from a computer for testing based on a set of OVAL Definitions and then evaluated to determine the results of each definition.
The OVAL Interpreter demonstrates the usability of OVAL Definitions, and can be used by definition writers to ensure correct syntax and adherence to the OVAL Language during the development of draft definitions. It is not a fully functional scanning tool and has a simplistic user interface, but running the OVAL Interpreter will provide you with a list of result values for each evaluated definition.
OVAL Repository
The OVAL Repository is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Other repositories in the community also host OVAL content, which can include OVAL System Characteristics files and OVAL Results files as well as definitions. The OVAL Repository contains all community-developed OVAL Vulnerability, Compliance, Inventory, and Patch Definitions for supported operating systems. Definitions are free to use and implement in information security products and services.
The OVAL Repository Top Contributor Award Program grants awards on a quarterly basis to the top contributors to the OVAL Repository. The Repository is a community effort, and contributions of new content and modifications are instrumental in its success. The awards serve as public recognition of an organization’s support of the OVAL Repository and as an incentive to others to contribute.
Organizations receiving the award will also receive an OVAL Repository Top Contributor logo indicating the quarter of the award (e.g., 1st Quarter 2007) that may be used as they see fit. Awards are granted to organizations that have made a significant contribution of new or modified content each quarter.
OVAL Board
The OVAL Board is an advisory body, which provides valuable input on OVAL to the Moderator (currently MITRE). While it is important to have organizational support for OVAL, it is the individuals who sit on the OVAL Board and their input and activity that truly make a difference. The Board’s primary responsibilities are to work with the Moderator and the Community to define OVAL, to provide input into OVAL’s strategic direction, and to advocate OVAL in the Community.
See also
MITRE The MITRE Corporation
Common Vulnerabilities and Exposures (index of standardized names for vulnerabilities and other security issues)
XCCDF - eXtensible Configuration Checklist Description Format
Security Content Automation Protocol uses OVAL
External links
OVAL web site
Gideon Technologies (OVAL Board Member) Corporate Web Site
www.itsecdb.com Portal for OVAL definitions from several sources
oval.secpod.com SecPod OVAL Definitions Professional Feed
Computer security procedures
Mitre Corporation | Open Vulnerability and Assessment Language | [
"Engineering"
] | 1,034 | [
"Cybersecurity engineering",
"Computer security procedures"
] |
3,133,934 | https://en.wikipedia.org/wiki/A%20band%20%28NATO%29 | The NATO A band is the obsolete designation given to the radio frequencies from 0 to 250 MHz (equivalent to wavelengths from 1.2 m upwards) during the cold war period. Since 1992, frequency allocations, allotment and assignments are in line with the NATO Joint Civil/Military Frequency Agreement.
However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or in military operations, this system is still in use.
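The wavelength figures quoted for these band designations follow from λ = c/f; a one-line check (illustrative only):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength corresponding to a given frequency."""
    return C / freq_hz

print(wavelength_m(250e6))  # ~1.2 m, the short-wavelength edge of the A band
```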
NATO Radio spectrum designation
Examples of military frequency utilisation in this particular band
HF long distance radio communications
tactical UHF radio communications
aeronautical mobile service
References
Radio spectrum | A band (NATO) | [
"Physics"
] | 131 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
3,133,950 | https://en.wikipedia.org/wiki/Thiophenol | Thiophenol is an organosulfur compound with the formula C6H5SH, sometimes abbreviated as PhSH. This foul-smelling colorless liquid is the simplest aromatic thiol. The chemical structures of thiophenol and its derivatives are analogous to phenols, where the oxygen atom in the hydroxyl group (-OH) bonded to the aromatic ring in phenol is replaced by a sulfur atom. The prefix thio- implies a sulfur-containing compound and when used before a root word name for a compound which would normally contain an oxygen atom, in the case of 'thiol' that the alcohol oxygen atom is replaced by a sulfur atom.
Thiophenols also describes a class of compounds formally derived from thiophenol itself. All have a sulfhydryl group (-SH) covalently bonded to an aromatic ring. The organosulfur ligand in the medicine thiomersal is a thiophenol.
Synthesis
There are several methods of synthesis for thiophenol and related compounds, although thiophenol itself is usually purchased for laboratory operations. Two methods are the reduction of benzenesulfonyl chloride with zinc, and the action of elemental sulfur on phenylmagnesium halide or phenyllithium, followed by acidification.
Via the Newman–Kwart rearrangement, phenols (1) can be converted to the thiophenols (5) by conversion to the O-aryl dialkylthiocarbamates (3), followed by heating to give the isomeric S-aryl derivative (4).
In the Leuckart thiophenol reaction, the starting material is an aniline through the diazonium salt (ArN2X) and the xanthate (ArS(C=S)OR). Alternatively, sodium sulfide and triazenes can react in organic solutions and yield thiophenols.
Thiophenol can be manufactured from chlorobenzene and hydrogen sulfide over alumina at elevated temperature. The disulfide is the primary byproduct. The reaction medium is corrosive and requires a ceramic or similar reactor lining. Aryl iodides and sulfur may also produce thiophenols under certain conditions.
Applications
Thiophenols are used in the production of pharmaceuticals, including sulfonamides. The antifungal agent butoconazole and the preservative merthiolate are derivatives of thiophenols.
Properties and reactions
Acidity
Thiophenol has appreciably greater acidity than does phenol, as is shown by their pKa values (6.62 for thiophenol and 9.95 for phenol). A similar pattern is seen for H2S versus H2O, and all thiols versus the corresponding alcohols. Treatment of PhSH with strong base such as sodium hydroxide (NaOH) or sodium metal affords the salt sodium thiophenolate (PhSNa).
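The practical consequence of this pKa difference can be sketched with the Henderson–Hasselbalch relation (a standard approximation, shown here purely as an illustrative calculation):

```python
def deprotonated_fraction(ph: float, pka: float) -> float:
    """Fraction of an acid present as its conjugate base at a given pH."""
    ratio = 10 ** (ph - pka)  # [A-]/[HA] from Henderson-Hasselbalch
    return ratio / (1 + ratio)

print(deprotonated_fraction(7.0, 6.62))  # thiophenol: ~71% ionized at pH 7
print(deprotonated_fraction(7.0, 9.95))  # phenol: ~0.1% ionized at pH 7
```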
Alkylation
The thiophenolate is highly nucleophilic, which translates to a high rate of alkylation. Thus, treatment of C6H5SH with methyl iodide in the presence of a base gives methyl phenyl sulfide, C6H5SCH3, a thioether often referred to as thioanisole. Such reactions are essentially irreversible. C6H5SH also adds to α,β-unsaturated carbonyls via Michael addition.
Oxidation
Thiophenols, especially in the presence of base are easily oxidized to diphenyl disulfide:
4 C6H5SH + O2 → 2 C6H5S-SC6H5 + 2 H2O
The disulfide can be reduced back to the thiol using sodium borohydride followed by acidification. This redox reaction is also exploited in the use of C6H5SH as a source of H atoms.
Chlorination
Phenylsulfenyl chloride, a blood-red liquid (b.p. 41–42 °C, 1.5 mm Hg), can be prepared by the reaction of thiophenol with chlorine (Cl2).
Coordination to metals
Metal cations form thiophenolates, some of which are polymeric. One example is "C6H5SCu," obtained by treating copper(I) chloride with thiophenol.
Safety
The US National Institute for Occupational Safety and Health has established a recommended exposure limit at a ceiling of 0.1 ppm (0.5 mg/m3), for exposures of no more than 15 minutes.
References
External links
Thiophenol, Toxicology Data Network
Thiols
Phenyl compounds
Foul-smelling chemicals | Thiophenol | [
"Chemistry"
] | 1,014 | [
"Organic compounds",
"Thiols"
] |
3,134,024 | https://en.wikipedia.org/wiki/B%20band%20%28NATO%29 | The NATO B band is the obsolete designation given to the radio frequencies from 250 to 500 MHz (equivalent to wavelengths between 1.20 and 0.60 m) during the cold war period. Since 1992, frequency allocations, allotments and assignments have been in line with the NATO Joint Civil/Military Frequency Agreement (NJFA).
However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or in military operations, this system is still in use.
Particularities
The NATO harmonised UHF band 225-400 MHz is also a subset of this particular band as defined by the NJFA.
References
Radio spectrum | B band (NATO) | [
"Physics"
] | 135 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
3,134,232 | https://en.wikipedia.org/wiki/Avalanche%20breakdown | Avalanche breakdown (or the avalanche effect) is a phenomenon that can occur in both insulating and semiconducting materials. It is a form of electric current multiplication that can allow very large currents within materials which are otherwise good insulators. It is a type of electron avalanche. The avalanche process occurs when carriers in the transition region are accelerated by the electric field to energies sufficient to create mobile or free electron-hole pairs via collisions with bound electrons.
Explanation
Materials conduct electricity if they contain mobile charge carriers. There are two types of charge carriers in a semiconductor: free electrons (mobile electrons) and electron holes (mobile holes which are missing electrons from the normally-occupied electron states). A normally-bound electron (e.g., in a bond) in a reverse-biased diode may break loose due to a thermal fluctuation or excitation, creating a mobile electron-hole pair (exciton). If there is a voltage gradient (electric field) in the semiconductor, then the electron will move towards the positive voltage while the hole will move towards the negative voltage. Usually, the electron and hole will simply move to opposite ends of the crystal and enter the appropriate electrodes. When the electric field is strong enough, the mobile electron or hole may be accelerated to speeds high enough to knock other bound electrons free, creating more free charge carriers, increasing the current and leading to further "knocking out" processes and creating an avalanche. In this way, large portions of a normally-insulating crystal can begin to conduct.
The large voltage drop and possibly large current during breakdown necessarily leads to the generation of heat. Therefore, a diode placed into a reverse blocking power application will usually be destroyed by breakdown if the external circuit allows a large current. In principle, avalanche breakdown only involves the passage of electrons and need not cause damage to the crystal. Avalanche diodes (commonly encountered as high voltage Zener diodes) are constructed to break down at a uniform voltage and to avoid current crowding during breakdown. These diodes can indefinitely sustain a moderate level of current during breakdown.
The voltage at which the breakdown occurs is called the breakdown voltage. There is a hysteresis effect; once avalanche breakdown has occurred, the material will continue to conduct even if the voltage across it drops below the breakdown voltage. This is different from a Zener diode, which will stop conducting once the reverse voltage drops below the breakdown voltage.
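The current multiplication below breakdown is often summarized by Miller's empirical relation, quoted here as a common approximation rather than a result from the sources above; M is the multiplication factor, VBR the breakdown voltage, and the exponent n (roughly 2–6) depends on the material and junction:

$$M = \frac{1}{1 - \left(V/V_{BR}\right)^{n}}$$

M diverges as V approaches VBR, corresponding to the onset of avalanche breakdown.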
See also
QBD (electronics)
Single-photon avalanche diode
Spark gap
Zener breakdown
References
Microelectronic Circuit Design — Richard C Jaeger —
The Art of Electronics — Horowitz & Hill —
University of Colorado guide to Advance MOSFET design
Power MOSFET avalanche characteristics and ratings - ST Application Note AN2344
Power MOSFET Avalanche Design Guidelines - Vishay Application Note AN-1005
Electrical breakdown | Avalanche breakdown | [
"Physics"
] | 580 | [
"Physical phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
3,134,466 | https://en.wikipedia.org/wiki/Polarizable%20vacuum | In theoretical physics, particularly fringe physics, polarizable vacuum (PV) and its associated theory refer to proposals by Harold Puthoff, Robert H. Dicke, and others to develop an analog of general relativity to describe gravity and its relationship to electromagnetism.
Description
In essence, Dicke and Puthoff proposed that the presence of mass alters the electric permittivity and the magnetic permeability of flat spacetime, ε0 and μ0 respectively, by multiplying them by a scalar function, K:

$$\varepsilon = K\varepsilon_0, \qquad \mu = K\mu_0,$$
arguing that this will affect the lengths of rulers made of ordinary matter so that in the presence of a gravitational field, the spacetime metric of Minkowski spacetime is replaced by

$$ds^2 = -\frac{c^2}{K}\,dt^2 + K\left(dx^2 + dy^2 + dz^2\right),$$
where K is the so-called "dielectric constant of the vacuum". This is a "diagonal" metric given in terms of a Cartesian chart and having the same stratified conformally flat form as in the Watt-Misner theory of gravitation. However, according to Dicke and Puthoff, K must satisfy a field equation that differs from the field equation of the Watt-Misner theory. In the case of a static spherically symmetric vacuum, this yields the asymptotically flat solution

$$K = \exp\!\left(\frac{2GM}{rc^2}\right).$$
The resulting Lorentzian spacetime agrees with the analogous solution in the Watt-Misner theory. It has the same weak-field limit and far-field as the Schwarzschild vacuum solution in general relativity. It satisfies three of the four classical tests of relativistic gravitation (redshift, deflection of light, precession of the perihelion of Mercury) to within the limit of observational accuracy. However, as shown by Ibison (2003), it yields a different prediction for the inspiral of test particles due to gravitational radiation.
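Expanding the exponential solution above to first order (a standard weak-field check, included here as an illustrative sketch) shows why the far field matches the Schwarzschild vacuum:

$$K = e^{2GM/rc^2} \approx 1 + \frac{2GM}{rc^2}, \qquad g_{tt} = -\frac{c^2}{K} \approx -\left(1 - \frac{2GM}{rc^2}\right)c^2,$$

which reproduces the Newtonian potential term of the weak-field metric.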
However, requiring stratified-conformally flat metrics rules out the possibility of recovering the weak-field Kerr metric and is certainly inconsistent with the claim that PV can give a general "approximation" of the general theory of relativity. In particular, this theory exhibits no frame-dragging effects. Also, the impact of gravitational radiation on test particles differs profoundly between scalar theories and tensor theories of gravitation, such as general relativity. LIGO is not intended primarily as a test ruling out scalar theories. However, it is widely expected to do so as a side benefit once it detects unambiguous gravitational wave signals exhibiting the characteristics expected in general relativity.
Ibison has considered a "cosmological solution" of PV, analogous to the Friedmann dust solution with flat orthogonal hyperslices in general relativity, and argues that this model is inconsistent with various observational and theoretical constraints. He also finds a rate of inspiral disagreeing with observation. The latter result disagrees with that of Watt and Misner, whose Lorentzian manifold differs from PV in the case of cosmology.
Contrary to Puthoff's claims, it is widely accepted that no scalar theory of gravitation can reproduce all of general relativity's successes. It might be noted that De Felice uses constitutive relations to obtain a susceptibility tensor which lives in spatial hyperslices; this provides extra degrees of freedom, which help make up for the degree of freedom lacking in PV and other scalar theories.
Criticism
Puthoff himself has offered various characterizations of his proposal, which has been variously characterized as
an attempt to reformulate general relativity in terms of a purely formal analogy with the propagation of light through an optical medium,
an attempt to replace general relativity with a scalar theory of gravitation featuring formal analogies with Maxwell's theory of electromagnetism,
an attempt to unify gravitation and electromagnetism in a theory of electrogravity,
an attempt to provide a physical mechanism for how spacetime gets curved in general relativity, which suggests (to Puthoff) the possibility of "metric engineering" for such purposes as spacecraft propulsion (see Breakthrough Propulsion Physics Program).
PV has origins in more mainstream work by such physicists as Robert Dicke. Still, in current parlance, the term does appear to be most closely associated with the speculations of Puthoff. The claims have not been accepted in mainstream physics.
Mainstream physicists agree that PV is
not viable as a unification of gravitation and electromagnetism
not a "reformulation" of general relativity,
not a viable theory of gravitation since it violates observational and theoretical requirements.
Related work
Antecedents of PV and more recent related proposals include the following:
A proposal in 1921 by H. A. Wilson to reduce gravitation to electromagnetism by pursuing the formal analogy between "light bending" in metric theories of gravitation and propagation of light through an optical medium having a spatially varying refractive index. Wilson's approach to a unified field theory is not considered viable today.
An attempt (roughly 1960–1970) by Robert Dicke and Fernando de Felice to resurrect and improve Wilson's idea of an optical analog of gravitational effects. If interpreted conservatively as an attempt to provide an alternative approach to GTR rather than as a work toward a theory unifying electromagnetism and gravitation, this approach is not unreasonable, although most likely of rather limited utility.
The 1967 proposal of Andrei Sakharov that gravitation might arise from underlying quantum field theory effects in a manner somewhat analogous to the way that the (simple) classical theory of elasticity arises from (complicated) particle physics. This work is generally regarded as mainstream and not entirely implausible but highly speculative, and most physicists seem to feel that little progress has been made.
In a series of papers, Bernard Haisch and Alfonso Rueda have proposed that the inertia of massive objects arises as an "electromagnetic reaction force" due to interaction with the so-called zero point field. According to mainstream physics, their claims rely on incorrect quantum field theory computations.
Recent work, motivated in large part by the discoveries of the Unruh effect, Hawking radiation, and black hole thermodynamics, to work out a complete theory of physical analogs such as optical black holes. This is not work toward a unified field theory, but in another sense, can be regarded as work towards an even more ambitious unification, in which some of the most famous effects usually ascribed to general relativity (but familiar to many metric theories of gravitation) would be seen as essentially thermodynamical effects, not specifically gravitational effects. This work has excited great interest because it might enable experimental verification of the basic concept of Hawking radiation, which is widely regarded as one of the most revolutionary proposals in twentieth-century physics but which, in its gravitational incarnation, seems impossible to verify in experiments in earthly laboratories.
The 1999 proposal by Keith Watt and Charles W. Misner of a scalar theory of gravitation which postulates a stratified conformally flat metric of the form

$$ds^2 = -e^{-2\phi}\,c^2 dt^2 + e^{2\phi}\left(dx^2 + dy^2 + dz^2\right),$$

given with respect to a Cartesian chart, where φ satisfies a certain partial differential equation which reduces in a vacuum region to the flat spacetime wave equation $\Box\phi = 0$. This is a "toy theory", not a fully fledged theory of gravitation, since as Watt and Misner pointed out, while this theory does have the correct Newtonian limit, it disagrees with the result of certain observations.
Matthew R. Edwards suggests that the gravito-optical medium is composed of gravitons and may, in turn, connect with the polarizable vacuum approach.
See also
Induced gravity (for Sakharov's proposal)
Maxwell's equations in curved spacetime
Electromagnetic stress-energy tensor
Vacuum polarization
References
Theories of gravity
Fringe physics | Polarizable vacuum | [
"Physics"
] | 1,583 | [
"Theoretical physics",
"Theories of gravity"
] |
3,134,585 | https://en.wikipedia.org/wiki/Charge%20density | In electromagnetism, charge density is the amount of electric charge per unit length, surface area, or volume. Volume charge density (symbolized by the Greek letter ρ) is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C⋅m^−3), at any point in a volume. Surface charge density (σ) is the quantity of charge per unit area, measured in coulombs per square meter (C⋅m^−2), at any point on a surface charge distribution on a two dimensional surface. Linear charge density (λ) is the quantity of charge per unit length, measured in coulombs per meter (C⋅m^−1), at any point on a line charge distribution. Charge density can be either positive or negative, since electric charge can be either positive or negative.
Like mass density, charge density can vary with position. In classical electromagnetic theory charge density is idealized as a continuous scalar function of position r, like a fluid, and λ(r), σ(r), and ρ(r) are usually regarded as continuous charge distributions, even though all real charge distributions are made up of discrete charged particles. Due to the conservation of electric charge, the charge density in any volume can only change if an electric current of charge flows into or out of the volume. This is expressed by a continuity equation which links the rate of change of charge density ρ(r) and the current density J(r).
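In differential form, the continuity equation referred to above reads

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0,$$

so any decrease of the charge density inside a volume is accounted for by a net outward current through its boundary surface.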
Since all charge is carried by subatomic particles, which can be idealized as points, the concept of a continuous charge distribution is an approximation, which becomes inaccurate at small length scales. A charge distribution is ultimately composed of individual charged particles separated by regions containing no charge. For example, the charge in an electrically charged metal object is made up of conduction electrons moving randomly in the metal's crystal lattice. Static electricity is caused by surface charges consisting of electrons and ions near the surface of objects, and the space charge in a vacuum tube is composed of a cloud of free electrons moving randomly in space. The charge carrier density in a conductor is equal to the number of mobile charge carriers (electrons, ions, etc.) per unit volume. The charge density at any point is equal to the charge carrier density multiplied by the elementary charge on the particles. However, because the elementary charge on an electron is so small (1.6×10^−19 C) and there are so many of them in a macroscopic volume (there are about 10^22 conduction electrons in a cubic centimeter of copper) the continuous approximation is very accurate when applied to macroscopic volumes, and even microscopic volumes above the nanometer level.
At even smaller scales, of atoms and molecules, due to the uncertainty principle of quantum mechanics, a charged particle does not have a precise position but is represented by a probability distribution, so the charge of an individual particle is not concentrated at a point but is 'smeared out' in space and acts like a true continuous charge distribution. This is the meaning of 'charge distribution' and 'charge density' used in chemistry and chemical bonding. An electron is represented by a wavefunction ψ(r) whose square is proportional to the probability of finding the electron at any point in space, so |ψ(r)|² is proportional to the charge density of the electron at any point. In atoms and molecules the charge of the electrons is distributed in clouds called orbitals which surround the atom or molecule, and are responsible for chemical bonds.
Definitions
Continuous charges
Following are the definitions for continuous charge distributions.
The linear charge density is the ratio of an infinitesimal electric charge dQ (SI unit: C) to an infinitesimal line element dl: λq = dQ/dl,
similarly the surface charge density uses a surface area element dS: σq = dQ/dS,
and the volume charge density uses a volume element dV: ρq = dQ/dV.
Integrating the definitions gives the total charge Q of a region according to the line integral of the linear charge density λq(r) over a line or 1d curve C: Q = ∫_C λq(r) dl,
similarly a surface integral of the surface charge density σq(r) over a surface S: Q = ∫_S σq(r) dS,
and a volume integral of the volume charge density ρq(r) over a volume V: Q = ∫_V ρq(r) dV,
where the subscript q is to clarify that the density is for electric charge, not other densities like mass density, number density, probability density, and prevent conflict with the many other uses of λ, σ, ρ in electromagnetism for wavelength, electrical resistivity and conductivity.
Within the context of electromagnetism, the subscripts are usually dropped for simplicity: λ, σ, ρ. Other notations may include: ρℓ, ρs, ρv, ρL, ρS, ρV etc.
The total charge divided by the length, surface area, or volume will be the average charge densities: ⟨λq⟩ = Q/l, ⟨σq⟩ = Q/S, ⟨ρq⟩ = Q/V.
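As a numerical illustration of these definitions, the Python sketch below integrates a made-up volume charge density over a cube and forms the average density; the density profile, cube size, and grid resolution are all illustrative assumptions:

```python
import numpy as np

# Total charge Q of a cube from a volume charge density sampled on a grid.
L = 1.0                                   # cube side, meters
n = 101                                   # grid points per axis
x = np.linspace(0, L, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

rho = 3.0 * X * Y * Z                     # hypothetical rho_q(r), C/m^3

dV = (L / (n - 1)) ** 3                   # crude volume element
Q = np.sum(rho) * dV                      # crude Riemann sum for the integral
print(Q)                                  # ~0.386; exact value 3*(1/2)**3 = 0.375 C
print(Q / L**3)                           # average volume charge density, C/m^3
```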
Free, bound and total charge
In dielectric materials, the total charge of an object can be separated into "free" and "bound" charges.
Bound charges set up electric dipoles in response to an applied electric field E, and polarize other nearby dipoles, tending to line them up; the net accumulation of charge from the orientation of the dipoles is the bound charge. They are called bound because they cannot be removed: in the dielectric material the charges are the electrons bound to the nuclei.
Free charges are the excess charges which can move into electrostatic equilibrium (i.e. when the charges are not moving and the resultant electric field is independent of time) or which constitute electric currents.
Total charge densities
In terms of volume charge densities, the total charge density is: ρ = ρf + ρb,
as for surface charge densities: σ = σf + σb,
where subscripts "f" and "b" denote "free" and "bound" respectively.
Bound charge
The bound surface charge is the charge piled up at the surface of the dielectric, given by the dipole moment perpendicular to the surface: qb = p·n̂/|s|,
where s is the separation between the point charges constituting the dipole, p is the electric dipole moment, and n̂ is the unit normal vector to the surface.
Taking infinitesimals: dqb = (dp·n̂)/|s|,
and dividing by the differential surface element dS gives the bound surface charge density: σb = dqb/dS = (dp/dV)·n̂ = P·n̂,
where P is the polarization density, i.e. density of electric dipole moments within the material, and dV is the differential volume element.
Using the divergence theorem, the bound volume charge density within the material can be obtained from the total bound charge Qb = −∮_S P·dS = −∫_V ∇·P dV,
hence: ρb = −∇·P.
The negative sign arises due to the opposite signs on the charges in the dipoles, one end is within the volume of the object, the other at the surface.
A more rigorous derivation is given below.
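A minimal numerical sketch of the volume relation ρb = −∇·P, using a made-up one-dimensional polarization profile (all values are illustrative):

```python
import numpy as np

# Bound volume charge density rho_b = -dP/dx for a 1d polarization profile
# that tapers off at the edges of a slab.
x = np.linspace(-1.0, 1.0, 1001)   # meters
P = np.exp(-x**2 / 0.1)            # assumed polarization P_x(x), C/m^2

rho_b = -np.gradient(P, x)         # numerical -dP/dx, C/m^3
# Negative bound charge accumulates where P is increasing (x < 0),
# positive bound charge where P is decreasing (x > 0).
print(rho_b[400], rho_b[600])      # opposite signs on either side of x = 0
```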
Free charge density
The free charge density serves as a useful simplification in Gauss's law for electricity; the volume integral of it is the free charge enclosed in a charged object, equal to the net flux of the electric displacement field D emerging from the object: ∮_S D·dS = ∫_V ρf dV = Qf.
See Maxwell's equations and constitutive relation for more details.
Homogeneous charge density
For the special case of a homogeneous charge density ρ0, independent of position, i.e. constant throughout the region of the material, the equation simplifies to: Q = V·ρ0.
Proof
Start with the definition of a continuous volume charge density: Q = ∫_V ρq(r) dV.
Then, by definition of homogeneity, ρq(r) is a constant denoted by ρq,0 (to distinguish between the constant and non-constant densities), and so by the properties of an integral it can be pulled outside of the integral, resulting in: Q = ρq,0 ∫_V dV = ρq,0 V
so, Q = V·ρq,0.
The equivalent proofs for linear charge density and surface charge density follow the same arguments as above.
Discrete charges
For a single point charge q at position r0 inside a region of 3d space R, like an electron, the volume charge density can be expressed by the Dirac delta function: ρq(r) = q δ(r − r0),
where r is the position at which the charge density is evaluated.
As always, the integral of the charge density over a region of space is the charge contained in that region. The delta function has the sifting property for any function f: ∫_R f(r) δ(r − r0) dV = f(r0), provided r0 lies in R,
so the delta function ensures that when the charge density is integrated over R, the total charge in R is q: Q = ∫_R q δ(r − r0) dV = q.
This can be extended to N discrete point-like charge carriers. The charge density of the system at a point r is a sum of the charge densities for each charge qi at position ri, where i = 1, 2, …, N: ρq(r) = Σi qi δ(r − ri).
The delta function for each charge qi in the sum, δ(r − ri), ensures the integral of charge density over R returns the total charge in R: Q = ∫_R Σi qi δ(r − ri) dV = Σi qi.
If all charge carriers have the same charge q (for electrons q = −e, the electron charge) the charge density can be expressed through the number of charge carriers per unit volume, n(r), by ρq(r) = q n(r).
Similar equations are used for the linear and surface charge densities.
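A small Python sketch of the discrete picture: integrating the sum of delta functions over a region R simply adds up the charges whose positions fall inside R. All positions and charge values below are made-up illustrative inputs:

```python
import numpy as np

# Point charges q_i at positions r_i; R is taken to be the unit cube [0,1]^3.
positions = np.array([[0.1, 0.2, 0.0],
                      [0.4, 0.4, 0.4],
                      [2.0, 0.0, 0.0]])              # meters
charges = np.array([-1.0, -1.0, 2.0]) * 1.602e-19   # coulombs

# The delta functions pick out exactly the charges located inside R.
inside = np.all((positions >= 0.0) & (positions <= 1.0), axis=1)
Q = charges[inside].sum()
print(Q)   # only the first two charges lie inside R
```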
Charge density in special relativity
In special relativity, the length of a segment of wire depends on the velocity of the observer because of length contraction, so charge density will also depend on velocity. Anthony French
has described how the magnetic field force of a current-bearing wire arises from this relative charge density. He used (p 260) a Minkowski diagram to show "how a neutral current-bearing wire appears to carry a net charge density as observed in a moving frame." The charge density measured in the frame in which the charges are at rest is called the proper charge density.
It turns out that the charge density ρ and current density J transform together as components of the four-current vector under Lorentz transformations.
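A short Python sketch of this transformation, assuming a standard boost along the x-axis (the density, current, and boost speed are arbitrary illustrative values):

```python
import numpy as np

# Four-current J^mu = (c*rho, Jx, Jy, Jz) under a Lorentz boost along x.
c = 3.0e8                      # m/s (rounded)
rho = 1.0e-6                   # C/m^3 in the original frame
J = np.array([0.5, 0.0, 0.0])  # A/m^2 in the original frame

beta = 0.6                     # boost speed as a fraction of c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost = np.array([[gamma, -gamma * beta, 0, 0],
                  [-gamma * beta, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

J4 = np.array([c * rho, *J])
J4_prime = boost @ J4
print(J4_prime[0] / c)   # charge density seen in the boosted frame differs from rho
```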
Charge density in quantum mechanics
In quantum mechanics, charge density ρq is related to the wavefunction ψ(r) by the equation ρq(r) = q|ψ(r)|², where q is the charge of the particle and |ψ(r)|² is the probability density function, i.e. the probability per unit volume of the particle being located at r.
When the wavefunction is normalized, the average charge in the region r ∈ R is Q = q ∫_R |ψ(r)|² d³r, where d³r is the integration measure over 3d position space.
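A short Python sketch of this relation, using the normalized ground state of an infinite square well as an assumed example wavefunction (the well width is arbitrary):

```python
import numpy as np

# Charge "contained" in the left half of a 1d infinite square well,
# ground state psi(x) = sqrt(2/a) * sin(pi x / a), which is normalized.
q = -1.602e-19                      # electron charge, C
a = 1.0e-9                          # well width, m (arbitrary)

x = np.linspace(0.0, a, 100001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / a) * np.sin(np.pi * x / a)
rho_q = q * psi**2                  # 1d charge density, C/m

Q_left = np.sum(rho_q[x <= a / 2]) * dx   # Q = q * integral of |psi|^2
print(Q_left / q)                   # ~0.5: half the probability, half the charge
```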
For a system of identical fermions, the number density is given as the sum of the probability densities of each particle in state ψi: n(r) = Σi |ψi(r)|².
Using the symmetrization condition: ρq(r) = q Σi |ψi(r)|² = q n(r), where ρq(r) is considered as the charge density.
The potential energy of a system of charge densities is written as: U = (1/2) ∫∫ ρq(r) ρq(r′)/(4πε0 |r − r′|) d³r d³r′. The electron-electron repulsion energy is thus derived under these conditions to be: J = (q²/2) ∫∫ n(r) n(r′)/(4πε0 |r − r′|) d³r d³r′. Note that this excludes the exchange energy of the system, which is a purely quantum mechanical phenomenon and has to be calculated separately.
Then, the total energy is given by the Hartree–Fock method as: E = I + J − K,
where I is the kinetic and potential energy of the electrons due to the positive charges, J is the electron-electron interaction energy, and K is the exchange energy of the electrons.
Application
The charge density appears in the continuity equation for electric current, and also in Maxwell's Equations. It is the principal source term of the electromagnetic field; when the charge distribution moves, this corresponds to a current density. The charge density of molecules impacts chemical and separation processes. For example, charge density influences metal-metal bonding and hydrogen bonding. For separation processes such as nanofiltration, the charge density of ions influences their rejection by the membrane.
See also
Continuity equation relating charge density and current density
Ionic potential
Charge density wave
References
Further reading
External links
Spatial charge distributions
Density
Electric charge
Electromagnetic quantities | Charge density | [
"Physics",
"Mathematics"
] | 2,241 | [
"Electromagnetic quantities",
"Physical quantities",
"Electric charge",
"Quantity",
"Mass",
"Density",
"Wikipedia categories named after physical quantities",
"Matter"
] |
3,135,003 | https://en.wikipedia.org/wiki/C5H5N5 | {{DISPLAYTITLE:C5H5N5}}
The molecular formula C5H5N5 (molar mass: 135.13 g/mol, exact mass: 135.0545 u) may refer to:
Adenine
2-Aminopurine
Molecular formulas | C5H5N5 | [
"Physics",
"Chemistry"
] | 60 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
3,135,005 | https://en.wikipedia.org/wiki/Dust%20solution | In general relativity, a dust solution is a fluid solution, a type of exact solution of the Einstein field equation, in which the gravitational field is produced entirely by the mass, momentum, and stress density of a perfect fluid that has positive mass density but vanishing pressure. Dust solutions are an important special case of fluid solutions in general relativity.
Dust model
A perfect and pressureless fluid can be interpreted as a model of a configuration of dust particles that locally move in concert and interact with each other only gravitationally, from which the name is derived. For this reason, dust models are often employed in cosmology as models of a toy universe, in which the dust particles are considered as highly idealized models of galaxies, clusters, or superclusters. In astrophysics, dust models have been employed as models of gravitational collapse.
Dust solutions can also be used to model finite rotating disks of dust grains; some examples are listed below. If superimposed somehow on a stellar model comprising a ball of fluid surrounded by vacuum, a dust solution could be used to model an accretion disk around a massive object; however, no such exact solutions that model rotating accretion disks are yet known due to the extreme mathematical difficulty of constructing them.
Mathematical definition
The stress–energy tensor of a relativistic pressureless fluid can be written in the simple form T^ab = ρ u^a u^b.
Here, the world lines of the dust particles are the integral curves of the four-velocity u^a, and the matter density in the dust's rest frame is given by the scalar function ρ.
Eigenvalues
Because the stress-energy tensor is a rank-one matrix, a short computation shows that the characteristic polynomial
of the Einstein tensor in a dust solution will have the form χ(λ) = λ³ (λ − λ1), with a triple root at zero and a single non-zero root λ1 = ±8πρ (the sign depending on the signature convention).
Multiplying out this product, we find that the coefficients must satisfy the following three algebraically independent (and invariant) conditions: the coefficients of λ², λ, and the constant term must all vanish.
Using Newton's identities, in terms of the sums of the powers of the roots (eigenvalues), which are also the traces of the powers of the Einstein tensor itself, these conditions become: t2 = t1², t3 = t1³, t4 = t1⁴, where tk denotes the trace of the k-th power of the Einstein tensor.
In tensor index notation, this can be written using the Ricci scalar R (the trace of the Einstein tensor is −R) as: G^a_b G^b_a = R², G^a_b G^b_c G^c_a = −R³, G^a_b G^b_c G^c_d G^d_a = R⁴.
This eigenvalue criterion is sometimes useful in searching for dust solutions, since it shows that very few Lorentzian manifolds could possibly admit an interpretation, in general relativity, as a dust solution.
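A brief numerical check of this rank-one structure and the trace-power conditions, written here for the stress-energy tensor T^ab = ρ u^a u^b (the Einstein tensor of a dust differs from it only by the factor 8π); the density and boost velocity below are arbitrary illustrative values:

```python
import numpy as np

# Dust stress-energy tensor T^{ab} = rho * u^a u^b in Minkowski space,
# signature -+++, geometric units.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])         # used to lower one index

rho = 2.0
v = 0.3
gamma = 1.0 / np.sqrt(1.0 - v**2)
u = np.array([gamma, gamma * v, 0.0, 0.0])   # normalized: u_a u^a = -1

T_upup = rho * np.outer(u, u)                # T^{ab}
T = T_upup @ eta                             # mixed tensor T^a_b

print(np.linalg.matrix_rank(T))              # 1: a rank-one matrix
t1 = np.trace(T)
print(np.isclose(np.trace(T @ T), t1**2))        # t2 = t1^2 -> True
print(np.isclose(np.trace(T @ T @ T), t1**3))    # t3 = t1^3 -> True
```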
Examples
Null dust solution
A null dust solution is a dust solution where the Einstein tensor is null.
Bianchi dust
A Bianchi dust model exhibits various types of Lie algebras of Killing vector fields.
Special cases include FLRW and Kasner dust.
Kasner dust
A Kasner dust is the simplest cosmological model exhibiting anisotropic expansion.
FLRW dust
Friedmann–Lemaître–Robertson–Walker (FLRW) dusts are homogeneous and isotropic. These solutions are often referred to as the matter-dominated FLRW models.
Rotating dust
The van Stockum dust is a cylindrically symmetric rotating dust.
The Neugebauer–Meinel dust models a rotating disk of dust matched to an axisymmetric vacuum exterior. This solution has been called the most remarkable exact solution discovered since the Kerr vacuum.
Other solutions
Noteworthy individual dust solutions include:
Lemaître–Tolman–Bondi (LTB) dusts (some of the simplest inhomogeneous cosmological models, often employed as models of gravitational collapse)
Kantowski–Sachs dusts (cosmological models which exhibit perturbations from FLRW models)
Gödel metric
See also
Lorentz group
References
Gives many examples of exact dust solutions.
Exact solutions in general relativity | Dust solution | [
"Mathematics"
] | 737 | [
"Exact solutions in general relativity",
"Mathematical objects",
"Equations"
] |
3,135,098 | https://en.wikipedia.org/wiki/World%20Conference%20on%20Breeding%20Endangered%20Species%20in%20Captivity%20as%20an%20Aid%20to%20their%20Survival | The World Conference on Breeding Endangered Species in Captivity as an Aid to their Survival (WCBESCAS) is the world's first conference on captive breeding. Started by the Fauna and Flora Preservation Society, due to efforts by the famous naturalist and pioneer of captive breeding Gerald Durrell, the first conference was held in 1972 at Jersey (the location of Durrell's Jersey Zoo, one of the few zoos of the world to practise captive breeding at that time). The conference provided a common scientific meeting ground for captive breeding issues for the first time. The conference has been held at:
1st WCBESCAS; Jersey, 1972
2nd WCBESCAS; London, 1976
3rd WCBESCAS; San Diego, 1979
4th WCBESCAS; Harderwijk, 1984
5th WCBESCAS; Cincinnati, 1988
6th WCBESCAS; Jersey, 1992, 'The Roles of Zoos in Global Conservation'
7th WCBESCAS; Cincinnati, 1999, 'Linking Zoos and Field Research to Advance Conservation'
Proceedings of the Conference
Robert D. Martin (ed.): Breeding Endangered Species in Captivity. Academic Press, London 1975,
Peter J.S. Olney (ed.): International Zoo Yearbook 1977, Vol 17, London 1977, (Section I: Proceedings of the Second World Conference of Breeding Endangered Species in Captivity)
Peter J.S. Olney (ed.): International Zoo Yearbook 1980, Vol 20, London 1980, (Section I: Proceedings of the Third World Conference of Breeding Endangered Species in Captivity)
Peter J.S. Olney (ed.): International Zoo Yearbook 1984/85, Vol 24/25, London 1986, (Section I: Proceedings of the Fourth World Conference of Breeding Endangered Species in Captivity)
Betsy Lynne Dresser, R.W. Reece & Edward J. Maruska (eds.), Proceedings, 5th World Conference on Breeding Endangered Species in Captivity, October 9–12, 1988, Cincinnati, Ohio. Cincinnati Zoo and Botanical Garden, Cincinnati, Ohio, 1988, 723 p.
Peter J.S. Olney, Georgina M. Mace & Anna T.C. Feistner (eds.): Creative Conservation. Interactive management of wild and captive animals. Chapman & Hall, London 1994, (This book is the result of the deliberations of the Sixth World Conference on Breeding Endangered Species, held in Jersey 1992. Editors and contributors have further developed the key issues tackled at the conference and the resulting chapters represent a kind of updated conference proceedings.)
Terri L. Roth, W.F. Swanson & L.K. Blattman (eds.), Proceedings of Seventh World Conference on Breeding Endangered Species: Linking Zoos and Field Research to Advance Conservation, Cincinnati, Ohio, May 22–26, 1999. Cincinnati Zoo and Botanic Garden, Cincinnati, Ohio, 1999, 349 p.
See also
Ex-situ conservation
References
Jeremy J.C. Mallinson; The Evolution of the World Conference Series on Breeding Endangered Species, 1972-1999; International Zoo News, Vol. 46/5 (No. 294); July/August 1999
Animal breeding organizations
Nature conservation organizations
Environmental conferences
1972 in the environment
Endangered species | World Conference on Breeding Endangered Species in Captivity as an Aid to their Survival | [
"Biology"
] | 644 | [
"Biota by conservation status",
"Endangered species"
] |
3,135,183 | https://en.wikipedia.org/wiki/Women%20in%20science | The presence of women in science spans the earliest times of the history of science wherein they have made significant contributions. Historians with an interest in gender and science have researched the scientific endeavors and accomplishments of women, the barriers they have faced, and the strategies implemented to have their work peer-reviewed and accepted in major scientific journals and other publications. The historical, critical, and sociological study of these issues has become an academic discipline in its own right.
The involvement of women in medicine occurred in several early Western civilizations, and the study of natural philosophy in ancient Greece was open to women. Women contributed to the proto-science of alchemy in the first or second centuries CE. During the Middle Ages, religious convents were an important place of education for women, and some of these communities provided opportunities for women to contribute to scholarly research. The 11th century saw the emergence of the first universities; women were, for the most part, excluded from university education. Outside academia, botany was the science that benefitted most from the contributions of women in early modern times. The attitude toward educating women in medical fields appears to have been more liberal in Italy than elsewhere. The first known woman to earn a university chair in a scientific field of studies was the eighteenth-century Italian scientist Laura Bassi.
Although gender roles were still largely deterministic in the eighteenth century, women made substantial advances in science. During the nineteenth century, women were excluded from most formal scientific education, but they began to be admitted into learned societies during this period. In the later nineteenth century, the rise of the women's college provided jobs for women scientists and opportunities for education. Marie Curie paved the way for scientists to study radioactive decay and discovered the elements radium and polonium. Working as a physicist and chemist, she conducted pioneering research on radioactive decay, was the first woman to receive a Nobel Prize, and became the first person to receive a second Nobel Prize when she won the prize in Chemistry. Sixty women were awarded the Nobel Prize between 1901 and 2022, twenty-four of them in physics, chemistry, or physiology or medicine.
Cross-cultural perspectives
In the 1970s and 1980s, many books and articles about women scientists were appearing; virtually all of the published sources ignored women of color and women outside of Europe and North America. The formation of the Kovalevskaia Fund in 1985 and the Organization for Women in Science for the Developing World in 1993 gave more visibility to previously marginalized women scientists, but even today there is a dearth of information about current and historical women in science in developing countries. According to academic Ann Hibner Koblitz:
Koblitz has said that these generalizations about women in science often do not hold up cross-culturally:
Historical examples
Ancient history
The involvement of women in the field of medicine has been recorded in several early civilizations. An ancient Egyptian physician, Peseshet, described in an inscription as "lady overseer of the female physicians", is the earliest known female physician named in the history of science. Agamede was cited by Homer as a healer in ancient Greece before the Trojan War (c. 1194–1184 BCE). According to one late antique legend, Agnodice was the first female physician to practice legally in fourth century BCE Athens.
The study of natural philosophy in ancient Greece was open to women. Recorded examples include Aglaonike, who predicted eclipses; and Theano, mathematician and physician, who was a pupil (possibly also wife) of Pythagoras, and one of a school in Crotone founded by Pythagoras, which included many other women. A passage in Pollux speaks about those who invented the process of coining money mentioning Pheidon and Demodike from Cyme, wife of the Phrygian king, Midas, and daughter of King Agamemnon of Cyme. A daughter of a certain Agamemnon, king of Aeolian Cyme, married a Phrygian king called Midas. This link may have facilitated the Greeks "borrowing" their alphabet from the Phrygians because the Phrygian letter shapes are closest to the inscriptions from Aeolis.
During the period of the Babylonian civilization, around 1200 BCE, two perfumeresses named Tapputi-Belatekallim and -ninu (the first half of her name is unknown) were able to obtain essences from plants by using extraction and distillation procedures. During the Egyptian dynasties, women were involved in applied chemistry, such as the making of beer and the preparation of medicinal compounds. Women are recorded as having made major contributions to alchemy; many of them lived in Alexandria around the 1st or 2nd centuries CE, where the gnostic tradition led to female contributions being valued. The most famous of the women alchemists, Mary the Jewess, is credited with inventing several chemical instruments, including the double boiler (bain-marie), and with improving or creating the distillation equipment of her time, namely the kerotakis (a simple still) and the tribikos (a complex distillation device).
Hypatia of Alexandria (c. 350–415 CE), daughter of Theon of Alexandria, was a philosopher, mathematician, and astronomer. She is the earliest female mathematician about whom detailed information has survived. Hypatia is credited with writing several important commentaries on geometry, algebra and astronomy. Hypatia was the head of a philosophical school and taught many students. In 415 CE, she became entangled in a political dispute between Cyril, the bishop of Alexandria, and Orestes, the Roman governor, which resulted in a mob of Cyril's supporters stripping her, dismembering her, and burning the pieces of her body.
Medieval Europe
The early parts of the European Middle Ages, also known as the Dark Ages, were marked by the decline of the Roman Empire. The Latin West was left with great difficulties that affected the continent's intellectual production dramatically. Although nature was still seen as a system that could be comprehended in the light of reason, there was little innovative scientific inquiry. The Arabic world deserves credit for preserving scientific advancements. Arabic scholars produced original scholarly work and generated copies of manuscripts from Classical periods. During this period, Christianity underwent a period of resurgence, and Western civilization was bolstered as a result. This phenomenon was, in part, due to monasteries and nunneries that nurtured the skills of reading and writing, and the monks and nuns who collected and copied important writings produced by scholars of the past.
As mentioned before, convents were an important place of education for women during this period, since monasteries and nunneries encouraged the skills of reading and writing, and some of these communities provided opportunities for women to contribute to scholarly research. An example is the German abbess Hildegard of Bingen (1098–1179 AD), a famous philosopher and botanist, whose prolific writings include treatments of various scientific subjects, including medicine, botany and natural history (c. 1151–58). Another famous German abbess, Hroswitha of Gandersheim (935–1000 AD), also helped encourage women to be intellectual. However, as nunneries grew in number and power, the all-male clerical hierarchy grew unwelcoming, and the resulting backlash against women's advancement led many religious orders to close themselves to women and disband their nunneries, excluding women from the ability to learn to read and write. With that, the world of science became closed off to women, limiting women's influence in science.
Entering the 11th century, the first universities emerged. Women were, for the most part, excluded from university education. However, there were some exceptions. The Italian University of Bologna allowed women to attend lectures from its inception, in 1088.
The attitude to educating women in medical fields in Italy appears to have been more liberal than in other places. The physician, Trotula di Ruggiero, is supposed to have held a chair at the Medical School of Salerno in the 11th century, where she taught many noble Italian women, a group sometimes referred to as the "ladies of Salerno". Several influential texts on women's medicine, dealing with obstetrics and gynecology, among other topics, are also often attributed to Trotula.
Dorotea Bucca was another distinguished Italian physician. She held a chair of philosophy and medicine at the University of Bologna for over forty years from 1390. Other Italian women whose contributions in medicine have been recorded include Abella, Jacobina Félicie, Alessandra Giliani, Rebecca de Guarna, Margarita, Mercuriade (14th century), Constance Calenda, Calrice di Durisio (15th century), Constanza, Maria Incarnata and Thomasia de Mattio.
Despite the success of some women, cultural biases affecting their education and participation in science were prominent in the Middle Ages. For example, Saint Thomas Aquinas, a Christian scholar, wrote, referring to women, "She is mentally incapable of holding a position of authority."
Scientific Revolutions of 1600s and 1700s
Margaret Cavendish, a seventeenth-century aristocrat, took part in some of the most important scientific debates of that time. She was, however, not inducted into the English Royal Society, although she was once allowed to attend a meeting. She wrote a number of works on scientific matters, including Observations upon Experimental Philosophy (1666) and Grounds of Natural Philosophy. In these works she was especially critical of the growing belief that humans, through science, were the masters of nature. The 1666 work attempted to heighten female interest in science. The observations provided a critique of the experimental science of Bacon and criticized microscopes as imperfect machines.
Isabella Cortese, an Italian alchemist, is best known for her book I secreti della signora Isabella Cortese (The Secrets of Isabella Cortese). Cortese was able to manipulate nature in order to create several medicinal, alchemical, and cosmetic "secrets", or experiments. Isabella's book belongs to a larger genre of books of secrets that became extremely popular among the elite during the 16th century. Despite the low percentage of literate women in Cortese's era, the majority of the alchemical and cosmetic "secrets" in such books were geared towards women. This included, but was not limited to, pregnancy, fertility, and childbirth.
Sophia Brahe, sister of Tycho Brahe, was a Danish horticulturalist. Brahe was trained by her older brother in chemistry and horticulture but taught herself astronomy by studying books in German. Sophia visited her brother at Uranienborg on numerous occasions and assisted in the observations behind his work De nova stella. These observations of the supernova SN 1572 helped refute the geocentric model of the universe.
Tycho wrote the poem Urania Titani about his sister Sophia and her husband Erik; Urania represented Sophia and Titan represented Erik. Tycho used this poem to show his appreciation for his sister and all of her work.
In Germany, the tradition of female participation in craft production enabled some women to become involved in observational science, especially astronomy. Between 1650 and 1710, women were 14% of German astronomers. The most famous female astronomer in Germany was Maria Winkelmann. She was educated by her father and uncle and received training in astronomy from a nearby self-taught astronomer. Her chance to be a practising astronomer came when she married Gottfried Kirch, Prussia's foremost astronomer. She became his assistant at the astronomical observatory operated in Berlin by the Academy of Science. She made original contributions, including the discovery of a comet. When her husband died, Winkelmann applied for a position as assistant astronomer at the Berlin Academy – for which she had experience. As a woman – with no university degree – she was denied the post. Members of the Berlin Academy feared that they would establish a bad example by hiring a woman. "Mouths would gape", they said.
Winkelmann's problems with the Berlin Academy reflect the obstacles women faced in being accepted in scientific work, which was considered to be chiefly for men. No woman was invited to join either the Royal Society of London or the French Academy of Sciences until the twentieth century. Most people in the seventeenth century viewed a life devoted to any kind of scholarship as being at odds with the domestic duties women were expected to perform.
A founder of modern botany and zoology, the German Maria Sibylla Merian (1647–1717), spent her life investigating nature. When she was thirteen, Sibylla began growing caterpillars and studying their metamorphosis into butterflies. She kept a "Study Book" which recorded her investigations into natural philosophy. In her first publication, The New Book of Flowers, she used imagery to catalog the lives of plants and insects. After her husband died, and after a brief stint living in Wieuwerd, she and her daughter journeyed to Paramaribo for two years to observe insects, birds, reptiles, and amphibians. She returned to Amsterdam and published The Metamorphosis of the Insects of Suriname, which "revealed to Europeans for the first time the astonishing diversity of the rain forest." She was a botanist and entomologist who was known for her artistic illustrations of plants and insects. Uncommon for that era, she traveled to South America and Surinam, where, assisted by her daughters, she illustrated the plant and animal life of those regions.
Overall, the Scientific Revolution did little to change people's ideas about the nature of women – more specifically – their capacity to contribute to science just as men do. According to Jackson Spielvogel, 'Male scientists used the new science to spread the view that women were by nature inferior and subordinate to men and suited to play a domestic role as nurturing mothers. The widespread distribution of books ensured the continuation of these ideas'.
Eighteenth century
Although women excelled in many scientific areas during the eighteenth century, they were discouraged from learning about plant reproduction. Carl Linnaeus' system of plant classification based on sexual characteristics drew attention to botanical licentiousness, and people feared that women would learn immoral lessons from nature's example. Women were often depicted as both innately emotional and incapable of objective reasoning, or as natural mothers reproducing a natural, moral society.
The eighteenth century was characterized by three divergent views towards women: that women were mentally and socially inferior to men, that they were equal but different, and that women were potentially equal in both mental ability and contribution to society. While individuals such as Jean-Jacques Rousseau believed women's roles were confined to motherhood and service to their male partners, the Enlightenment was a period in which women experienced expanded roles in the sciences.
The rise of salon culture in Europe brought philosophers and their conversation to an intimate setting where men and women met to discuss contemporary political, social, and scientific topics. While Jean-Jacques Rousseau attacked women-dominated salons as producing 'effeminate men' that stifled serious discourse, salons were characterized in this era by the mixing of the sexes.
Lady Mary Wortley Montagu defied convention by introducing smallpox inoculation through variolation to Western medicine after witnessing it during her travels in the Ottoman Empire. In 1718 Wortley Montagu had her son inoculated and, when a smallpox epidemic struck England in 1721, she had her daughter inoculated as well. This was the first such operation done in Britain. She persuaded Caroline of Ansbach to test the treatment on prisoners. Princess Caroline subsequently inoculated her two daughters in 1722. Under a pseudonym, Wortley Montagu published an article describing and advocating in favor of inoculation in September 1722.
After publicly defending forty-nine theses in the Palazzo Pubblico, Laura Bassi was awarded a doctorate of philosophy in 1732 at the University of Bologna. Thus, Bassi became the second woman in the world to earn a philosophy doctorate, after Elena Cornaro Piscopia in 1678, 54 years prior. She subsequently defended twelve additional theses at the Archiginnasio, the main building of the University of Bologna, which allowed her to petition for a teaching position at the university. In 1732 the university granted Bassi a professorship in philosophy, making her a member of the Academy of the Sciences and the first woman to earn a professorship in physics at a university in Europe. But the university held the view that women were to lead a private life, and from 1746 to 1777 she gave only one formal dissertation per year, ranging in topic from the problem of gravity to electricity. Because she could not lecture publicly at the university regularly, she began conducting private lessons and experiments from home in 1749. However, due to her increase in responsibilities and public appearances on behalf of the university, Bassi was able to petition for regular pay increases, which in turn were used to pay for her advanced equipment. Bassi earned the highest salary paid by the University of Bologna, 1,200 lire. In 1776, at the age of 65, she was appointed to the chair in experimental physics by the Bologna Institute of Sciences, with her husband as a teaching assistant.
According to Britannica, Maria Gaetana Agnesi is "considered to be the first woman in the Western world to have achieved a reputation in mathematics." She is credited as the first woman to write a mathematics handbook, the Instituzioni analitiche ad uso della gioventù italiana (Analytical Institutions for the Use of Italian Youth). Published in 1748, it "was regarded as the best introduction extant to the works of Euler." The goal of this work was, according to Agnesi herself, to give a systematic illustration of the different results and theorems of infinitesimal calculus. In 1750 she became the second woman to be granted a professorship at a European university; although also appointed to the University of Bologna, she never taught there.
The German Dorothea Erxleben was instructed in medicine by her father from an early age and Bassi's university professorship inspired Erxleben to fight for her right to practise medicine. In 1742 she published a tract arguing that women should be allowed to attend university. After being admitted to study by a dispensation of Frederick the Great, Erxleben received her M.D. from the University of Halle in 1754. She went on to analyse the obstacles preventing women from studying, among them housekeeping and children. She became the first female medical doctor in Germany.
In 1741–42 Charlotta Frölich became the first woman to be published by the Royal Swedish Academy of Sciences with three books in agricultural science. In 1748 Eva Ekeblad became the first woman inducted into that academy. In 1746 Ekeblad had written to the academy about her discoveries of how to make flour and alcohol out of potatoes. Potatoes had been introduced into Sweden in 1658 but had been cultivated only in the greenhouses of the aristocracy. Ekeblad's work turned potatoes into a staple food in Sweden, and increased the supply of wheat, rye and barley available for making bread, since potatoes could be used instead to make alcohol. This greatly improved the country's eating habits and reduced the frequency of famines. Ekeblad also discovered a method of bleaching cotton textile and yarn with soap in 1751, and of replacing the dangerous ingredients in cosmetics of the time by using potato flour in 1752.
Émilie du Châtelet, a close friend of Voltaire, was the first scientist to appreciate the significance of kinetic energy, as opposed to momentum. She repeated and described the importance of an experiment originally devised by Willem 's Gravesande, showing that the impact of falling objects is proportional not to their velocity but to the square of their velocity. This understanding is considered to have made a profound contribution to Newtonian mechanics. In 1749 she completed the French translation of Newton's Philosophiae Naturalis Principia Mathematica (the Principia), including her derivation of the notion of conservation of energy from its principles of mechanics. Published ten years after her death, her translation and commentary of the Principia contributed to the completion of the scientific revolution in France and to its acceptance in Europe.
Marie-Anne Pierrette Paulze and her husband Antoine Lavoisier rebuilt the field of chemistry, which had its roots in alchemy and at the time was a convoluted science dominated by Georg Stahl's theory of phlogiston. Paulze accompanied Lavoisier in his lab, making entries into lab notebooks and sketching diagrams of his experimental designs. The training she had received allowed her to accurately and precisely draw experimental apparatuses, which ultimately helped many of Lavoisier's contemporaries to understand his methods and results. Paulze translated various works about phlogiston into French. One of her most important translations was that of Richard Kirwan's Essay on Phlogiston and the Constitution of Acids, which she both translated and critiqued, adding footnotes as she went along and pointing out errors in the chemistry made throughout the paper. Paulze was instrumental in the 1789 publication of Lavoisier's Elementary Treatise on Chemistry, which presented a unified view of chemistry as a field. This work proved pivotal in the progression of chemistry, as it presented the idea of conservation of mass as well as a list of elements and a new system for chemical nomenclature. She also kept strict records of the procedures followed, lending validity to the findings Lavoisier published.
The astronomer Caroline Herschel was born in Hanover but moved to England, where she acted as an assistant to her brother, William Herschel. Throughout her writings, she repeatedly made it clear that she desired to earn an independent wage and be able to support herself. When the crown began paying her for her assistance to her brother in 1787, she became the first woman to receive a salary for services to science, at a time when even men rarely received wages for scientific enterprises. During 1786–97 she discovered eight comets, the first on 1 August 1786. She had unquestioned priority as discoverer of five of the comets and rediscovered Comet Encke in 1795. Five of her comets were published in Philosophical Transactions, and a packet of paper bearing the superscription "This is what I call the Bills and Receipts of my Comets" contains some data connected with the discovery of each of these objects. William was summoned to Windsor Castle to demonstrate Caroline's comet to the royal family. Caroline Herschel is often credited as the first woman to discover a comet; however, Maria Kirch discovered a comet in the early 1700s but is often overlooked because, at the time, the discovery was attributed to her husband, Gottfried Kirch.
Nineteenth century
Early nineteenth century
Science remained a largely amateur profession during the early part of the nineteenth century. Botany was considered a popular and fashionable activity, and one particularly suitable to women. In the later eighteenth and early nineteenth centuries, it was one of the most accessible areas of science for women in both England and North America.
However, as the nineteenth century progressed, botany and other sciences became increasingly professionalized, and women were increasingly excluded. Women's contributions were limited by their exclusion from most formal scientific education, but began to be recognized through their occasional admittance into learned societies during this period.
Scottish scientist Mary Fairfax Somerville carried out experiments in magnetism, presenting a paper entitled 'The Magnetic Properties of the Violet Rays of the Solar Spectrum' to the Royal Society in 1826, the second woman to do so. She also wrote several mathematical, astronomical, physical and geographical texts, and was a strong advocate for women's education. In 1835, she and Caroline Herschel were the first two women elected as Honorary Members of the Royal Astronomical Society.
English mathematician Ada, Lady Lovelace, a pupil of Somerville, corresponded with Charles Babbage about applications for his analytical engine. In her notes (1842–43) appended to her translation of Luigi Menabrea's article on the engine, she foresaw wide applications for it as a general-purpose computer, including composing music. She has been credited as writing the first computer program, though this has been disputed.
In Germany, institutes for "higher" education of women (Höhere Mädchenschule, in some regions called Lyzeum) were founded at the beginning of the century. The Deaconess Institute at Kaiserswerth was established in 1836 to instruct women in nursing. Elizabeth Fry visited the institute in 1840 and was inspired to found the London Institute of Nursing, and Florence Nightingale studied there in 1851.
In the US, Maria Mitchell made her name by discovering a comet in 1847, but also contributed calculations to the Nautical Almanac produced by the United States Naval Observatory. She became the first woman member of the American Academy of Arts and Sciences in 1848 and of the American Association for the Advancement of Science in 1850.
Other notable female scientists during this period include:
in Britain, Mary Anning (paleontologist), Anna Atkins (botanist), Janet Taylor (astronomer)
in France, Marie-Sophie Germain (mathematician), Jeanne Villepreux-Power (marine biologist)
Late 19th century in western Europe
The latter part of the 19th century saw a rise in educational opportunities for women. Schools aiming to provide education for girls similar to that afforded to boys were founded in the UK, including the North London Collegiate School (1850), Cheltenham Ladies' College (1853) and the Girls' Public Day School Trust schools (from 1872). The first UK women's university college, Girton, was founded in 1869, and others soon followed: Newnham (1871) and Somerville (1879).
The Crimean War (1854–1856) contributed to establishing nursing as a profession, making Florence Nightingale a household name. A public subscription allowed Nightingale to establish a school of nursing in London in 1860, and schools following her principles were established throughout the UK. Nightingale was also a pioneer in public health as well as a statistician.
James Barry became the first British woman to gain a medical qualification in 1812, passing as a man. Elizabeth Garrett Anderson was the first openly female Briton to qualify medically, in 1865. With Sophia Jex-Blake, American Elizabeth Blackwell and others, Garrett Anderson founded the first UK medical school to train women, the London School of Medicine for Women, in 1874.
Annie Scott Dill Maunder was a pioneer in astronomical photography, especially of sunspots. A mathematics graduate of Girton College, Cambridge, she was first hired (in 1890) to be an assistant to Edward Walter Maunder, discoverer of the Maunder Minimum, the head of the solar department at Greenwich Observatory. They worked together to observe sunspots and to refine the techniques of solar photography. They married in 1895. Annie's mathematical skills made it possible to analyse the years of sunspot data that Maunder had been collecting at Greenwich. She also designed a small, portable wide-angle camera with a lens. In 1898, the Maunders traveled to India, where Annie took the first photographs of the Sun's corona during a solar eclipse. By analysing the Cambridge records for both sunspots and geomagnetic storm, they were able to show that specific regions of the Sun's surface were the source of geomagnetic storms and that the Sun did not radiate its energy uniformly into space, as William Thomson, 1st Baron Kelvin had declared.
In Prussia women could go to university from 1894 and were allowed to receive a PhD. In 1908 all remaining restrictions for women were terminated.
Alphonse Rebière published a book in 1897, in France, entitled Les Femmes dans la science (Women in Science) which listed the contributions and publications of women in science.
Other notable female scientists during this period include:
in Britain, Hertha Marks Ayrton (mathematician, engineer), Margaret Huggins (astronomer), Beatrix Potter (mycologist)
in France, Dorothea Klumpke-Roberts (American-born astronomer)
in Germany, Amalie Dietrich (naturalist), Agnes Pockels (physicist)
in Russia and Sweden, Sofia Kovalevskaya (mathematician)
Late nineteenth-century Russians
In the second half of the 19th century, a large proportion of the most successful women in the STEM fields were Russians. Although many women received advanced training in medicine in the 1870s, in other fields women were barred and had to go to western Europe, mainly Switzerland, in order to pursue scientific studies. In her book about these "women of the [eighteen] sixties" (шестидесятницы), as they were called, Ann Hibner Koblitz writes:
Among the successful scientists were Nadezhda Suslova (1843–1918), the first woman in the world to obtain a medical doctorate fully equivalent to men's degrees; Maria Bokova-Sechenova (1839–1929), a pioneer of women's medical education who received two doctoral degrees, one in medicine in Zürich and one in physiology in Vienna; Iulia Lermontova (1846–1919), the first woman in the world to receive a doctoral degree in chemistry; the marine biologist Sofia Pereiaslavtseva (1849–1903), director of the Sevastopol Biological Station and winner of the Kessler Prize of the Russian Society of Natural Scientists; and the mathematician Sofia Kovalevskaia (1850–1891), the first woman in 19th century Europe to receive a doctorate in mathematics and the first to become a university professor in any field.
Late nineteenth century in the United States
In the later nineteenth century the rise of the women's college provided jobs for women scientists, and opportunities for education.
Women's colleges produced a disproportionate number of women who went on for PhDs in science.
Many coeducational colleges and universities also opened or started to admit women during this period; such institutions enrolled just over 3,000 women in 1875 and almost 20,000 by 1900.
An example is Elizabeth Blackwell, who became the first certified female doctor in the US when she graduated from Geneva Medical College in 1849. With her sister, Emily Blackwell, and Marie Zakrzewska, Blackwell founded the New York Infirmary for Women and Children in 1857 and the first women's medical college in 1868, providing both training and clinical experience for women doctors. She also published several books on medical education for women.
In 1876, Elizabeth Bragg became the first woman to graduate with a civil engineering degree in the United States, from the University of California, Berkeley.
Early twentieth century
Europe before World War II
Marie Skłodowska-Curie, the first woman to win a Nobel prize in 1903 (physics), went on to become a double Nobel prize winner in 1911, both for her work on radiation. She was the first person to win two Nobel prizes, a feat accomplished by only three others since then. She also was the first woman to teach at Sorbonne University in Paris.
Alice Perry is understood to be the first woman to graduate with a degree in civil engineering in the then United Kingdom of Great Britain and Ireland, in 1906 at Queen's College, Galway, Ireland.
Lise Meitner played a major role in the discovery of nuclear fission. As head of the physics section at the Kaiser Wilhelm Institute in Berlin, she collaborated closely with the head of chemistry, Otto Hahn, on atomic physics until forced to flee Berlin in 1938. In 1939, in collaboration with her nephew Otto Frisch, Meitner derived the theoretical explanation for an experiment performed by Hahn and Fritz Strassman in Berlin, thereby demonstrating the occurrence of nuclear fission. The possibility that Fermi's bombardment of uranium with neutrons in 1934 had instead produced fission by breaking up the nucleus into lighter elements had actually first been raised in print in 1934 by the chemist Ida Noddack (co-discoverer of the element rhenium), but this suggestion had been ignored at the time, as no group made a concerted effort to find any of these light radioactive fission products.
Maria Montessori was the first woman in Southern Europe to qualify as a physician. She developed an interest in the diseases of children and believed in the necessity of educating those regarded as ineducable. For the latter, she argued for the development of teacher training along Froebelian lines, and she developed the principle that would also inform her general educational program: first the education of the senses, then the education of the intellect. Montessori introduced a teaching program that allowed defective children to read and write. She sought to teach skills not by having children repeatedly attempt a task, but by developing exercises that prepared them for it.
Emmy Noether revolutionized abstract algebra, filled in gaps in relativity, and was responsible for a critical theorem about conserved quantities in physics. One notes that the Erlangen program attempted to identify invariants under a group of transformations. On 16 July 1918, before a scientific organization in Göttingen, Felix Klein read a paper written by Emmy Noether, because she was not allowed to present the paper herself. In particular, in what is referred to in physics as Noether's theorem, this paper identified the conditions under which the Poincaré group of transformations (now called a gauge group) for general relativity defines conservation laws. Noether's papers made the requirements for the conservation laws precise. Among mathematicians, Noether is best known for her fundamental contributions to abstract algebra, where the adjective noetherian is nowadays commonly used on many sorts of objects.
Mary Cartwright was a British mathematician who was the first to analyze a dynamical system with chaos. Inge Lehmann, a Danish seismologist, first suggested in 1936 that inside the Earth's molten core there may be a solid inner core. Women such as Margaret Fountaine continued to contribute detailed observations and illustrations in botany, entomology, and related observational fields. Joan Beauchamp Procter, an outstanding herpetologist, was the first woman Curator of Reptiles for the Zoological Society of London at London Zoo.
Florence Sabin was an American medical scientist. Sabin was the first woman faculty member at Johns Hopkins in 1902, and the first woman full-time professor there in 1917. Her scientific and research experience is notable. Sabin published over 100 scientific papers and multiple books.
United States before and during World War II
Women moved into science in significant numbers by 1900, helped by the women's colleges and by opportunities at some of the new universities. Margaret Rossiter's books Women Scientists in America: Struggles and Strategies to 1940 and Women Scientists in America: Before Affirmative Action 1940–1972 provide an overview of this period, stressing the opportunities women found in separate women's work in science.
In 1892, Ellen Swallow Richards called for the "christening of a new science" – "oekology" (ecology) in a Boston lecture. This new science included the study of "consumer nutrition" and environmental education. This interdisciplinary branch of science was later specialized into what is currently known as ecology, while the consumer nutrition focus split off and was eventually relabeled as home economics, which provided another avenue for women to study science. Richards helped to form the American Home Economics Association, which published a journal, the Journal of Home Economics, and hosted conferences. Home economics departments were formed at many colleges, especially at land grant institutions. In her work at MIT, Ellen Richards also introduced the first biology course in its history as well as the focus area of sanitary engineering.
Women also found opportunities in botany and embryology. In psychology, women earned doctorates but were encouraged to specialize in educational and child psychology and to take jobs in clinical settings, such as hospitals and social welfare agencies.
In 1901, Annie Jump Cannon first noticed that it was a star's temperature that was the principal distinguishing feature among different spectra. This led to re-ordering of the ABC types by temperature instead of hydrogen absorption-line strength. Due to Cannon's work, most of the then-existing classes of stars were thrown out as redundant. Afterward, astronomy was left with the seven primary classes recognized today, in order: O, B, A, F, G, K, M; that has since been extended.
Henrietta Swan Leavitt first published her study of variable stars in 1908. This discovery became known as the "period-luminosity relationship" of Cepheid variables. Our picture of the universe was changed forever, largely because of Leavitt's discovery.
The accomplishments of Edwin Hubble, renowned American astronomer, were made possible by Leavitt's groundbreaking research and Leavitt's Law. "If Henrietta Leavitt had provided the key to determine the size of the cosmos, then it was Edwin Powell Hubble who inserted it in the lock and provided the observations that allowed it to be turned", wrote David H. and Matthew D.H. Clark in their book Measuring the Cosmos.
Hubble often said that Leavitt deserved the Nobel for her work. Gösta Mittag-Leffler of the Swedish Academy of Sciences had begun paperwork on her nomination in 1924, only to learn that she had died of cancer three years earlier (the Nobel prize cannot be awarded posthumously).
In 1925, Harvard graduate student Cecilia Payne-Gaposchkin demonstrated for the first time from existing evidence on the spectra of stars that stars were made up almost exclusively of hydrogen and helium, one of the most fundamental theories in stellar astrophysics.
Canadian-born Maud Menten worked in the US and Germany. Her most famous work was on enzyme kinetics together with Leonor Michaelis, based on earlier findings of Victor Henri. This resulted in the Michaelis–Menten equations. Menten also invented the azo-dye coupling reaction for alkaline phosphatase, which is still used in histochemistry. She characterised bacterial toxins from B. paratyphosus, Streptococcus scarlatina and Salmonella ssp., and conducted the first electrophoretic separation of proteins in 1944. She worked on the properties of hemoglobin, regulation of blood sugar level, and kidney function.
World War II brought some new opportunities. The Office of Scientific Research and Development, under Vannevar Bush, began in 1941 to keep a registry of men and women trained in the sciences. Because there was a shortage of workers, some women were able to work in jobs they might not otherwise have accessed. Many women worked on the Manhattan Project or on scientific projects for the United States military services. Women who worked on the Manhattan Project included Leona Woods Marshall, Katharine Way, and Chien-Shiung Wu. It was Wu who, drawing on an earlier draft of her research, confirmed Enrico Fermi's hypothesis that the buildup of Xe-135 was impeding the B Reactor from working. The adjustments made quickly let the project resume its course.
Wu would later provide the first experimental corroboration bearing on Albert Einstein's EPR paradox, and demonstrate the first violation of parity and charge-conjugation symmetry, thereby laying part of the conceptual basis for the future Standard Model of particle physics and the rapid development of the new field.
Women in other disciplines looked for ways to apply their expertise to the war effort. Three nutritionists, Lydia J. Roberts, Hazel K. Stiebeling, and Helen S. Mitchell, developed the Recommended Dietary Allowance in 1941 to help military and civilian groups make plans for group feeding situations. The RDAs proved necessary, especially, once foods began to be rationed. Rachel Carson worked for the United States Bureau of Fisheries, writing brochures to encourage Americans to consume a wider variety of fish and seafood. She also contributed to research to assist the Navy in developing techniques and equipment for submarine detection.
Women in psychology formed the National Council of Women Psychologists, which organized projects related to the war effort. The NCWP elected Florence Laura Goodenough president. In the social sciences, several women contributed to the Japanese Evacuation and Resettlement Study, based at the University of California. This study was led by sociologist Dorothy Swaine Thomas, who directed the project and synthesized information from her informants, mostly graduate students in anthropology. These included Tamie Tsuchiyama, the only Japanese-American woman to contribute to the study, and Rosalie Hankey Wax.
In the United States Navy, female scientists conducted a wide range of research. Mary Sears, a planktonologist, researched military oceanographic techniques as head of the Hydrographic Office's Oceanographic Unit. Florence van Straten, a chemist, worked as an aerological engineer, studying the effects of weather on military combat. Grace Hopper, a mathematician, became one of the first computer programmers for the Mark I computer. Mina Spiegel Rees, also a mathematician, was the chief technical aide for the Applied Mathematics Panel of the National Defense Research Committee.
Gerty Cori was a biochemist who discovered the mechanism by which glycogen, a derivative of glucose, is broken down in muscle to form lactic acid and later re-formed as a way to store energy. For this discovery she and her co-workers were awarded the Nobel Prize in 1947, making her the third woman, and the first American woman, to win a Nobel Prize in science, as well as the first woman ever to be awarded the Nobel Prize in Physiology or Medicine. Cori is among several scientists whose work is commemorated by a U.S. postage stamp.
Late 20th century to early 21st century
Nina Byers notes that before 1976, fundamental contributions of women to physics were rarely acknowledged. Women worked unpaid or in positions lacking the status they deserved. That imbalance is gradually being redressed.
In the early 1980s, Margaret Rossiter presented two concepts for understanding the statistics behind women in science as well as the disadvantages women continued to suffer. She coined the terms "hierarchical segregation" and "territorial segregation." The former term describes the phenomenon in which the further one goes up the chain of command in the field, the smaller the presence of women. The latter describes the phenomenon in which women "cluster in scientific disciplines."
A recent book titled Athena Unbound provides a life-course analysis (based on interviews and surveys) of women in science from early childhood interest, through university, graduate school and the academic workplace. The thesis of this book is that "Women face a special series of gender related barriers to entry and success in scientific careers that persist, despite recent advances".
The L'Oréal-UNESCO Awards for Women in Science were set up in 1998, with prizes alternating each year between materials science and the life sciences. One award is given for each geographical region: Africa and the Middle East, Asia-Pacific, Europe, Latin America and the Caribbean, and North America. By 2017, these awards had recognized almost 100 laureates from 30 countries. Two of the laureates have gone on to win the Nobel Prize: Ada Yonath (2008) and Elizabeth Blackburn (2009). Fifteen promising young researchers also receive an International Rising Talent fellowship each year within this programme.
Europe after World War II
South African-born physicist and radiobiologist Tikvah Alper (1909–1995), working in the UK, developed many fundamental insights into biological mechanisms, including the (negative) discovery that the infective agent in scrapie could not be a virus or other structure containing nucleic acid.
French virologist Françoise Barré-Sinoussi performed some of the fundamental work in the identification of the human immunodeficiency virus (HIV) as the cause of AIDS, for which she shared the 2008 Nobel Prize in Physiology or Medicine.
In July 1967, Jocelyn Bell Burnell discovered evidence for the first known radio pulsar, which resulted in the 1974 Nobel Prize in Physics for her supervisor. She was president of the Institute of Physics from October 2008 until October 2010.
Astrophysicist Margaret Burbidge was a member of the B2FH group responsible for originating the theory of stellar nucleosynthesis, which explains how elements are formed in stars. She has held a number of prestigious posts, including the directorship of the Royal Greenwich Observatory.
Mary Cartwright was a mathematician and student of G. H. Hardy. Her work on nonlinear differential equations was influential in the field of dynamical systems.
Rosalind Franklin was a crystallographer whose work helped to elucidate the fine structures of coal, graphite, DNA and viruses. In 1953, her X-ray diffraction work on DNA gave Watson and Crick the basis for conceiving their model of the structure of DNA. They were awarded the Nobel Prize without giving due credit to Franklin, who had died of cancer in 1958.
Jane Goodall is a British primatologist considered to be the world's foremost expert on chimpanzees and is best known for her over 55-year study of social and family interactions of wild chimpanzees. She is the founder of the Jane Goodall Institute and the Roots & Shoots programme.
Dorothy Hodgkin analyzed the molecular structure of complex chemicals by studying the diffraction patterns produced by passing X-rays through crystals. She won the 1964 Nobel Prize in Chemistry for determining the structure of vitamin B12, becoming the third woman to win that prize.
Irène Joliot-Curie, daughter of Marie Curie, won the 1935 Nobel Prize in Chemistry with her husband Frédéric Joliot for their synthesis of new radioactive isotopes, work that helped pave the way towards nuclear fission. This made the Curies the family with the most Nobel laureates to date.
Palaeoanthropologist Mary Leakey discovered the first fossil ape skull, on Rusinga Island, as well as a noted robust australopithecine.
Italian neurologist Rita Levi-Montalcini received the 1986 Nobel Prize in Physiology or Medicine, shared with Stanley Cohen, for the discovery of nerve growth factor (NGF). Her work opened the way to a deeper understanding of conditions such as tumours, delayed wound healing and malformations. Alongside her scientific work, Levi-Montalcini was politically active throughout her life; she was appointed a Senator for Life in the Italian Senate in 2001 and became the first Nobel laureate to reach the age of 100.
Zoologist Anne McLaren conducted studies in genetics which led to advances in in vitro fertilisation. She became the first female officer of the Royal Society in the society's 331-year history.
Christiane Nüsslein-Volhard received the Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. She also started the Christiane Nüsslein-Volhard Foundation (Christiane Nüsslein-Volhard Stiftung), to aid promising young female German scientists with children.
Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory. She co-authored the well-known textbook Methods of Mathematical Physics with her husband Sir Harold Jeffreys.
United States after World War II
Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman were six of the original programmers for the ENIAC, the first general purpose electronic computer.
Linda B. Buck is a neurobiologist who was awarded the 2004 Nobel Prize in Physiology or Medicine along with Richard Axel for their work on olfactory receptors.
Rachel Carson was a marine biologist from the United States who is often credited as a founder of the modern environmental movement. In 1962 she published Silent Spring, a work on the dangers of pesticides, which led to public questioning of the use of harmful pesticides and other chemicals in agriculture. The chemical industry campaigned to discredit Carson, but the federal government called for a review of DDT, which concluded with DDT being banned. Carson died of cancer in 1964, aged 56.
Eugenie Clark, popularly known as The Shark Lady, was an American ichthyologist known for her research on poisonous fish of the tropical seas and on the behavior of sharks.
Ann Druyan is an American writer, lecturer and producer specializing in cosmology and popular science. Druyan has credited her knowledge of science to the 20 years she spent studying with her late husband, Carl Sagan, rather than formal academic training. She was responsible for the selection of music on the Voyager Golden Record for the Voyager 1 and Voyager 2 exploratory missions. Druyan also sponsored the Cosmos 1 spacecraft.
Gertrude B. Elion was an American biochemist and pharmacologist, awarded the Nobel Prize in Physiology or Medicine in 1988 for her work on the differences in biochemistry between normal human cells and pathogens.
Sandra Moore Faber, with Robert Jackson, discovered the Faber–Jackson relation between luminosity and stellar velocity dispersion in elliptical galaxies. She also headed the team which discovered the Great Attractor, a large concentration of mass that is pulling a number of nearby galaxies in its direction.
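In its usual approximate form, the Faber–Jackson relation states that an elliptical galaxy's luminosity L scales with its central stellar velocity dispersion \sigma roughly as a fourth power (the exponent is empirical and varies with the sample and passband):

L \propto \sigma^{4}

Because \sigma can be measured from a galaxy's spectrum alone, the relation provides an estimate of intrinsic luminosity and hence of distance.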
Zoologist Dian Fossey worked with gorillas in Africa from 1967 until her murder in 1985.
Astronomer Andrea Ghez received a MacArthur "genius grant" in 2008 for her work in surmounting the limitations of earthbound telescopes.
Maria Goeppert Mayer was the second female Nobel Prize winner in Physics, for proposing the nuclear shell model of the atomic nucleus. Earlier in her career, she had worked in unofficial or volunteer positions at the university where her husband was a professor. Goeppert Mayer is one of several scientists whose work is commemorated by a U.S. postage stamp.
Sulamith Low Goldhaber and her husband Gerson Goldhaber formed a research team on the K meson and other high-energy particles in the 1950s.
Carol Greider and the Australian born Elizabeth Blackburn, along with Jack W. Szostak, received the 2009 Nobel Prize in Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase.
Rear Admiral Grace Murray Hopper developed the first computer compiler, released in 1952, while working for the Eckert–Mauchly Computer Corporation.
Deborah S. Jin's team at JILA, in Boulder, Colorado, in 2003 produced the first fermionic condensate, a new state of matter.
Stephanie Kwolek, a researcher at DuPont, invented poly-paraphenylene terephthalamide – better known as Kevlar.
Lynn Margulis was a biologist best known for her work on endosymbiotic theory, now generally accepted as the explanation for how certain organelles were formed.
Barbara McClintock's studies of maize genetics demonstrated genetic transposition in the 1940s and 1950s. McClintock had obtained her PhD from Cornell University in 1927. Her discovery of transposition provided a greater understanding of mobile loci within chromosomes and showed that the genome is more fluid than previously thought. She dedicated her life to her research and was awarded the Nobel Prize in Physiology or Medicine in 1983, becoming the first American woman to receive an unshared Nobel Prize. McClintock is one of several scientists whose work is commemorated by a U.S. postage stamp.
Nita Ahuja is a surgeon-scientist known for her work on the CpG island methylator phenotype (CIMP) in cancer. She is currently chief of surgical oncology at Johns Hopkins Hospital, the first woman to head that department.
Carolyn Porco is a planetary scientist best known for her work on the Voyager program and the Cassini–Huygens mission to Saturn. She is also known for her popularization of science, in particular space exploration.
Physicist Helen Quinn, with Roberto Peccei, postulated Peccei–Quinn symmetry; one consequence is a particle known as the axion, a candidate for the dark matter that pervades the universe. Quinn was the first woman to receive the Dirac Medal of the International Centre for Theoretical Physics (ICTP) and the first to receive the Oskar Klein Medal.
Lisa Randall is a theoretical physicist and cosmologist, best known for her work on the Randall–Sundrum model. She was the first tenured female physics professor at Princeton University.
Sally Ride was an astrophysicist and the first American woman, and then-youngest American, to travel to outer space. Ride wrote or co-wrote several books on space aimed at children, with the goal of encouraging them to study science. Ride participated in the Gravity Probe B (GP-B) project, which provided more evidence that the predictions of Albert Einstein's general theory of relativity are correct.
Through her observations of galaxy rotation curves, astronomer Vera Rubin discovered the galaxy rotation problem, now taken to be one of the key pieces of evidence for the existence of dark matter. She was the first woman allowed to observe at the Palomar Observatory.
Sara Seager is a Canadian-American astronomer who is currently a professor at the Massachusetts Institute of Technology and known for her work on extrasolar planets.
Astronomer Jill Tarter is best known for her work on the search for extraterrestrial intelligence, and is the former director of the SETI Institute's Center for SETI Research. Tarter was named one of the 100 most influential people in the world by Time magazine in 2004.
Rosalyn Yalow was the co-winner of the 1977 Nobel Prize in Physiology or Medicine (together with Roger Guillemin and Andrew Schally) for development of the radioimmunoassay (RIA) technique.
Australia after World War II
Amanda Barnard is an Australia-based theoretical physicist specializing in nanomaterials and a winner of the Malcolm McIntosh Prize for Physical Scientist of the Year.
Isobel Bennett, one of Australia's best known marine biologists, was among the first women to go to Macquarie Island with the Australian National Antarctic Research Expeditions (ANARE).
Dorothy Hill was an Australian geologist who became the first female professor at an Australian university.
Ruby Payne-Scott was an Australian who was an early leader in the fields of radio astronomy and radiophysics; she was one of the first radio astronomers and the first woman in the field.
Penny Sackett, a US-born Australian citizen and astronomer, became the first female Chief Scientist of Australia in 2008.
Fiona Stanley, winner of the 2003 Australian of the Year award, is an epidemiologist noted for her research into child and maternal health, birth disorders, and her work in the public health field.
Michelle Simmons, winner of the 2018 Australian of the Year award, is a quantum physicist known for her research and leadership on atomic-scale silicon quantum devices.
Israel after World War II
Ada Yonath, the first woman from the Middle East to win a Nobel prize in the sciences, was awarded the Nobel Prize in Chemistry in 2009 for her studies on the structure and function of the ribosome.
Latin America
Maria Nieves Garcia-Casal, the first woman scientist and nutritionist from Latin America to lead the Latin American Society of Nutrition.
Angela Restrepo Moreno is a microbiologist from Colombia. She first became interested in microorganisms when she had the opportunity to view them through a microscope that belonged to her grandfather. While her research spans several areas, her main focus is fungi and the diseases they cause; in particular, she developed research on paracoccidioidomycosis, a fungal disease that has been diagnosed only in Latin America and was first described in Brazil. Research groups founded by Restrepo pursue two lines of inquiry: the relationship between humans, fungi and the environment, and how the cells of the fungi themselves work.
Alongside her research, Restrepo co-founded the Corporation for Biological Research (CIB), a non-profit devoted to scientific research. She was awarded the SCOPUS Prize in 2007 for her numerous publications, and she continues her research in Colombia.
Susana López Charretón was born in Mexico City, Mexico, in 1957. She is a virologist whose work has focused on rotavirus, which had been discovered only four years before she began studying it. Her early work examined how the virus enters cells and multiplies; thanks to her research, and that of several others, other scientists were able to learn the virus in greater detail. Her research now focuses on the virus's ability to recognise the cells it infects. Together with her husband, López was awarded the Carlos J. Finlay Prize for Microbiology in 2001, and she received the L'Oréal-UNESCO For Women in Science Award in 2012, among several other honours for her research.
Liliana Quintanar Vera is a Mexican chemist. A researcher at the Department of Chemistry of the Center for Research and Advanced Studies, Quintanar focuses on neurodegenerative diseases such as Parkinson's, Alzheimer's and prion disease, as well as degenerative diseases such as diabetes and cataracts. In this research she has examined how copper interacts with the proteins involved in these neurodegenerative diseases.
Quintanar's awards include the Mexican Academy of Sciences Research Prize for Science in 2017, the Marcos Moshinsky Chair award in 2016, a Fulbright Scholarship in 2014, and the L'Oréal-UNESCO For Women in Science Award in 2007.
Nobel laureates
The Nobel Prize and the Prize in Economic Sciences were awarded to women 61 times between 1901 and 2022. One woman, Marie Sklodowska-Curie, has been honored twice, with the 1903 Nobel Prize in Physics and the 1911 Nobel Prize in Chemistry, meaning that 60 women in total were awarded a Nobel Prize between 1901 and 2022. Twenty-five women have been awarded a Nobel Prize in physics, chemistry, or physiology or medicine.
Chemistry
2022 – Carolyn Bertozzi
2020 – Emmanuelle Charpentier, Jennifer Doudna
2018 – Frances Arnold
2009 – Ada E. Yonath
1964 – Dorothy Crowfoot Hodgkin
1935 – Irène Joliot-Curie
1911 – Marie Sklodowska-Curie
Physics
2023 – Anne L'Huillier
2020 – Andrea Ghez
2018 – Donna Strickland
1963 – Maria Goeppert-Mayer
1903 – Marie Sklodowska-Curie
Physiology or Medicine
2023 – Katalin Karikó
2015 – Youyou Tu
2014 – May-Britt Moser
2009 – Elizabeth H. Blackburn
2009 – Carol W. Greider
2008 – Françoise Barré-Sinoussi
2004 – Linda B. Buck
1995 – Christiane Nüsslein-Volhard
1988 – Gertrude B. Elion
1986 – Rita Levi-Montalcini
1983 – Barbara McClintock
1977 – Rosalyn Yalow
1947 – Gerty Cori
Fields Medal
2022 – Maryna Viazovska, a Ukrainian mathematician, awarded in part for her proof that the E8 lattice provides the densest packing of identical spheres in eight dimensions.
2014 – Maryam Mirzakhani (1977–2017), the first woman to have won the prize, was an Iranian mathematician and a professor of mathematics at Stanford University.
Statistics
Statistics are used to indicate the disadvantages faced by women in science, and also to track positive changes in employment opportunities and incomes for women in science.
Situation in the 1990s
Women appear to do less well than men (in terms of degree, rank, and salary) in the fields that have been traditionally dominated by women, such as nursing. In 1991 women earned 91% of the PhDs in nursing, and men held 4% of full professorships in nursing. In the field of psychology, where women earn the majority of PhDs, women do not fill the majority of high-ranking positions in that field.
Women's lower salaries in the scientific community are also reflected in statistics. According to data from 1993, the median salaries of female scientists and engineers with doctoral degrees were 20% lower than those of men. This can be partly explained by the lower participation of women in high-ranking scientific fields and positions and by the female majority in low-paid fields and positions. However, even within the same scientific field, women are typically paid 15–17% less than men. In addition to the gender gap, there were also salary differences between ethnic groups: African-American women with more years of experience earned 3.4% less than European-American women with similar skills, while Asian-American women engineers out-earned both.
Women are also under-represented in the sciences compared to their numbers in the overall working population. African-American women make up 11% of the workforce, and 3% of them are employed as scientists and engineers; Hispanics make up 8% of total US workers, of whom 3% are scientists and engineers. Native American participation is too small to be measured statistically.
Women tend to earn less than men in almost all industries, including government and academia, and are less likely to be hired into the highest-paid positions. The data showing the differences in salaries, ranks and overall success between the genders are often claimed to be a result of women's lack of professional experience. The rate of women's professional achievement is nonetheless increasing. In 1996, salaries for women in professional fields rose from 85% to 95% of those of men with similar skills and jobs, and young women between the ages of 27 and 33 earned 98% as much as their male peers. Across the total workforce of the United States, women earn 74% as much as their male counterparts (in the 1970s the figure was 59%).
Claudia Goldin of Harvard concludes in "A Grand Gender Convergence: Its Last Chapter": "The gender gap in pay would be considerably reduced and might vanish altogether if firms did not have an incentive to disproportionately reward individuals who labored long hours and worked particular hours."
Research on women's participation in the "hard" sciences, such as physics and computer science, speaks of the "leaky pipeline" model, in which the proportion of women "on track" to potentially becoming top scientists falls off at every step of the way, from getting interested in science and maths in elementary school, through doctorate, postdoctoral, and career steps. The leaky pipeline also applies in other fields. In biology, for instance, women in the United States have been earning master's degrees in the same numbers as men for two decades, yet fewer women obtain PhDs, and the number of women principal investigators has not risen.
One cause of this "leaky pipeline" lies in factors outside academia that coincide with women's continued education and career search, the most significant being family formation. As women continue their academic careers, they are often also stepping into new roles as wives and mothers, which traditionally require a large time commitment and presence outside work and which sit poorly with the demands of the tenure track. Women entering the family-formation period of their lives are 35% less likely than their male counterparts to pursue tenure-track positions after receiving their PhDs.
In the UK, women occupied over half the places in science-related higher education courses (science, medicine, maths, computer science and engineering) in 2004–05. However, gender differences varied from subject to subject: women substantially outnumbered men in biology and medicine, especially nursing, while men predominated in maths, physical sciences, computer science and engineering.
In the US, women with science or engineering doctoral degrees were predominantly employed in the education sector in 2001, with substantially fewer employed in business or industry than men. According to salary figures reported in 1991, women earned between 83.6 and 87.5 percent of a man's salary. An even greater disparity is the ongoing trend that women scientists with more experience are not as well compensated as their male counterparts: the salary of a male engineer continues to grow as he gains experience, whereas the female engineer sees her salary reach a plateau.
In the United States and many European countries, women who succeed in science tend to be graduates of single-sex schools. Women earn 54% of all bachelor's degrees in the United States, and 50% of those are in science. Just 9% of US physicists are women.
Overview of situation in 2013
In 2013, women accounted for 53% of the world's graduates at the bachelor's and master's level and 43% of successful PhD candidates but just 28% of researchers. Women graduates are consistently highly represented in the life sciences, often at over 50%. However, their representation in the other fields is inconsistent. In North America and much of Europe, few women graduate in physics, mathematics and computer science but, in other regions, the proportion of women may be close to parity in physics or mathematics. In engineering and computer sciences, women consistently trail men, a situation that is particularly acute in many high-income countries.
In decision-making
As of 2015, each step up the ladder of the scientific research system saw a drop in female participation until, at the highest echelons of scientific research and decision-making, there were very few women left. In 2015, the EU Commissioner for Research, Science and Innovation Carlos Moedas called attention to this phenomenon, adding that the majority of entrepreneurs in science and engineering tended to be men. In 2013, the German government coalition agreement introduced a 30% quota for women on company boards of directors.
In 2010, women made up 14% of university chancellors and vice-chancellors at Brazilian public universities; in South Africa the figure was 17% in 2011. As of 2015, in Argentina, women made up 16% of directors and vice-directors of national research centres and, in Mexico, 10% of directors of scientific research institutes at the National Autonomous University of Mexico. In the US, numbers are slightly higher, at 23%. In the EU, less than 16% of tertiary institutions were headed by a woman in 2010 and just 10% of universities. In 2011, at the main tertiary institution for the English-speaking Caribbean, the University of the West Indies, women represented 51% of lecturers but only 32% of senior lecturers and 26% of full professors. A 2018 review of the Royal Society of Britain by historians Aileen Fyfe and Camilla Mørk Røstvik produced similarly low numbers; women account for more than 25% of science-academy members in only a handful of countries, including Cuba, Panama and South Africa. As of 2015, the figure for Indonesia was 17%.
Women in life sciences
In life sciences, women researchers have achieved parity (45–55% of researchers) in many countries. In some, the balance even now tips in their favour. Six out of ten researchers are women in both medical and agricultural sciences in Belarus and New Zealand, for instance. More than two-thirds of researchers in medical sciences are women in El Salvador, Estonia, Kazakhstan, Latvia, the Philippines, Tajikistan, Ukraine and Venezuela.
There has been a steady increase in female graduates in agricultural sciences since the turn of the century. In sub-Saharan Africa, for instance, numbers of female graduates in agricultural science have been increasing steadily, with eight countries reporting a share of women graduates of 40% or more: Lesotho, Madagascar, Mozambique, Namibia, Sierra Leone, South Africa, Swaziland and Zimbabwe. The reasons for this surge are unclear, although one explanation may lie in the growing emphasis on national food security and the food industry. Another possible explanation is that women are highly represented in biotechnology. For example, in South Africa, women were underrepresented in engineering (16%) in 2004 and in 'natural scientific professions' (16%) in 2006 but made up 52% of employees working in biotechnology-related companies.
Women play an increasing role in environmental sciences and conservation biology. In fact, women played a foremost role in the development of these disciplines. Silent Spring by Rachel Carson proved an important impetus to the conservation movement and the later banning of chemical pesticides. Women played an important role in conservation biology including the famous work of Dian Fossey, who published the famous Gorillas in the Mist and Jane Goodall who studied primates in East Africa. Today women make up an increasing proportion of roles in the active conservation sector. A recent survey of those working in the Wildlife Trusts in the U.K., the leading conservation organisation in England, found that there are nearly as many women as men in practical conservation roles.
In engineering and related fields
Women are consistently underrepresented in engineering and related fields. In Israel, for instance, where 28% of senior academic staff are women, the proportions are far lower in engineering (14%), the physical sciences (11%), and mathematics and computer sciences (10%), while women dominate education (52%) and paramedical occupations (63%). In Japan and the Republic of Korea, women represent just 5% and 10% of engineers, respectively.
Women pursuing careers in STEM fields often face gender disparities in the workplace, especially in science and engineering. Although it has become more common for women to pursue undergraduate degrees in science, they continue to be disadvantaged in salary and in access to higher-ranking positions; men, for example, are more likely to be selected for a given position than women.
In Europe and North America, the number of female graduates in engineering, physics, mathematics and computer science is generally low. Women make up just 19% of engineers in Canada, Germany and the US and 22% in Finland, for example. However, 50% of engineering graduates are women in Cyprus, 38% in Denmark and 36% in the Russian Federation, for instance.
In many cases, engineering has lost ground to other sciences, including agriculture. The case of New Zealand is fairly typical. Here, women jumped from representing 39% to 70% of agricultural graduates between 2000 and 2012, continued to dominate health (80–78%) but ceded ground in science (43–39%) and engineering (33–27%).
In a number of developing countries, there is a sizable proportion of women engineers. At least three out of ten engineers are women, for instance, in Costa Rica, Vietnam and the United Arab Emirates (31%), Algeria (32%), Mozambique (34%), Tunisia (41%) and Brunei Darussalam (42%). In Malaysia (50%) and Oman (53%), women are on a par with men. Of the 13 sub-Saharan countries reporting data, seven have observed substantial increases (more than 5%) in women engineers since 2000, namely: Benin, Burundi, Eritrea, Ethiopia, Madagascar, Mozambique and Namibia.
Of the seven Arab countries reporting data, four observe a steady percentage or an increase in female engineers (Morocco, Oman, Palestine and Saudi Arabia). In the United Arab Emirates, the government has made it a priority to develop a knowledge economy, having recognized the need for a strong human resource base in science, technology and engineering. With just 1% of the labour force being Emirati, it is also concerned about the low percentage of Emirati citizens employed in key industries. As a result, it has introduced policies promoting the training and employment of Emirati citizens, as well as a greater participation of Emirati women in the labour force. Emirati female engineering students have said that they are attracted to a career in engineering for reasons of financial independence, the high social status associated with this field, the opportunity to engage in creative and challenging projects and the wide range of career opportunities.
An analysis of computer science shows a steady decrease in female graduates since 2000 that is particularly marked in high-income countries. Between 2000 and 2012, the share of women graduates in computer science slipped in Australia, New Zealand, the Republic of Korea and US. In Latin America and the Caribbean, the share of women graduates in computer science dropped by between 2 and 13 percentage points over this period for all countries reporting data.
There are exceptions. In Denmark, the proportion of female graduates in computer science increased from 15% to 24% between 2000 and 2012 and Germany saw an increase from 10% to 17%. These are still very low levels. Figures are higher in many emerging economies. In Turkey, for instance, the proportion of women graduating in computer science rose from a relatively high 29% to 33% between 2000 and 2012.
The Malaysian information technology (IT) sector is made up equally of women and men, with large numbers of women employed as university professors and in the private sector. This is a product of two historical trends: the predominance of women in the Malaysian electronics industry, the precursor to the IT industry, and the national push to achieve a "pan-Malayan" culture beyond the three ethnic groups of Indian, Chinese and Malay. Government support for the education of all three groups is available on a quota basis and, since few Malay men are interested in IT, this leaves more room for women. Additionally, families tend to be supportive of their daughters' entry into this prestigious and highly remunerated industry, in the interests of upward social mobility. Malaysia's push to develop an endogenous research culture should deepen this trend.
In India, the substantial increase in women undergraduates in engineering may be indicative of a change in the 'masculine' perception of engineering in the country. It is also a product of interest on the part of parents, since their daughters will be assured of employment as the field expands, as well as an advantageous marriage. Other factors include the 'friendly' image of engineering in India and the easy access to engineering education resulting from the increase in the number of women's engineering colleges over the last two decades.
In space
While women have made huge strides in STEM fields, they remain underrepresented, and one of the areas where the gap is widest is space flight. Of the 556 people who have traveled to space, only 65, just under 12%, have been women.
In the 1960s, the American space program was taking off, but women were not considered for it because astronauts were required to be military pilots, a profession closed to women at the time. There were other "practical" objections as well: according to General Don Flickinger of the United States Air Force, there was difficulty "designing and fitting a space suit to accommodate their particular biological needs and functions."
During the early 1960s, the first American astronauts, nicknamed the Mercury Seven, were training. At the same time, William Randolph Lovelace II was interested to see whether women could pass the same training that the Mercury Seven were undergoing. Lovelace recruited thirteen female pilots, called the "Mercury 13", and put them through the same tests as the male astronauts. The women in fact performed better on these tests than the men of the Mercury Seven, but this did not convince NASA officials to allow women in space. In response, congressional hearings were held to investigate discrimination against women in the program. One of the women who testified was Jerrie Cobb, the first woman to pass Lovelace's tests, who said: "I find it a little ridiculous when I read in a newspaper that there is a place called Chimp College in New Mexico where they are training chimpanzees for space flight, one a female named Glenda. I think it would be at least as important to let the women undergo this training for space flight." NASA also had representatives present, notably astronauts John Glenn and Scott Carpenter, who testified that women were not suited to the space program. Ultimately, no action came from the hearings, and NASA did not put a woman in space until 1983.
Even though the United States did not allow women in space during the 1960s or 1970s, other countries did. Valentina Tereshkova, a cosmonaut from the Soviet Union, became the first woman to fly in space. A textile worker before joining the space program, she had no piloting experience when she flew on Vostok 6 in 1963. Although she successfully orbited the Earth 48 times, the next woman did not fly to space until almost twenty years later.
Sally Ride was the third woman to go to space and the first American woman in space. In 1978, Ride and five other women were accepted into the first class of astronauts that allowed women. In 1983, Ride became the first American woman in space when she flew on the Challenger for the STS-7 mission.
NASA has been more inclusive in recent years. The number of women in NASA's astronaut classes has steadily risen since the first class that allowed women in 1978. The most recent class was 45% women, and the class before was 50%. In 2019, the first all-female spacewalk was completed at the International Space Station.
Regional trends as of 2013
The global figures mask wide disparities from one region to another. In Southeast Europe, for instance, women researchers have obtained parity and, at 44%, are on the verge of doing so in Central Asia and Latin America and the Caribbean. In the European Union, on the other hand, just one in three (33%) researchers is a woman, compared to 37% in the Arab world. Women are also better represented in sub-Saharan Africa (30%) than in South Asia (17%).
There are also wide intraregional disparities. Women make up 52% of researchers in the Philippines and Thailand, for instance, and are close to parity in Malaysia and Vietnam, yet only one in three researchers is a woman in Indonesia and Singapore. In Japan and the Republic of Korea, two countries characterized by high researcher densities and technological sophistication, as few as 15% and 18% of researchers respectively are women. These are the lowest ratios among members of the Organisation for Economic Co-operation and Development. The Republic of Korea also has the widest gap among OECD members in remuneration between men and women researchers (39%). There is also a yawning gap in Japan (29%).
Latin America and the Caribbean
Latin America has some of the world's highest rates of women studying scientific fields; it also shares with the Caribbean one of the highest proportions of female researchers: 44%. Of the 12 countries reporting data for the years 2010–2013, seven have achieved gender parity, or even dominate research: Bolivia (63%), Venezuela (56%), Argentina (53%), Paraguay (52%), Uruguay (49%), Brazil (48%) and Guatemala (45%). Costa Rica is on the cusp (43%). Chile has the lowest score among countries for which there are recent data (31%). The Caribbean paints a similar picture, with Cuba having achieved gender parity (47%) and Trinidad and Tobago on 44%. Recent data on women's participation in industrial research are available for those countries with the most developed national innovation systems, with the exception of Brazil and Cuba: Uruguay (47%), Argentina (29%), Colombia and Chile (26%).
As in most other regions, the great majority of health graduates are women (60–85%). Women are also strongly represented in science. More than 40% of science graduates are women in each of Argentina, Colombia, Ecuador, El Salvador, Mexico, Panama and Uruguay. The Caribbean paints a similar picture, with women graduates in science being on a par with men or dominating this field in Barbados, Cuba, Dominican Republic and Trinidad and Tobago.
In engineering, women make up over 30% of the graduate population in six Latin American countries (Argentina, Colombia, Costa Rica, Honduras, Panama and Uruguay) and one Caribbean country, the Dominican Republic. There has been a decrease in the number of women engineering graduates in Argentina, Chile and Honduras.
The participation of women in science has consistently dropped since the turn of the century. This trend has been observed in all sectors of the larger economies: Argentina, Brazil, Chile and Colombia. Mexico is a notable exception, having recorded a slight increase. Some of the decrease may be attributed to women transferring to agricultural sciences in these countries. Another negative trend is the drop in female doctoral students and in the labour force. Of those countries reporting data, the majority signal a significant drop of 10–20 percentage points in the transition from master's to doctoral graduates.
Eastern Europe, West and Central Asia
Most countries in Eastern Europe, West and Central Asia have attained gender parity in research (Armenia, Azerbaijan, Georgia, Kazakhstan, Mongolia and Ukraine) or are on the brink of doing so (Kyrgyzstan and Uzbekistan). This trend is reflected in tertiary education, with some exceptions in engineering and computer science. Although Belarus and the Russian Federation have seen a drop over the past decade, women still represented 41% of researchers in 2013. In the former Soviet states, women are also very present in the business enterprise sector: Bosnia and Herzegovina (59%), Azerbaijan (57%), Kazakhstan (50%), Mongolia (48%), Latvia (48%), Serbia (46%), Croatia and Bulgaria (43%), Ukraine and Uzbekistan (40%), Romania and Montenegro (38%), Belarus (37%), Russian Federation (37%).
One in three researchers is a woman in Turkey (36%) and Tajikistan (34%). Participation rates are lower in Iran (26%) and Israel (21%), although Israeli women represent 28% of senior academic staff. At university, Israeli women dominate medical sciences (63%) but only a minority study engineering (14%), physical sciences (11%), mathematics and computer science (10%). There has been an interesting evolution in Iran. Whereas the share of female PhD graduates in health remained stable at 38–39% between 2007 and 2012, it rose in all three other broad fields. Most spectacular was the leap in female PhD graduates in agricultural sciences from 4% to 33% but there was also a marked progression in science (from 28% to 39%) and engineering (from 8% to 16%).
Southeast Europe
With the exception of Greece, all the countries of Southeast Europe were once part of the Soviet bloc. Some 49% of researchers in these countries are women (compared to 37% in Greece in 2011). This high proportion is considered a legacy of the consistent investment in education by the Socialist governments in place until the early 1990s, including that of the former Yugoslavia. Moreover, the participation of female researchers is holding steady or increasing in much of the region, with representation broadly even across the four sectors of government, business, higher education and non-profit. In most countries, women tend to be on a par with men among tertiary graduates in science. Between 70% and 85% of graduates are women in health, less than 40% in agriculture and between 20% and 30% in engineering. Albania has seen a considerable increase in the share of its women graduates in engineering and agriculture.
European Union
Women make up 33% of researchers overall in the European Union (EU), slightly more than their representation in science (32%). Women constitute 40% of researchers in higher education, 40% in government and 19% in the private sector. The proportion of female researchers has been increasing at a faster rate than that of men (5.1% annually over 2002–2009, compared with 3.3% for men), which is also true of their participation among scientists and engineers (up 5.4% annually between 2002 and 2010, compared with 3.1% for men).
Despite these gains, women's academic careers in Europe remain characterized by strong vertical and horizontal segregation. In 2010, although female students (55%) and graduates (59%) outnumbered male students and graduates, men outnumbered women at the PhD level and beyond (albeit by a small margin). Further along the research career, women represented 44% of grade C academic staff, 37% of grade B academic staff and 20% of grade A academic staff. These trends are intensified in science, where women make up 31% of the student population at the tertiary level, 38% of PhD students and 35% of PhD graduates. At the faculty level, they make up 32% of academic grade C personnel, 23% of grade B and 11% of grade A. The proportion of women among full professors is lowest in engineering and technology, at 7.9%. With respect to representation in science decision-making, in 2010 15.5% of higher education institutions were headed by women and 10% of universities had a female rector.
Membership of science boards also remains predominantly male, with women making up 36% of board members. Since the mid-2000s, the EU has engaged in a major effort to integrate female researchers and gender research into its research and innovation strategy. Increases in women's representation across the scientific fields indicate that this effort has met with some success; however, the continued underrepresentation of women at the top levels of faculties, management and science decision-making indicates that more work needs to be done. The EU is addressing this through a gender-equality strategy and cross-cutting mandate in Horizon 2020, its research and innovation funding programme for 2014–2020.
Australia, New Zealand and USA
In 2013, women made up the majority of PhD graduates in fields related to health in Australia (63%), New Zealand (58%) and the United States of America (73%), and the same can be said of agriculture in New Zealand's case (73%). Women have also achieved parity in agriculture in Australia (50%) and the United States (44%). Just one in five engineering graduates in the latter two countries is a woman, a situation that has not changed over the past decade. In New Zealand, women jumped from constituting 39% to 70% of agricultural graduates (all levels) between 2000 and 2012 but ceded ground in science (43–39%), engineering (33–27%) and health (80–78%). As for Canada, it has not reported sex-disaggregated data for women graduates in science and engineering in recent years. Moreover, none of the four countries mentioned here has reported recent data on the share of female researchers.
South Asia
South Asia is the region where women make up the smallest proportion of researchers: 17%. This is 13 percentage points below sub-Saharan Africa. Of those countries in South Asia reporting data for 2009–2013, Nepal has the lowest representation of all (in head counts), at 8% (2010), a substantial drop from 15% in 2002. In 2013, only 14% of researchers (in full-time equivalents) were women in the region's most populous country, India, down slightly from 15% in 2009. The percentage of female researchers is highest in Sri Lanka (39%), followed by Pakistan: 24% in 2009, 31% in 2013. There are no recent data available for Afghanistan or Bangladesh.
Women are most present in the private non-profit sector – they make up 60% of employees in Sri Lanka – followed by the academic sector: 30% of Pakistani and 42% of Sri Lankan female researchers. Women tend to be less present in the government sector and least likely to be employed in the business sector, accounting for 23% of employees in Sri Lanka, 11% in India and just 5% in Nepal. Women have achieved parity in science in both Sri Lanka and Bangladesh but are less likely to undertake research in engineering. They represent 17% of the research pool in Bangladesh and 29% in Sri Lanka. Many Sri Lankan women have followed the global trend of opting for a career in agricultural sciences (54%) and they have also achieved parity in health and welfare. In Bangladesh, just over 30% choose agricultural sciences and health, which goes against the global trend. Although Bangladesh still has progress to make, the share of women in each scientific field has increased steadily over the past decade.
Southeast Asia
Southeast Asia presents a different picture entirely, with women basically on a par with men in some countries: they make up 52% of researchers in the Philippines and Thailand, for example. Other countries are close to parity, such as Malaysia and Vietnam, whereas Indonesia and Singapore are still around the 30% mark. Cambodia trails its neighbours at 20%. Female researchers in the region are spread fairly equally across the sectors of participation, with the exception of the private sector, where they make up 30% or less of researchers in most countries.
The proportion of women tertiary graduates reflects these trends, with high percentages of women in science in Brunei Darussalam, Malaysia, Myanmar and the Philippines (around 60%) and a low of 10% in Cambodia. Women make up the majority of graduates in health sciences, from 60% in Laos to 81% in Myanmar – Vietnam being an exception at 42%. Women graduates are on a par with men in agriculture but less present in engineering: Vietnam (31%), the Philippines (30%) and Malaysia (39%); here, the exception is Myanmar, at 65%. In the Republic of Korea, women make up about 40% of graduates in science and agriculture and 71% of graduates in health sciences but only 18% of female researchers overall. This represents a loss in the investment made in educating girls and women up through tertiary education, a result of traditional views of women's role in society and in the home. Kim and Moon (2011) remark on the tendency of Korean women to withdraw from the labour force to take care of children and assume family responsibilities, calling it a 'domestic brain drain'.
Women remain very much a minority in Japanese science (15% in 2013), although the situation has improved slightly (13% in 2008) since the government fixed a target in 2006 of raising the ratio of female researchers to 25%. Calculated on the basis of the current number of doctoral students, the government hopes to obtain a 20% share of women in science, 15% in engineering and 30% in agriculture and health by the end of the current Basic Plan for Science and Technology in 2016. In 2013, Japanese female researchers were most common in the public sector in health and agriculture, where they represented 29% of academics and 20% of government researchers. In the business sector, just 8% of researchers were women (in head counts), compared to 25% in the academic sector. In other public research institutions, women accounted for 16% of researchers. One of the main thrusts of Abenomics, Japan's current growth strategy, is to enhance the socio-economic role of women. Consequently, the selection criteria for most large university grants now take into account the proportion of women among teaching staff and researchers.
The low ratio of women researchers in Japan and the Republic of Korea, which both have some of the highest researcher densities in the world, brings down Southeast Asia's average to 22.5% for the share of women among researchers in the region.
Arab States
At 37%, the share of female researchers in the Arab States compares well with other regions. The countries with the highest proportion of female researchers are Bahrain and Sudan, at around 40%; Jordan, Libya, Oman, Palestine and Qatar have shares in the low twenties. Saudi Arabia reports the lowest participation of female researchers, at 1.4%, although this figure covers only the King Abdulaziz City for Science and Technology; women nevertheless make up the majority of Saudi tertiary graduates. Female researchers in the region are primarily employed in government research institutes, with some countries also seeing high participation of women in private non-profit organizations and universities. With the exception of Sudan (40%) and Palestine (35%), fewer than one in four researchers in the business enterprise sector is a woman; for half of the countries reporting data, there are barely any women at all employed in this sector.
Despite these variable numbers, the percentage of female tertiary-level graduates in science and engineering is very high across the region, which indicates a substantial drop between graduation and employment in research. Women make up half or more of science graduates in all countries but Sudan, and over 45% of agriculture graduates in eight of the 15 countries reporting data, namely Algeria, Egypt, Jordan, Lebanon, Sudan, Syria, Tunisia and the United Arab Emirates. In engineering, women make up over 70% of graduates in Oman, with rates of 25–38% in the majority of the other countries, which is high in comparison with other regions.
The participation of women is somewhat lower in health than in other regions, possibly on account of cultural norms restricting interactions between males and females. Iraq and Oman have the lowest percentages (mid-30s), whereas Iran, Jordan, Kuwait, Palestine and Saudi Arabia are at gender parity in this field. The United Arab Emirates and Bahrain have the highest rates of all: 83% and 84%.
Once Arab women scientists and engineers graduate, they may come up against barriers to finding gainful employment. These include a misalignment between university programmes and labour-market demand (a phenomenon which also affects men), a lack of awareness about what a career in their chosen field entails, family bias against working in mixed-gender environments and a lack of female role models.
One of the countries with the smallest female labour force is developing technical and vocational education for girls as part of a wider scheme to reduce dependence on foreign labour. By 2017, the Technical and Vocational Training Corporation of Saudi Arabia is to have constructed 50 technical colleges, 50 girls' higher technical institutes and 180 industrial secondary institutes. The plan is to create training placements for about 500,000 students, half of them girls. Boys and girls will be trained in vocational professions that include information technology, medical equipment handling, plumbing, electricity and mechanics.
Sub-Saharan Africa
Just under one in three (30%) researchers in sub-Saharan Africa is a woman. Much of sub-Saharan Africa is seeing solid gains in the share of women among tertiary graduates in scientific fields. In two of the top four countries for women's representation in science, women graduates are part of very small cohorts, however: they make up 54% of Lesotho's 47 tertiary graduates in science and 60% of those in Namibia's graduating class of 149. South Africa and Zimbabwe, which have larger graduate populations in science, have achieved parity, with 49% and 47% respectively. The next grouping clusters seven countries poised at around 35–40% (Angola, Burundi, Eritrea, Liberia, Madagascar, Mozambique and Rwanda). The rest are grouped around 30% or below (Benin, Ethiopia, Ghana, Swaziland and Uganda). Burkina Faso ranks lowest, with women making up 18% of its science graduates.
Female representation in engineering is fairly high in sub-Saharan Africa in comparison with other regions. In Mozambique and South Africa, for instance, women make up more than 34% and 28% of engineering graduates, respectively. Numbers of female graduates in agricultural science have been increasing steadily across the continent, with eight countries reporting the share of women graduates of 40% or more (Lesotho, Madagascar, Mozambique, Namibia, Sierra Leone, South Africa, Swaziland and Zimbabwe). In health, this rate ranges from 26% and 27% in Benin and Eritrea to 94% in Namibia.
Of note is that women account for a relatively high proportion of researchers employed in the business enterprise sector in South Africa (35%), Kenya (34%), Botswana and Namibia (33%) and Zambia (31%). Female participation in industrial research is lower in Uganda (21%), Ethiopia (15%) and Mali (12%).
Lack of agency and representation
Social pressures that both demand and punish femininity
From the twentieth century to the present day, more and more women have been acknowledged for their work in science. However, women often find themselves at odds with the expectations placed on them in relation to their scientific work. In 1968, for example, James Watson questioned scientist Rosalind Franklin's place in the industry, claiming that "the best place for a feminist was in another person's lab", most often a man's research lab. Women were, and still are, often critiqued for their overall presentation; Franklin, for instance, was seen as lacking femininity because she did not wear lipstick or revealing clothing.
Since on average most of a woman's colleagues in science are men who do not see her as a true social peer, she will also find herself left out of opportunities to discuss possible research outside of the laboratory. In her book Has Feminism Changed Science?, Londa Schiebinger notes that men would discuss their research outside of the lab, but that such conversation was preceded by culturally "masculine" small-talk topics which, whether intentionally or not, excluded women raised in their culture's feminine gender role. This exclusion of many women from after-hours work discussions produced a work environment in which men and women in science were more separated, with women conversing about their current findings and theories mainly with other women. Ultimately, the women's work was devalued because no male scientist had been involved in the overall research and analysis.
According to Oxford University Press, the inequality toward women is "endorsed within cultures and entrenched within institutions [that] hold power to reproduce that inequality". There are various gendered barriers in social networks that prevent women from working in male-dominated fields and top management jobs. Social networks are based on cultural beliefs such as schemas and stereotypes. According to social psychology studies, top management jobs are more likely to have incumbent schemas that favor "an achievement-oriented aggressiveness and emotional toughness that is distinctly male in character". Gender stereotypes of feminine style set by men assume women to be conforming and submissive to male culture, creating a sense that women are unqualified for top management jobs. When women try to prove their competence and power, however, they often face obstacles: they are likely to be seen as dislikable and untrustworthy even when they excel at "masculine" tasks, and their achievements are likely to be dismissed or discredited. These "untrustworthy, dislikable women" may well have been denied achievement because of men's fear of a woman overtaking their management positions. Social networks and gender stereotypes thus produce many injustices that women experience in their workplace, as well as various obstacles they encounter when trying to advance in male-dominated and top management jobs. Women in professions like science, technology, and other related industries are likely to encounter these gendered barriers in their careers. Based on meritocratic explanations of gender inequality, "as long as the people accept the mechanisms that produce unequal outcomes", all the outcomes will be legitimated in society. When women try to counter the stereotypes and discrimination by becoming "competent, integrated, well-liked", society is more likely to read these impressions as selfishness or "being a whiner". However, there have been positive attempts to reduce gender discrimination in the public domain. For example, in the United States, Title IX of the Education Amendments of 1972 provides opportunities for women to access a wide range of education programs and activities by prohibiting sex discrimination. The law states: "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subject to discrimination under any educational program or activity receiving federal financial assistance." Even with laws prohibiting gender discrimination, however, society and social institutions continue to minimize women's competencies and accomplishments, especially in the workforce, by dismissing or discrediting their achievements as stated above.
Underrepresentation of homosexual and bi women, and gender nonconformists in STEM
While there has been a push to encourage more women to participate in science, there is less outreach to lesbian, bi, or gender nonconforming women, and gender nonconforming people more broadly. Due to the lack of data and statistics on the involvement of LGBTQ people in STEM fields, it is unknown to what exact degree lesbian and bisexual women and gender nonconforming people (transgender, nonbinary/agender, or those who eschew the gender system altogether) are potentially even more underrepresented than their straight peers, but a general lack of out lesbian and bi women in STEM has been noted. Reasons suggested for the under-representation of same-sex attracted women and gender nonconforming people in STEM fields include a lack of role models in K–12 education, the desire of some transgender girls and women to adopt traditional heteronormative gender roles, employment discrimination, and the possibility of sexual harassment in the workplace. Historically, women who accepted STEM research positions for the government or the military often remained in the closet due to a lack of federal protections, or because homosexual or gender nonconforming expression was criminalized in their country. A notable example is Sally Ride, a physicist, the first American female astronaut, and a lesbian. Ride did not publicly reveal her sexuality during her lifetime; at her own direction, her sexual orientation was disclosed in her obituary in 2012. She is known as the first (and youngest) American woman to enter space, as well as for founding her own company, Sally Ride Science, which encourages young girls to enter STEM fields. She kept her sexuality to herself because she was familiar with the male-dominated NASA's policies against homosexuality at the time of her space travel. Sally Ride's legacy continues, as her company is still working to increase the participation of young girls and women in STEM fields.
In a nationwide study of LGBTQA employees in STEM fields in the United States, same-sex attracted and gender nonconforming women in engineering, earth sciences, and mathematics reported that they were less likely to be out in the workplace. In general, LGBTQA people in this survey reported that the more women or feminine-identified people worked in their labs, the more accepting and safe the work environment was. In another study of over 30,000 LGBT employees in STEM-related federal agencies in the United States, queer women in these agencies reported feeling isolated in the workplace and having to work harder than their gender conforming male colleagues. This isolation and overachievement remained constant as they earned supervisory positions and worked their way up the ladder. Gender nonconforming people in physics, particularly those who identified as trans women in physics programs and labs, felt the most isolated and perceived the most hostility.
Organizations such as Lesbians Who Tech, National Organization of Gay and Lesbian Scientists and Technical Professionals (NOGLSTP), Out in Science, Technology, Engineering and Mathematics (OSTEM), Pride in STEM, and House of STEM currently provide networking and mentoring opportunities for lesbian girls and women and LGBT people interested in or currently working in STEM fields. These organizations also advocate for the rights of lesbian and bi women and gender nonconformists in STEM in education and the workplace.
Reasons for disadvantages
Margaret Rossiter, an American historian of science, offered three concepts to explain the statistical data and how these factors have disadvantaged women in science. The first concept is hierarchical segregation: the higher the level or rank of power and prestige, the smaller the proportion of women participating. Hierarchical segregation means that there are fewer women at the higher levels of both academia and industry. Based on data collected in 1982, women earned 54 percent of all bachelor's degrees in the United States, with 50 percent of these in science, and the source indicated that this number increased almost every year. There were fewer women at the graduate level; they earned 40 percent of all doctorates, with 31 percent of these in science and engineering.
The second concept in Rossiter's explanation is territorial segregation. The term refers to how female employment is often clustered in specific industries or categories within industries: women stayed at home or took employment in "feminine" fields while men left the home to work. Although nearly half of the civilian work force is female, women still make up the majority of low-paid jobs or jobs that society considers feminine. Statistics show that 60 percent of white professional women are nurses, daycare workers, or schoolteachers. Territorial disparities in science were especially evident in the 1920s and 1930s, when different fields of science were divided between men and women.
Researchers have collected data on many differences between women and men in science. Rossiter found that in 1966, thirty-eight percent of female scientists held master's degrees compared to twenty-six percent of male scientists, but large proportions of female scientists worked in environmental and nonprofit organizations. During the late 1960s and 1970s, equal-rights legislation made the number of female scientists rise dramatically; statistics from the National Science Board (NSB) document the change. The share of science degrees awarded to women rose from seven percent in 1970 to twenty-four percent in 1985. In 1975 only 385 women received bachelor's degrees in engineering, compared to 11,000 women in 1985. Elizabeth Finkel claims that even as the number of women participating in scientific fields increases, their opportunities remain limited. Another researcher, Harriet Zuckerman, claims that when a woman and a man have similar abilities for a job, the probability of the woman getting the job is lower. Finkel agrees, saying, "In general, while women and men seem to be completing doctorates with similar credentials and experience, the opposition and rewards they find are not comparable. Women tend to be treated with less salary and status; many policy makers notice this phenomenon and try to rectify the unfair situation for women participating in scientific fields."
Societal disadvantages
Despite women's tendency to perform better than men academically, several factors – stereotyping, lack of information, and family influence – have been found to affect women's involvement in science. Stereotyping has an effect because people associate characteristics such as nurturing, kind, and warm, or characteristics like strong and powerful, with a particular gender; these associations lead people to assume that certain jobs are more suitable to a particular gender. Lack of information is something that many institutions have worked to improve over the years, for example through programs such as the IFAC project (Information for a choice: empowering women through learning for scientific and technological career paths), which investigated the low participation of women in science and technology fields from high school to university level. The idea is that if women are fully informed of their career choices and employability, they will be more inclined to pursue STEM jobs. Not all efforts have been as successful: the "Science: it's a girl thing" campaign, which has since been withdrawn, received backlash for suggesting that women must partake in "girly" or "feminine" activities. Women also lack female role models in science. Family influence depends on education level, economic status, and belief system. The education level of a student's parents matters because people with higher education often hold different opinions on the importance of education than those without; a parent can also be an influence in the sense of wanting their children to follow in their footsteps and pursue a similar occupation – for women especially, it has been found that a mother's line of work tends to correlate with her daughter's. Economic status can influence what kind of higher education a student pursues, depending on whether they are work-bound or college-bound; a work-bound student may choose a shorter career path in order to begin earning money quickly or due to lack of time. The belief system of a household can also have a large impact on women, depending on the family's religious or cultural viewpoints: some countries still place regulations on women's occupation, clothing, and curfew that limit their career choices. Parental influence is also relevant because people tend to want their children to fulfill what they could not have as a child. Studies show that women are at a particular disadvantage because not only must they overcome societal norms, they also have to outperform men for the same recognition.
Contemporary advocacy and developments
Efforts to increase participation
A number of organizations have been set up to combat the stereotyping that may discourage girls from careers in these areas. In the UK, the WISE Campaign (Women into Science, Engineering and Construction) and the UKRC (The UK Resource Centre for Women in SET) are collaborating to ensure that industry, academia and education are all aware of the importance of challenging the traditional approaches to careers advice and recruitment which mean that some of the best brains in the country are lost to science. The UKRC and other women's networks provide female role models, resources and support for activities that promote science to girls and women. The Women's Engineering Society, a professional association in the UK, has been supporting women in engineering and science since 1919. In computing, the British Computer Society group BCSWomen is active in encouraging girls to consider computing careers and in supporting women in the computing workforce.
In the United States, the Association for Women in Science is one of the most prominent organizations for professional women in science. In 2011, the Scientista Foundation was created to empower pre-professional college and graduate women in science, technology, engineering and mathematics (STEM) to stay on the career track. There are also several organizations focused on increasing mentorship from a younger age. One of the best known groups is Science Club for Girls, which pairs undergraduate mentors with high school and middle school mentees. This model of pairing undergraduate college mentors with younger students is quite popular. In addition, many young women are creating programs to boost participation in STEM at a younger level, either through conferences or competitions.
In efforts to make women scientists more visible to the general public, the Grolier Club in New York hosted a "landmark exhibition" titled "Extraordinary Women in Science & Medicine: Four Centuries of Achievement", showcasing the lives and works of 32 women scientists, in 2013. The National Institute for Occupational Safety and Health (NIOSH) developed a video series highlighting the stories of female researchers at NIOSH. Each of the women featured in the videos shares her journey into science, technology, engineering, or math (STEM) and offers encouragement to aspiring scientists. NIOSH also partners with external organizations to introduce individuals to scientific disciplines and funds several science-based training programs across the country.
Creative Resilience: Art by Women in Science is a multimedia exhibition and accompanying publication produced in 2021 by the Gender Section of the United Nations Educational, Scientific and Cultural Organization (UNESCO). The project aims to give visibility to women, both professionals and university students, working in science, technology, engineering and mathematics (STEM). Featuring short biographical information and graphic reproductions of their artworks dealing with the Covid-19 pandemic, all accessible online, the project provides a platform for women scientists to express their experiences, insights, and creative responses to the pandemic.
Kizzmekia Corbett, recognized as one of the leading vaccine researchers in the United States, is dedicated to promoting diversity and equity within her field. She is part of a team at the National Institutes of Health that developed one of the COVID-19 vaccines, which is more than 90% effective. Given the disproportionate impact of COVID-19 on African Americans and the long history of African American and female scientists being underrecognized, Corbett's contributions are particularly significant.
In the media
In 2013, journalist Christie Aschwanden noted that a type of media coverage of women scientists that "treats its subject's sex as her most defining detail" was still prevalent. She proposed a checklist, the "Finkbeiner test", to help avoid this approach. It was cited in coverage of a much-criticized 2013 New York Times obituary of rocket scientist Yvonne Brill that began with the words: "She made a mean beef stroganoff". Women are also often poorly portrayed in film, and the misrepresentation of women scientists in film, television and books can influence children to engage in gender stereotyping. This was seen in a 2007 meta-analysis conducted by Jocelyn Steinke and colleagues from Western Michigan University in which, after elementary school students took a Draw-a-Scientist Test, only 28 girls out of 4,000 participants drew female scientists.
Notable controversies and developments
A study conducted at Lund University in 2010 and 2011 analysed the genders of invited contributors to News & Views in Nature and Perspectives in Science. It found that 3.8% of the Earth and environmental science contributions to News & Views were written by women even while the field was estimated to be 16–20% female in the United States. Nature responded by suggesting that, worldwide, a significantly lower number of Earth scientists were women, but nevertheless committed to address any disparity.
In 2012, a journal article published in Proceedings of the National Academy of Sciences (PNAS) reported a gender bias among science faculty. Faculty were asked to review a resume from a hypothetical student and report how likely they would be to hire or mentor that student, as well as what they would offer as a starting salary. Two resumes were distributed randomly to the faculty, differing only in the name at the top (John or Jennifer). The male student was rated as significantly more competent, more likely to be hired, and more likely to be mentored, and the median starting salary offered to him was more than $3,000 higher than that offered to the female student. Both male and female faculty exhibited this gender bias. This study suggests bias may partly explain the persistent deficit in the number of women at the highest levels of scientific fields. Another study reported that men are favored in some domains, such as biology tenure rates, but that the majority of domains were gender-fair; the authors interpreted this to suggest that the under-representation of women in the professorial ranks was not solely caused by sexist hiring, promotion, and remuneration. In April 2015, Williams and Ceci published a set of five national experiments showing that hypothetical female applicants were favored by faculty for assistant professorships over identically qualified men by a ratio of 2 to 1.
In 2014, a controversy over the depiction of pinup women on Rosetta project scientist Matt Taylor's shirt during a press conference raised questions of sexism within the European Space Agency. The shirt, which featured cartoon women with firearms, led to an outpouring of criticism and an apology after which Taylor "broke down in tears."
In 2015, stereotypes about women in science were directed at Fiona Ingleby, research fellow in evolution, behavior, and environment at the University of Sussex, and Megan Head, postdoctoral researcher at the Australian National University, when they submitted a paper analyzing the progression of PhD graduates to postdoctoral positions in the life sciences to the journal PLOS ONE. The authors received an email on 27 March informing them that their paper had been rejected due to its poor quality. The email contained comments from an anonymous reviewer, including the suggestion that male authors be added in order to improve the quality of the science and to ensure that incorrect interpretations of the data were not included. Ingleby posted excerpts from the email on Twitter on 29 April, bringing the incident to the attention of the public and media. The editor was dismissed from the journal and the reviewer was removed from the list of potential reviewers. A spokesman from PLOS apologized to the authors and said they would be given the opportunity to have the paper reviewed again.
On 9 June 2015, Nobel prize winning biochemist Tim Hunt spoke at the World Conference of Science Journalists in Seoul. Prior to applauding the work of women scientists, he described emotional tension, saying "you fall in love with them, they fall in love with you, and when you criticise them they cry." Initially, his remarks were widely condemned and he was forced to resign from his position at University College London. However, multiple conference attendees gave accounts, including a partial transcript and a partial recording, maintaining that his comments were understood to be satirical before being taken out of context by the media.
In 2016 an article published in JAMA Dermatology reported a significant and dramatic downward trend in the number of NIH-funded woman investigators in the field of dermatology and that the gender gap between male and female NIH-funded dermatology investigators was widening. The article concluded that this disparity was likely due to a lack of institutional support for women investigators.
Problematic public statements
In January 2005, Harvard University President Lawrence Summers sparked controversy at a National Bureau of Economic Research (NBER) Conference on Diversifying the Science & Engineering Workforce. Summers offered his explanation for the shortage of women in senior posts in science and engineering, suggesting that the lower numbers of women in high-level science positions may in part be due to innate differences in abilities or preferences between men and women. Referring to behavioral genetics, he noted the generally greater variability among men (compared to women) on tests of cognitive abilities, leading to proportionally more men than women at both the lower and upper tails of the test score distributions. In his discussion, Summers said that "even small differences in the standard deviation [between genders] will translate into very large differences in the available pool substantially out [from the mean]". Summers concluded by saying: "So my best guess, to provoke you, of what's behind all of this is that the largest phenomenon, by far, is the general clash between people's legitimate family desires and employers' current desire for high power and high intensity, that in the special case of science and engineering, there are issues of intrinsic aptitude, and particularly of the variability of aptitude, and that those considerations are reinforced by what are in fact lesser factors involving socialization and continuing discrimination." Despite his protégée, Sheryl Sandberg, defending Summers' remarks, and despite Summers offering his own apology repeatedly, the Harvard Faculty of Arts and Sciences passed a motion of "lack of confidence" in his leadership; tenure offers to women had plummeted after he took office in 2001. The year before he became president, Harvard extended 13 of its 36 tenure offers to women; by 2004 those numbers had dropped to 4 of 32, with several departments lacking even a single tenured female professor. This controversy is speculated to have contributed significantly to Summers's resignation from his position at Harvard the following year.
See also
African American women in computer science
History of science
International Day of Women and Girls in Science
List of inventions and discoveries by women
Index of women scientists articles
List of female scientists before the 20th century
List of female scientists in the 20th century
List of female scientists in the 21st century
List of female mathematicians
List of female Nobel laureates
Logology (science of science): sexual bias
Matilda effect
Organizations for women in science
Prizes, medals, and awards for women in science
Margaret W. Rossiter
Timeline of women in science
Timeline of women in science in the United States
Women in archaeology
Women in computing
Women in engineering
Women in geology
Women in chemistry
Women in medicine
Women in physics
Women in STEM fields
Women in the workforce
Women in climate change
Working Group on Women in Physics
References
Sources
Further reading
Borum, Viveka, and Erica Walker. "What makes the difference? Black women's undergraduate and graduate experiences in mathematics." Journal of Negro Education 81.4 (2012): 366–378
Chapman, Angela, et al. "'Nothing is impossible': characteristics of Hispanic females participating in an informal STEM setting." Cultural Studies of Science Education (2019): 1–15. online
Charleston, LaVar J., et al. "Navigating underrepresented STEM spaces: Experiences of Black women in US computing science higher education programs who actualize success." Journal of Diversity in Higher Education 7#3 (2014): 166–176. online
Contreras Aguirre, et al. "Latina college students' experiences in STEM at Hispanic-Serving Institutions: framed within Latino critical race theory." International Journal of Qualitative Studies in Education (2020): 1–14.
Croucher, John S. Women in Science: 100 Inspirational Lives. Gloucestershire UK 2019.
Dominus, Susan, "Sidelined: American women have been advancing science and technology for centuries. But their achievements weren't recognized until a tough-minded scholar [Margaret W. Rossiter] hit the road and rattled the academic world", Smithsonian, vol. 50, no. 6 (October 2019), pp. 42–53, 80.
Hanson, S. L. "African American women in science: Experiences from high school through the postsecondary years and beyond." NWSA Journal 16 (2004): 96–115.
Henley, Megan M. "Women's success in academic science: Challenges to breaking through the ivory ceiling." Sociology Compass 9.8 (2015): 668–680. abstract
Hill, Catherine, Christianne Corbett, and Andresse St Rose. Why so few? Women in science, technology, engineering, and mathematics (American Association of University Women, 2010).
Jack, Jordynn. Science on the home front: American women scientists in World War II (U of Illinois Press, 2009).
McGee, Ebony O., and Lydia Bentley. "The troubled success of Black women in STEM." Cognition and Instruction 35.4 (2017): 265–289 online.
Morton, Terrell R., Destiny S. Gee, and Ashley N. Woodson. "Being vs. Becoming: Transcending STEM Identity Development through Afropessimism, Moving toward a Black X Consciousness in STEM." Journal of Negro Education 88.3 (2020): 327–342.
Natarajan, Priyamvada, "Calculating Women" (review of Margot Lee Shetterly, Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race, William Morrow; Dava Sobel, The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars, Viking; and Nathalia Holt, Rise of the Rocket Girls: The Women Who Propelled Us, from Missiles to the Moon to Mars, Little, Brown), The New York Review of Books, vol. LXIV, no. 9 (25 May 2017), pp. 38–39.
Pomeroy, Claire, "Academia's Gender Problem", Scientific American, vol. 314, no. 1 (January 2016), p. 11.
Wagner, Darren N., and Joanna Wharton. "The Sexes and the Sciences." Journal for Eighteenth‐Century Studies 42.4 (2019): 399–413 online.
Watts, Ruth. Women in science: a social and cultural history (Routledge, 2007), comprehensive history of gender and women in science.
External links
Science Speaks: A Focus on NIOSH Women in Science Short, personal stories of females working in fields of science. A video series developed by the National Institute for Occupational Safety and Health (NIOSH)
Gender tutorials on women in science from Hunter College and the Graduate Center of the City University of New York (CUNY)
Statistics on women at science conferences from the American Astronomical Society, Committee on the Status of Women in Astronomy
The Library of Congress Selected Internet Resources Women in Science and Medicine
Women in Science at the Encyclopædia Britannica | Women in science | [
"Technology"
] | 25,432 | [
"Women and science",
"Women in science and technology",
"Women scientists"
] |
3,135,190 | https://en.wikipedia.org/wiki/Low-definition%20television | Low-definition television (LDTV) refers to TV systems that have a lower screen resolution than standard-definition television systems. The term is usually used in reference to digital television, in particular when broadcasting at the same (or similar) resolution as low-definition analog television systems. Mobile DTV systems usually transmit in low definition, as do all slow-scan television systems.
Sources
The Video CD format uses a progressive scan LDTV signal (352×240 or 352×288), which is half the vertical and horizontal resolution of full-bandwidth SDTV. However, most players will internally upscale VCD material to 480/576 lines for playback, as this is both more widely compatible and gives a better overall appearance. No motion information is lost due to this process, as VCD video is not high-motion and only plays back at 25 or 30 frames per second, and the resultant display is comparable to consumer-grade VHS video playback.
For the first few years of its existence, YouTube offered only one low-definition resolution: 256×144 (144p) at 30–50 fps or less. It later extended first to widescreen 426×240, then to gradually higher resolutions. Once the video service had become well established and had been acquired by Google, it had access to Google's radically improved storage space and transmission bandwidth, and could rely on a good proportion of its users having high-speed internet connections. The original low-resolution service gave an overall effect reminiscent of early online video streaming attempts using RealVideo or similar services, where 160×120 at single-figure framerates was deemed acceptable to cater to those whose network connections could not sufficiently deliver even 240p content.
Video games
Older video game consoles and home computers often generated a technically compliant analog 525-line NTSC or 625-line PAL signal, but only sent one field type rather than alternating between the two. This created a 262 or 312 line progressive scan signal (with half the vertical resolution), which in theory can be decoded on any receiver that can decode normal, interlaced signals.
Since the shadow mask and beam width of standard CRT televisions were designed for interlaced signals, these systems produced a distinctive fixed pattern of alternating bright and dark scan lines; many emulators for older systems offer video filters to recreate this effect. With the introduction of digital video formats these low-definition modes are usually referred to as 240p and 288p (with the standard definition modes being 480i and 576i).
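As a concrete illustration of how the 240p/288p figures follow from the interlaced rasters described above, the short sketch below halves the total and visible line counts of each system; the visible-line values (480/576) are the commonly cited nominal figures, assumed here rather than taken from this article.

```python
# Sketch: deriving the "240p"/"288p" picture from NTSC/PAL rasters when a
# console repeats one field type instead of interlacing two fields.
# Nominal visible-line counts are assumed textbook values.

SYSTEMS = {
    "NTSC": {"total_lines": 525, "visible_lines": 480, "fields_per_s": 60},
    "PAL":  {"total_lines": 625, "visible_lines": 576, "fields_per_s": 50},
}

for name, s in SYSTEMS.items():
    lines_sent = s["total_lines"] // 2        # 262 (NTSC) or 312 (PAL)
    picture_lines = s["visible_lines"] // 2   # 240p or 288p
    print(f"{name}: {lines_sent} lines per pass, "
          f"{picture_lines}p picture at {s['fields_per_s']} Hz")
```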
With the introduction of 16-bit computers in the mid-1980s, such as the Atari ST and Amiga, followed by 16-bit consoles in the late 1980s and early 1990s, like the Sega Genesis and Super NES, outputting the standard interlaced resolutions was supported for the first time, but rarely used due to heavy demands on processing power and memory. Standard resolutions also had a tendency to produce noticeable flicker at horizontal edges unless employed quite carefully, for example with anti-aliasing, which was either unavailable or computationally exorbitant. Thus, progressive output with half the vertical resolution remained the primary format for most games on fourth and fifth generation consoles (including the Sega Saturn, the Sony PlayStation and the Nintendo 64).
With the advent of sixth generation consoles and the launch of the Dreamcast, standard interlaced resolution became more common, and progressive lower resolution usage declined.
More recent game systems tend to use only properly interlaced NTSC or PAL in addition to higher resolution modes, except when running games designed for older, compatible systems in their native modes. The PlayStation 2 generates 240p/288p if a PlayStation game calls for this mode, as do many Virtual Console emulated games on the Nintendo Wii. Nintendo's official software development kit documentation for the Wii refers to 240p as 'non-interlaced mode' or 'double-strike'.
Shortly after the launch of the Wii Virtual Console service, many users with component video cables experienced problems displaying some Virtual Console games, because certain TV models or manufacturers did not support 240p over a component video connection. Nintendo's solution was to implement a video mode which forces the emulator to output 480i instead of 240p; however, many games released before this fix were never updated.
Teleconferencing LDTV
Sources of LDTV using standard broadcasting techniques include mobile TV services powered by DVB-H, 1seg, DMB, or ATSC-M/H. However, this kind of LDTV transmission technology is based on existing LDTV teleconferencing standards that have been in place since the late 1990s.
Resolutions
See also
List of common resolutions
8640p, 4320p, 2160p, 1080p, 1080i, 720p, 576p, 576i, 480p, 480i
Digital television
Digital radio
DVB, ATSC, ISDB
SDTV, EDTV, HDTV
Narrow-bandwidth television
Moving Pictures Experts Group
Handheld television
References
Digital television
Broadcast engineering
Broadband | Low-definition television | [
"Engineering"
] | 1,016 | [
"Broadcast engineering",
"Electronic engineering"
] |
3,135,197 | https://en.wikipedia.org/wiki/Poromechanics | Poromechanics is a branch of physics and specifically continuum mechanics that studies the behavior of fluid-saturated porous media. A porous medium or a porous material is a solid, constituting the matrix, which is permeated by an interconnected network of pores or voids filled with a fluid. In general, the fluid may be composed of liquid or gas phases or both. In the simplest case, both the solid matrix and the pore space constitute two separate, continuously connected domains. An archtypal example of such a porous material is the kitchen sponge, which is formed of two interpenetrating continua. Some porous media has a more complex microstructure in which, for example, the porespace is disconnected. Porespace that is unable to exchange fluid with the exterior is termed occluded porespace. Alternatively, in the case of granular porous media, the solid phase may constitute disconnected domains, termed the "grains", which are load-bearing under compression, though can flow when sheared.
Natural substances including rocks, soils, and biological tissues such as the heart and cancellous bone, as well as man-made materials such as foams, ceramics, and concrete, can be considered porous media. Porous media whose solid matrix is elastic and whose fluid is viscous are called poroviscoelastic. A poroviscoelastic medium is characterised by its porosity, permeability, and the properties of its constituents (solid matrix and fluid). The distribution of pores, fluid pressure, and stress in the solid matrix gives rise to the viscoelastic behavior of the bulk. A porous medium whose pore space is completely filled with a single fluid phase, typically a liquid, is considered saturated; one whose pore space is only partially filled with liquid is known as unsaturated.
The concept of a porous medium originally emerged in soil mechanics, and in particular in the works of Karl von Terzaghi, the father of soil mechanics. However, a more general concept of a poroelastic medium, independent of its nature or application, is usually attributed to Maurice Anthony Biot (1905–1985), a Belgian-American engineer. In a series of papers published between 1935 and 1962, Biot developed the theory of dynamic poroelasticity (now known as Biot theory), which gives a complete and general description of the mechanical behaviour of a poroelastic medium. Biot's equations of the linear theory of poroelasticity are derived from
the equations of linear elasticity for a solid matrix,
the Navier–Stokes equations for a viscous fluid, and Darcy's law for a flow of fluid through a porous matrix.
One of the key findings of the theory of poroelasticity is that in poroelastic media, there exist three types of elastic waves: a shear or transverse wave, and two types of longitudinal or compressional waves, which Biot called type I and type II waves. The transverse and type I (or fast) longitudinal waves are similar to the transverse and longitudinal waves in an elastic solid, respectively. The slow compressional wave (Biot’s slow wave) is unique to poroelastic materials. The prediction of Biot’s slow wave generated controversy until Thomas Plona experimentally observed it in 1980. Other important early contributors to the theory of poroelasticity were Yakov Frenkel and Fritz Gassmann.
Energy conversion from fast compressional and shear waves into the highly attenuating slow compressional wave is a significant cause of elastic wave attenuation in porous media.
Recent applications of poroelasticity to biology, such as modeling blood flows through the beating myocardium, have also required an extension of the equations to nonlinear (large deformation) elasticity and the inclusion of inertia forces.
Theory of poromechanics
Descriptions of porosity
Poromechanics relates the loading of solid and fluid phases within a porous body to the deformation of the solid skeleton and pore space. A representative elementary volume (REV) of a porous medium and the superposition of the domains of the skeleton and connected pores is shown in Fig. 1. In tracking the material deformation, one must be careful to properly apportion sub-volumes that correspond to the solid matrix and pore space. To do this, it is often convenient to introduce a porosity, which measures the fraction of the REV that constitutes pore space. To keep track of the porosity in a deforming material volume, mechanicians consider two descriptions, namely:
The Eulerian porosity, $n$, which measures the porosity with respect to the current or deformed configuration. Specifically, if $\mathrm{d}\Omega_t$ represents an infinitesimal volume in the deformed material body, then the pore volume is calculated from $\mathrm{d}\Omega_p = n\,\mathrm{d}\Omega_t$.
The Lagrangian porosity, $\phi$, which measures the porosity with respect to the initial or undeformed configuration. In a Lagrangian description of porosity, the pore volume is measured by $\mathrm{d}\Omega_p = \phi\,\mathrm{d}\Omega_0$, where $\mathrm{d}\Omega_0$ represents an infinitesimal volume of the material in its undeformed state.
The Eulerian and Lagrangian descriptions of porosity are readily related by noting that
$$\phi = J\,n,$$
where $J = \det\mathbf{F}$ is the Jacobian of the deformation, with $\mathbf{F}$ being the deformation gradient. In a small-strain, linearized theory of deformation, the volume ratio is approximated by $J \approx 1 + \epsilon_v$, where $\epsilon_v$ is the infinitesimal volume strain. Another useful descriptor of the REV's pore space is the void ratio, which compares the current volume of the pores to the current volume of the solid matrix. As such, the void ratio takes definition in an Eulerian frame of reference and is calculated as
$$e = \frac{n}{1-n},$$
where $1-n$ measures the fraction of the volume occupied by the solid skeleton.
When a material element of a porous medium undergoes a deformation, the porosity changes due to i) the material's observable macroscopic dilation and ii) the volume dilation of the material's solid skeleton. The latter cannot be assessed from experiments on the material's bulk structure. The volume of the solid skeleton in an infinitesimal material element, which is denoted by $\mathrm{d}\Omega_s$, is related to the deformed and undeformed total material volumes by
$$\mathrm{d}\Omega_s = (1-n)\,\mathrm{d}\Omega_t = (J-\phi)\,\mathrm{d}\Omega_0,$$
where the definition of the Lagrangian porosity further requires $\phi\,\mathrm{d}\Omega_0 = n\,\mathrm{d}\Omega_t$. Thus, under the assumption of infinitesimal strain theory, the total volumetric strain of a material element can be separated into strain contributions of the solid matrix and pore space as follows:
$$\epsilon_v = (1-\phi_0)\,\epsilon_s + (\phi - \phi_0),$$
where $\epsilon_s$ is recognized as the linearized volume strain acting in the solid and $\phi_0$ is the initial porosity.
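A short numerical sketch of this bookkeeping follows, using the notation above ($n$, $\phi$, $J$, $\epsilon_v$); the input numbers and function names are invented for illustration.

```python
# Sketch of the porosity bookkeeping above: Eulerian porosity n,
# Lagrangian porosity phi = J * n, void ratio e = n / (1 - n), and the
# small-strain split eps_v = (1 - phi0) * eps_s + (phi - phi0).
# All numerical values are illustrative assumptions.

def lagrangian_porosity(n, J):
    """Pore volume per unit *undeformed* volume: phi = J * n."""
    return J * n

def void_ratio(n):
    """Pore volume per unit *solid* volume: e = n / (1 - n)."""
    return n / (1.0 - n)

def solid_volume_strain(eps_v, phi, phi0):
    """Invert eps_v = (1 - phi0) * eps_s + (phi - phi0) for eps_s."""
    return (eps_v - (phi - phi0)) / (1.0 - phi0)

J = 1.02        # 2% total volumetric expansion (assumed)
n = 0.30        # current Eulerian porosity (assumed)
phi0 = 0.29     # initial Lagrangian porosity (assumed)

phi = lagrangian_porosity(n, J)                  # 0.306
eps_s = solid_volume_strain(J - 1.0, phi, phi0)  # solid skeleton strain
print(phi, void_ratio(n), eps_s)
```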
Small-strain linear poroelasticity
When linearizing the strain in a poroelastic solid body, several conditions should hold true. Firstly, as is required for a general continuum solid, displacement gradients should be small, $\lVert\nabla\mathbf{u}\rVert \ll 1$. Secondly, to further ensure small changes in the solid and pore volumes, the displacement field of the solid, $\mathbf{u}$, should be small in comparison to the characteristic length scale defining the grain size (in the case of a granular material) or the solid matrix (in the case of a continuous solid phase), $\ell$. This second requirement is stated as $\lVert\mathbf{u}\rVert/\ell \ll 1$, and implies small changes in the Lagrangian porosity, $\lvert\phi - \phi_0\rvert \ll 1$.
When measuring the linear elastic properties of porous solids, laboratory experiments are typically performed under one of two limit cases:
Poroelastic solids are loaded under drained conditions, in which fluid exchange between domains of the porous solid and the exterior occurs rapidly, and the fluid pressure in the pore space is held constant, $p = p_0$. Such a system is considered to be an open system.
Poroelastic solids are loaded under undrained conditions, in which fluid exchange between the porous solid and the exterior is precluded; $\mathrm{d}m_f = 0$, where $m_f$ is the local mass of the fluid. A saturated poroelastic solid loaded under undrained conditions typically experiences significant changes in fluid pressure. Such a system is considered to be a closed system (see the sketch after this list).
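The following minimal sketch contrasts the two loading conditions numerically, using Skempton's pore-pressure coefficient B to estimate the undrained pressure response; B and the numerical values are standard soil-mechanics quantities introduced here as assumptions, not taken from the text above.

```python
# Minimal sketch: drained vs. undrained response of a poroelastic element.
# Skempton's coefficient B (an assumed closure) relates a change in mean
# total stress to the undrained pore-pressure rise.

def pore_pressure_change(d_sigma_mean, B=0.9, drained=False):
    """Return the pore-pressure change (Pa) for a step in mean stress (Pa)."""
    if drained:
        return 0.0               # open system: p is held at p0
    return B * d_sigma_mean      # closed system: dp = B * d(sigma_mean)

print(pore_pressure_change(100e3, drained=True))   # 0.0 Pa
print(pore_pressure_change(100e3, drained=False))  # 90000.0 Pa
```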
Historical background
Saturated porous media
Reinhard Woltman (1757–1837), a German hydraulic and geotechnical engineer, first introduced the concepts of volume fractions and angles of internal friction within porous media in his study on the connection between soil moisture and its apparent cohesion. His work addressed the calculation of earth pressure against retaining walls. Achille Delesse (1817–1881), a French geologist and mineralogist, reasoned that the volume fraction of voids – otherwise termed the volumetric porosity – equals the surface fraction of voids – otherwise termed the areal porosity – when the size, shape, and orientation of the pores are randomly distributed. Henry Darcy (1803–1858), a French hydraulic engineer, observed the proportionality between the rate of discharge and the loss of water pressure in tests with natural sand, now known as Darcy's law. The first important concept related to saturated, deformable porous solids might be considered the principle of effective stress introduced by Karl von Terzaghi (1883–1963), an Austrian engineer. Terzaghi postulated that the mean effective stress experienced by the solid skeleton of a porous medium with incompressible constituents, $\sigma'$, is the total stress acting on the volume element, $\sigma$, less the pressure of the fluid acting in the pore space, $p$; that is, $\sigma' = \sigma - p$. Terzaghi combined his effective stress concept with Darcy's law for fluid flow and derived a one-dimensional consolidation theory explaining the time-dependent deformation of soils as the pore fluid drains, which may have been the first mathematical treatise on coupled hydromechanical problems in porous media.
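A compact statement of these two ingredients, in their standard modern form, is given below; the symbols $c_v$, $k$, $m_v$, and $\gamma_w$ are conventional geotechnical notation introduced here, not taken from the text above.

```latex
% Terzaghi's effective stress (compression positive):
\sigma' = \sigma - p
% One-dimensional consolidation of the excess pore pressure p(z,t),
% obtained by combining effective stress with Darcy's law:
\frac{\partial p}{\partial t} = c_v \,\frac{\partial^2 p}{\partial z^2},
\qquad c_v = \frac{k}{m_v \gamma_w}
% c_v: coefficient of consolidation, k: hydraulic conductivity,
% m_v: coefficient of volume compressibility, gamma_w: unit weight of water
```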
See also
Petrophysics
Rock physics
Poroelasticity
Fluid flow through porous media
Permeability (materials science)
Darcy's Law
References
Further reading
External links
Poronet - PoroMechanics Internet Resources Network
APMR - Acoustical Porous Material Recipes
Continuum mechanics
Acoustics
Applied and interdisciplinary physics
Porous media | Poromechanics | [
"Physics",
"Materials_science",
"Engineering"
] | 2,017 | [
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Porous media",
"Classical mechanics",
"Acoustics",
"Materials science"
] |
3,135,247 | https://en.wikipedia.org/wiki/Silicate%20mineral%20paint | Silicate mineral paints or mineral colors are paint coats with mineral binding agents. Two relevant mineral binders play a role in the field of colors: Lime and silicate.
Under the influence of carbon dioxide, lime-based binders carbonate, while water-glass (silicate) binders solidify; together they form calcium silicate hydrates.
Lime paints (aside from the fresco technique) are only moderately weather resistant, so they are applied primarily in monument preservation. Mineral colors are commonly understood to be silicate paints. These paints use potassium water glass as a binder and are also called water glass paints or Keimfarben (after the inventor).
Mineral silicate paint coats are considered durable and weather resistant; lifetimes exceeding a hundred years are possible. The city hall in Schwyz and the "Gasthaus Weißer Adler" in Stein am Rhein (both in Switzerland) received their coats of mineral paint in 1891, as did facades in Oslo (1895) and Traunstein, Germany (1891).
History
Alchemists, in their pursuit of the philosopher's stone (to manufacture gold), found glassy shimmering pearls in fireplaces: sand mixed with potash had coalesced under heat into pearls of water glass. Small round panes of water glass were first industrially manufactured for use as windows in the 19th century by Van Baerle in Gernsheim and Johann Gottfried Dingler in Augsburg. Johann Nepomuk von Fuchs made the first attempts to create paints with water glass.
Around 1850, the painters Kaulbach and Schlotthauer applied facade paints to the Pinakothek in Munich. Due to the use of earth pigments, which cannot be silicated, the paintings washed out of the water glass.
In 1878, the craftsman and researcher Adolf Wilhelm Keim patented mineral paints. Since then, they have been manufactured by the successor company Keimfarben in Diedorf near Augsburg.
Keim depended on V. van Baerle as the source of water glass. Keim also attempted to manufacture silicate paints himself. His experiments took years to mature, but he finally achieved good results. The Silinwerk van Baerle in Gernsheim near the Rhine river and Keimfarben in Diedorf near Augsburg are well-known manufacturers.
The impetus for Keim's intense research originated from King Ludwig I of Bavaria. The art-minded monarch was so impressed by the colorful lime frescoes in northern Italy that he desired to see such artwork in his own kingdom of Bavaria. But the weather north of the Alps, known to be significantly more harsh, destroyed the artful paintings within a short time. He therefore commissioned Bavarian scientists to develop a paint with the appearance of lime but greater durability.
Properties
Mineral paint contains inorganic colorants, and potassium-based, alkali silicate (water glass), also known as potassium silicate, liquid potassium silicate, or LIQVOR SILICIVM. A coat with mineral colors does not form a layer but instead permanently bonds to the substrate material (silicification).
The result is a highly durable connection between paint coat and substrate. The water glass binding agent is highly resistant to UV light. While dispersions based on acrylate or silicone resin tend over the years to grow brittle and chalky and to crack under UV, the inorganic binder water glass remains stable. The chemical fusion with the substrate and the UV stability of the binder are the fundamental reasons for the extraordinarily high lifetime of silicate paints.
Silicate paints require a siliceous substrate for setting. For this reason they are highly suitable for mineral substrates such as mineral plasters and concrete, but of only limited use on wood and metal. The permeability to water vapor of silicate paints is equivalent to that of the substrate, so silicate paints do not inhibit the diffusion of water vapor. Moisture contained in parts of a structure or in the plaster may diffuse outward without resistance; this keeps walls dry and prevents structural damage. It also helps avoid condensation of water on the surface of building materials, reducing the risk of infestation by algae and fungi. The high alkalinity of the water glass binding agent adds to the inhibitory effect against infestation by microorganisms and eliminates the need for additional preservatives.
As mineral paint coats are not prone to static charging or thermo-plasticity (stickiness developing under heat), which is common for surfaces coated with dispersion or silicone resin, they soil less: fewer dirt particles cling to the surface, and those that do are easier to wash off. Silicate paints are incombustible and free of organic additives or solvents (DIN 18363, Painting and coating work, Section 2.4.1).
Silicate paints are highly color-tone stable. As they are colored solely with mineral pigments that do not fade with exposure to UV radiation, silicate paint coats remain constant in color for decades.
Silicate paints are based upon mineral raw materials. They are environmentally compatible in manufacture and effect. Their high durability helps to preserve resources, and their contaminant-free composition preserves health and environment. For this reason, silicate paints have gained popularity, especially in sustainable construction.
Types
Commonly, three types of silicate paint are distinguished. Pure silicate paint consists of two components: a color powder in dry or water-paste form, and the liquid binder water glass (DIN 18363, Painting and coating work, Section 2.4.1). The processing of pure silicate paints requires great experience and know-how; they are especially common in historic preservation.
Around the middle of the 20th century the first single-component silicate paint was developed. The addition of up to 5 mass percent of organic additives (e.g. acrylate dispersion, hydrophobisers, thickeners or similar) makes ready-to-use paint in containers possible. These are also called "dispersion silicate paints" (DIN 18363, Painting and coating work, Section 2.4.1). The range of application of such silicate paints is significantly wider than for pure silicate paints, as the dispersion allows coats on less solid substrates and/or substrates of organic composition. Moreover, handling and processing are simpler than with pure silicate paint.
Since 2002 a third category of silicate paints has been known: sol-silicate paint. The binder is a combination of silica sol and water glass. The organic fraction is limited to 5 mass percent, as in dispersion silicate paint, allowing for chemical setting and retention of the silicate-specific advantages. Sol-silicate paint also allows use on non-mineral substrates, to which it bonds both chemically and physically. Sol-silicate paint has revolutionized the field of application of silicate paints: these paints can be applied easily and safely to nearly all common substrates.
Possible substrates
concrete
earthen plaster
lime plaster
masonry
stone
Applications
environmentally friendly, non-toxic applications
high durability, especially on masonry products, and lightfast
mineral paints with high vapor permeability
acid rain resistance
antifungal properties
reduces carbonation of cement-based materials
See also
Lime paint
References
Coatings
Cement
Concrete
Inorganic chemistry | Silicate mineral paint | [
"Chemistry",
"Engineering"
] | 1,516 | [
"Structural engineering",
"Concrete",
"Coatings",
"nan"
] |
3,135,403 | https://en.wikipedia.org/wiki/Richard%20Scott%20Perkin | Richard Scott Perkin (1906–1969) was an American entrepreneur and one of the cofounders of Perkin-Elmer.
Life
At an early age he developed an interest in astronomy, and began making telescopes and grinding lenses and mirrors. He spent only a year in college studying chemical engineering before he began working at a brokerage firm on Wall Street.
During the 1930s, he met Charles Elmer when the latter was presenting a lecture. The two had a mutual interest in astronomy and decided to go into business together. In 1937, they founded Perkin-Elmer as an optical design and consulting company. Richard served as president of the company until 1960, then became chairman of the board.
The crater Perkin on the Moon was named after him, while Elmer was named after his business partner.
Perkin was married to Gladys Frelinghuysen Talmage, who became CEO after he died. A decade later, Gladys commissioned a commemorative history; one hundred copies were printed and distributed to friends.
Further reading
Fahy, Thomas P., Richard Scott Perkin and the Perkin-Elmer Corporation, 1987, Perkin-Elmer Print Shop.
See also
List of astronomical instrument makers
References
External links
1906 births
1969 deaths
Telescope manufacturers
20th-century American businesspeople | Richard Scott Perkin | [
"Astronomy"
] | 258 | [
"Telescope manufacturers",
"People associated with astronomy"
] |
3,135,512 | https://en.wikipedia.org/wiki/Phosphorolysis | Phosphorolysis is the cleavage of a compound in which inorganic phosphate is the attacking group. It is analogous to hydrolysis.
An example of this is glycogen breakdown by glycogen phosphorylase, which catalyzes attack by inorganic phosphate on the terminal glycosyl residue at the nonreducing end of a glycogen molecule. If the glycogen chain has n glucose units, the products of a single phosphorolytic event are one molecule of glucose 1-phosphate and a glycogen chain of n-1 remaining glucose units.
In addition, phosphorolysis is sometimes preferable to hydrolysis (as in the breakdown of glycogen or starch in the example above) because glucose 1-phosphate yields more ATP than free glucose when subsequently catabolized to pyruvate: glucose 1-phosphate is isomerized to glucose 6-phosphate by phosphoglucomutase without consuming ATP, whereas free glucose must first be phosphorylated by hexokinase at the cost of one ATP.
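A minimal sketch of this ATP bookkeeping follows; the per-unit net yields of 2 and 3 ATP are the standard textbook values for glycolysis to pyruvate, and the function and variable names are invented for illustration.

```python
# Net ATP from catabolizing glycogen-derived glucose units to pyruvate.
# Phosphorolysis releases glucose 1-phosphate, which enters glycolysis
# without the hexokinase step, saving one ATP per glucose unit.

def net_atp(glucose_units, via_phosphorolysis):
    per_unit = 3 if via_phosphorolysis else 2  # net ATP per glucose unit
    return glucose_units * per_unit

n = 10  # glucose units cleaved from a glycogen chain (assumed)
print(net_atp(n, via_phosphorolysis=True))   # 30 ATP
print(net_atp(n, via_phosphorolysis=False))  # 20 ATP (hydrolysis to free glucose)
```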
Another example of phosphorolysis is seen in the conversion of glyceraldehyde 3-phosphate to 1,3-bisphosphoglycerate in glycolysis, where inorganic phosphate attacks the enzyme-bound thioester intermediate formed by glyceraldehyde-3-phosphate dehydrogenase.
See also
Phosphorylase
References
External links
Chemical processes | Phosphorolysis | [
"Chemistry"
] | 250 | [
"Chemical process engineering",
"Chemical process stubs",
"Chemical processes",
"nan"
] |
3,135,539 | https://en.wikipedia.org/wiki/Threading%20%28protein%20sequence%29 | In molecular biology, protein threading, also known as fold recognition, is a method of protein modeling which is used to model those proteins which have the same fold as proteins of known structures, but do not have homologous proteins with known structure.
It differs from the homology modeling method of structure prediction in that threading is used for proteins which do not have their homologous protein structures deposited in the Protein Data Bank (PDB), whereas homology modeling is used for those which do. Threading works by using statistical knowledge of the relationship between the structures deposited in the PDB and the sequence of the protein which one wishes to model.
The prediction is made by "threading" (i.e. placing, aligning) each amino acid in the target sequence to a position in the template structure, and evaluating how well the target fits the template. After the best-fit template is selected, the structural model of the sequence is built based on the alignment with the chosen template. Protein threading is based on two basic observations: that the number of different folds in nature is fairly small (approximately 1300); and that 90% of the new structures submitted to the PDB in the past three years have similar structural folds to ones already in the PDB.
Classification of protein structure
The Structural Classification of Proteins database (SCOP) provides a detailed and comprehensive description of the structural and evolutionary relationships of known structure. Proteins are classified to reflect both structural and evolutionary relatedness. Many levels exist in the hierarchy, but the principal levels are family, superfamily, and fold:
Family (clear evolutionary relationship): Proteins clustered together into families are clearly evolutionarily related. Generally, this means that pairwise residue identities between the proteins are 30% and greater. However, in some cases similar functions and structures provide definitive evidence of common descent in the absence of high sequence identity; for example, many globins form a family though some members have sequence identities of only 15%.
Superfamily (probable common evolutionary origin): Proteins that have low sequence identities, but whose structural and functional features suggest that a common evolutionary origin is probable, are placed together in superfamilies. For example, actin, the ATPase domain of the heat shock protein, and hexokinase together form a superfamily.
Fold (major structural similarity): Proteins are defined as having a common fold if they have the same major secondary structures in the same arrangement and with the same topological connections. Different proteins with the same fold often have peripheral elements of secondary structure and turn regions that differ in size and conformation. In some cases, these differing peripheral regions may comprise half the structure. Proteins placed together in the same fold category may not have a common evolutionary origin: the structural similarities could arise just from the physics and chemistry of proteins favoring certain packing arrangements and chain topologies.
Method
A general paradigm of protein threading consists of the following four steps:
The construction of a structure template database: Select protein structures from the protein structure databases as structural templates. This generally involves selecting protein structures from databases such as Protein Data Bank (PDB), Families of Structurally Similar Proteins database (FSSP), Structural Classification of Proteins database (SCOP), or CATH database, after removing protein structures with high sequence similarities.
The design of the scoring function: Design a good scoring function to measure the fitness between target sequences and templates based on the knowledge of the known relationships between the structures and the sequences. A good scoring function should contain mutation potential, environment fitness potential, pairwise potential, secondary structure compatibilities, and gap penalties. The quality of the energy function is closely related to the prediction accuracy, especially the alignment accuracy.
Threading alignment: Align the target sequence with each of the structure templates by optimizing the designed scoring function. This step is one of the major computational challenges for threading-based structure prediction programs whose scoring functions include pairwise contact potentials; when such potentials are omitted, a dynamic programming algorithm suffices (a minimal sketch of this step is given after this list).
Threading prediction: Select the threading alignment that is statistically most probable as the threading prediction. Then construct a structure model for the target by placing the backbone atoms of the target sequence at their aligned backbone positions of the selected structural template.
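The alignment step referenced above can be illustrated with a minimal, self-contained sketch. It aligns a target sequence to a 1-D template profile of buried/exposed labels by global dynamic programming; the scoring values, templates, and sequences are invented for illustration and are far simpler than real threading scoring functions (no pairwise potentials, no mutation or secondary-structure terms).

```python
# Minimal sketch of profile-based threading: align a target sequence to a
# template whose positions are labelled buried ('B') or exposed ('E'),
# using global dynamic programming with a linear gap penalty.

# Hypothetical environment-fitness scores: hydrophobic residues prefer
# buried positions, polar residues prefer exposed ones.
HYDROPHOBIC = set("AVLIMFWC")
GAP = -2.0

def fitness(residue: str, env: str) -> float:
    if residue in HYDROPHOBIC:
        return 1.0 if env == "B" else -0.5
    return 1.0 if env == "E" else -0.5

def thread(target: str, template_env: str) -> float:
    """Return the best alignment score of target onto the template profile."""
    n, m = len(target), len(template_env)
    # dp[i][j] = best score aligning target[:i] to template[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * GAP
    for j in range(1, m + 1):
        dp[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + fitness(target[i - 1], template_env[j - 1]),
                dp[i - 1][j] + GAP,   # target residue left unaligned
                dp[i][j - 1] + GAP,   # template position skipped
            )
    return dp[n][m]

# Rank two toy templates by fit; the best-scoring one is the fold call.
templates = {"t1": "BBEEBBEE", "t2": "EEEEBBBB"}
target = "VLKSEEAI"
best = max(templates, key=lambda t: thread(target, templates[t]))
print(best, {t: thread(target, env) for t, env in templates.items()})
```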
Comparison with homology modeling
Homology modeling and protein threading are both template-based methods and there is no rigorous boundary between them in terms of prediction techniques. But the protein structures of their targets are different. Homology modeling is for those targets which have homologous proteins with known structure (usually of the same family), while protein threading is for those targets with only fold-level homology found. In other words, homology modeling is for "easier" targets and protein threading is for "harder" targets.
Homology modeling treats the template in an alignment as a sequence, and only sequence homology is used for prediction. Protein threading treats the template in an alignment as a structure, and both sequence and structure information extracted from the alignment are used for prediction. When there is no significant homology found, protein threading can make a prediction based on the structure information. That also explains why protein threading may be more effective than homology modeling in many cases.
In practice, when the sequence identity in a sequence–sequence alignment is low (i.e. <25%), homology modeling may not produce a significant prediction. In this case, if there is distant homology found for the target, protein threading can generate a good prediction.
More about threading
Fold recognition methods can be broadly divided into two types: those that derive a 1-D profile for each structure in the fold library and align the target sequence to these profiles; and those that consider the full 3-D structure of the protein template. A simple example of a profile representation would be to take each amino acid in the structure and simply label it according to whether it is buried in the core of the protein or exposed on the surface. More elaborate profiles might take into account the local secondary structure (e.g. whether the amino acid is part of an alpha helix) or even evolutionary information (how conserved the amino acid is). In the 3-D representation, the structure is modeled as a set of inter-atomic distances, i.e. the distances are calculated between some or all of the atom pairs in the structure. This is a much richer and far more flexible description of the structure, but is much harder to use in calculating an alignment. The profile-based fold recognition approach was first described by Bowie, Lüthy and David Eisenberg in 1991. The term threading was first coined by David Jones, William R. Taylor and Janet Thornton in 1992, and originally referred specifically to the use of a full 3-D structure atomic representation of the protein template in fold recognition. Today, the terms threading and fold recognition are frequently (though somewhat incorrectly) used interchangeably.
Fold recognition methods are widely used and effective because it is believed that there are a strictly limited number of different protein folds in nature, mostly as a result of evolution but also due to constraints imposed by the basic physics and chemistry of polypeptide chains. There is, therefore, a good chance (currently 70-80%) that a protein which has a similar fold to the target protein has already been studied by X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy and can be found in the PDB. Currently there are nearly 1300 different protein folds known, but new folds are still being discovered every year due in significant part to the ongoing structural genomics projects.
Many different algorithms have been proposed for finding the correct threading of a sequence onto a structure, though many make use of dynamic programming in some form. For full 3-D threading, the problem of identifying the best alignment is very difficult (it is an NP-hard problem for some models of threading). Researchers have made use of many combinatorial optimization methods such as conditional random fields, simulated annealing, branch and bound, and linear programming, searching to arrive at heuristic solutions. It is interesting to compare threading methods to methods which attempt to align two protein structures (protein structural alignment), and indeed many of the same algorithms have been applied to both problems.
Protein threading software
HHpred is a popular threading server which runs HHsearch, a widely used software for remote homology detection based on pairwise comparison of hidden Markov models.
RAPTOR is an integer programming based protein threading software. It has been replaced by a new protein threading program, RaptorX, which employs probabilistic graphical models and statistical inference for both single-template and multi-template protein threading. RaptorX significantly outperforms RAPTOR and is especially good at aligning proteins with sparse sequence profiles. The RaptorX server is free to the public.
Phyre is a popular threading server combining HHsearch with ab initio and multiple-template modelling.
MUSTER is a standard threading algorithm based on dynamic programming and sequence profile-profile alignment. It also combines multiple structural resources to assist the sequence profile alignment.
SPARKS X is a probabilistic-based sequence-to-structure matching between predicted one-dimensional structural properties of query and corresponding native properties of templates.
BioShell is a threading algorithm using optimized profile-to-profile dynamic programming algorithm combined with predicted secondary structure.
See also
Homology modeling
Protein structure prediction
Protein structure prediction software
References
Further reading
Protein methods
Bioinformatics
NP-complete problems | Threading (protein sequence) | [
"Chemistry",
"Mathematics",
"Engineering",
"Biology"
] | 1,949 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Computational problems",
"Bioinformatics",
"Mathematical problems",
"NP-complete problems"
] |
3,135,542 | https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Katz%20p-curvature%20conjecture | In mathematics, the Grothendieck–Katz p-curvature conjecture is a local-global principle for linear ordinary differential equations, related to differential Galois theory and in a loose sense analogous to the result in the Chebotarev density theorem considered as the polynomial case. It is a conjecture of Alexander Grothendieck from the late 1960s, and apparently not published by him in any form.
The general case remains unsolved, despite recent progress; it has been linked to geometric investigations involving algebraic foliations.
Formulation
In its simplest possible statement the conjecture can be stated in its essentials for a vector system written as $\frac{dv}{dx} = A(x)\,v$
for a vector v of size n, and an n-by-n matrix A of algebraic functions with algebraic number coefficients. The question is to give a criterion for when there is a full set of algebraic function solutions, meaning a fundamental matrix (i.e. n vector solutions put into a block matrix). For example, a classical question was for the hypergeometric equation: when does it have a pair of algebraic solutions, in terms of its parameters? The answer is known classically as Schwarz's list. In monodromy terms, the question is one of identifying the cases with a finite monodromy group.
By reformulation and passing to a larger system, the essential case is for rational functions in A and rational number coefficients. Then a necessary condition is that for almost all prime numbers p, the system defined by reduction modulo p should also have a full set of algebraic solutions, over the finite field with p elements.
Grothendieck's conjecture is that these necessary conditions, for almost all p, should be sufficient. The connection with p-curvature is that the mod p condition stated is the same as saying the p-curvature, formed by a recurrence operation on A, is zero; so another way to say it is that p-curvature of 0 for almost all p implies enough algebraic solutions of the original equation.
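Concretely, for the system above the recurrence is commonly written as follows (a standard formulation; conventions vary by author):

```latex
% Iterating the derivation of dv/dx = A(x)v gives d^k v/dx^k = A_k v, with
A_1 = A, \qquad A_{k+1} = \frac{dA_k}{dx} + A_k A .
% The p-curvature vanishes modulo p exactly when A_p \equiv 0 \pmod{p}.
```

In this notation, Grothendieck's conjecture asserts that if $A_p \equiv 0 \pmod{p}$ for almost all primes p, then the original system has a full set of algebraic solutions.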
Katz's formulation for the Galois group
Nicholas Katz has applied Tannakian category techniques to show that this conjecture is essentially the same as saying that the differential Galois group G (or strictly speaking the Lie algebra g of the algebraic group G, which in this case is the Zariski closure of the monodromy group) can be determined by mod p information, for a certain wide class of differential equations.
Progress
A wide class of cases has been proved by Benson Farb and Mark Kisin; these equations are on a locally symmetric variety X subject to some group-theoretic conditions. This work is based on the previous results of Katz for Picard–Fuchs equations (in the contemporary sense of the Gauss–Manin connection), as amplified in the Tannakian direction by André. It also applies a version of superrigidity particular to arithmetic groups. Other progress has been by arithmetic methods.
History
Nicholas Katz related some cases to deformation theory in 1972, in a paper where the conjecture was published. Since then, reformulations have been published. A q-analogue for difference equations has been proposed.
In responding to Kisin's talk on this work at the 2009 Colloque Grothendieck, Katz gave a brief account from personal knowledge of the genesis of the conjecture. Grothendieck put it forth in public discussion in Spring 1969, but wrote nothing on the topic. He was led to the idea by foundational intuitions in the area of crystalline cohomology, at that time being developed by his student Pierre Berthelot. In some way wishing to equate the notion of "nilpotence" in the theory of connections, with the divided power structure technique that became standard in crystalline theory, Grothendieck produced the conjecture as a by-product.
Notes
References
Nicholas M. Katz, Rigid Local Systems, Chapter 9.
Further reading
Jean-Benoît Bost, Algebraic leaves of algebraic foliations over number fields, Publications Mathématiques de L'IHÉS, Volume 93, Number 1, September 2001
Yves André, Sur la conjecture des p-courbures de Grothendieck–Katz et un problème de Dwork, in Geometric Aspects of Dwork Theory (2004), editors Alan Adolphson, Francesco Baldassarri, Pierre Berthelot, Nicholas Katz, François Loeser
Anand Pillay (2006), Differential algebra and generalizations of Grothendieck's conjecture on the arithmetic of linear differential equations
Algebraic geometry
Galois theory
Ordinary differential equations
Conjectures
Unsolved problems in number theory | Grothendieck–Katz p-curvature conjecture | [
"Mathematics"
] | 935 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Fields of abstract algebra",
"Conjectures",
"Algebraic geometry",
"Mathematical problems",
"Number theory"
] |
3,135,551 | https://en.wikipedia.org/wiki/Norton%20amplifier | A Norton amplifier or current differencing amplifier (CDA) is an electronic amplifier with two low impedance current inputs and one low impedance voltage output where the output voltage is proportional to the difference between the two input currents. It is a current controlled voltage source (CCVS) controlled by the difference of two input currents.
The Norton amplifier can be regarded as the dual of the operational transconductance amplifier (OTA) which takes a differential voltage input and provides a high impedance current output. The OTA has a gain measured in units of transconductance (siemens) whereas the Norton amplifier has a gain measured in units of transimpedance (ohms).
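The duality can be summarized in two idealized defining relations (ignoring input bias currents, offsets, and finite bandwidth):

```latex
% Norton amplifier (CDA): transimpedance gain Z_m in ohms
V_{\mathrm{out}} = Z_m\,(I_{+} - I_{-})
% OTA: transconductance gain g_m in siemens
I_{\mathrm{out}} = g_m\,(V_{+} - V_{-})
```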
A commercial example of this circuit is the LM3900 quad operational amplifier and its high-speed cousin the LM359 (400 MHz gain-bandwidth product).
The LM3900 was introduced in the mid-1970s, and was designed to be an easy-to-use single-supply op amp with input bias currents (~30 nA) comparable to other bipolar op-amps of the time period (LM741, LM324), while having rail-to-rail output and a much higher gain-bandwidth product (2.5 MHz). The LM3900 was popular with designers of analog synthesizers. The LM359 was introduced in the early 1990s as a video-capable amplifier providing high gain at video frequencies (10 MHz).
See also
Current differencing transconductance amplifier, current difference input and differential current output
Current-feedback operational amplifier, single-ended current input and voltage output.
References
Bibliography
Carr, Joseph, Linear Integrated Circuits, Newnes, 1996 .
Bali, S.P., Linear Integrated Circuits, Tata McGraw-Hill Education, 2008 .
Terrell, David, Op Amps: Design, Application, and Troubleshooting, Newnes, 1996 .
T. M. Frederiksen, W. F. Davis and D. W. Zobel, A new current-differencing single-supply operational amplifier, in IEEE Journal of Solid-State Circuits, vol. 6, no. 6, pp. 340-347, Dec. 1971, doi: 10.1109/JSSC.1971.1050202.
F. Anday, Realization of the biquadratic transfer functions using current differencing amplifiers, in Proceedings of the IEEE, vol. 65, no. 7, pp. 1067-1068, July 1977, doi: 10.1109/PROC.1977.10617.
J. H. Brodie, A low-pass biquad derived filter realization, in IEEE Journal of Solid-State Circuits, vol. 11, no. 4, pp. 552-555, Aug. 1976, doi: 10.1109/JSSC.1976.1050775.
C. Croskey and J. H. Brodie, Comments on "A low-pass biquad derived filter realization", in IEEE Journal of Solid-State Circuits, vol. 12, no. 3, pp. 329-330, June 1977, doi: 10.1109/JSSC.1977.1050907.
J. W. Haslett, Noise performance of the new Norton op amps, in IEEE Transactions on Electron Devices, vol. 21, no. 9, pp. 571-577, Sept. 1974, doi: 10.1109/T-ED.1974.17968.
J. W. Haslett, Noise performance limitations of single amplifiers RC active filters, in IEEE Transactions on Circuits and Systems, vol. 22, no. 9, pp. 743-747, September 1975, doi: 10.1109/TCS.1975.1084117.
Electronic amplifiers | Norton amplifier | [
"Technology"
] | 792 | [
"Electronic amplifiers",
"Amplifiers"
] |
3,135,565 | https://en.wikipedia.org/wiki/MathType | MathType is a software application created by Design Science that allows the creation of mathematical notation for inclusion in desktop and web applications.
After Design Science was acquired by Maths for More in 2017, their WIRIS web equation editor software was rebranded as MathType.
Features
MathType is a graphical editor for mathematical equations, allowing entry with the mouse or keyboard in a full graphical WYSIWYG environment. This contrasts with document markup languages such as LaTeX, where equations are entered as markup in a text editor and then processed into a typeset document as a separate step.
MathType also supports the math markup languages TeX, LaTeX, and MathML. LaTeX can be entered directly into MathType, and MathType equations in Microsoft Word can be converted to and from LaTeX. MathType supports copying to and pasting from any of these markup languages.
Additionally, on Windows 7 and later, equations may be drawn using a touch screen or pen (or mouse) via the math input panel.
By default, MathType equations are typeset in Times New Roman, with Symbol used for symbols and Greek. Equations may also be typeset in Euclid, a modern font like Computer Modern used in TeX, and this is included with the software. Roman characters (i.e. variable names and functions) may be typeset in any font that contains those characters, but Greek and symbols will still use Times or Euclid.
Support for other applications
On Windows, MathType supports object linking and embedding (OLE), which is the standard Windows mechanism for including information from one application in another. In particular office suites such as Microsoft Office and OpenOffice.org for Windows allow MathType equations to be embedded in this way. Equations embedded using OLE are displayed and printed as graphics in the host application and can be edited later, in which case the host document is updated automatically. In addition, a Microsoft Word add-in is included, which adds features including equation numbering and formatting displayed equations (as opposed to inline equations), which are features that MathType does not add to other applications.
On Macs, there is no analogous standard to OLE so support is not universal. Microsoft Office for Mac supports OLE, so MathType equations may be used there as usual. MathType has support for Apple iWork '09, so equations may be embedded and updated seamlessly in that product too. In applications where no other possibility is available, such as OpenOffice.org for Mac, Design Science recommends exporting equations as images and embedding those images into documents. As on Windows, there is a plugin for Microsoft Word for Mac (except for Word 2008), which adds equation formatting features such as equation numbering, which are features that MathType does not add to other applications. AppleWorks included a special version of MathType for built-in equation editing.
For Web applications such as Gmail and Google Docs, MathType supports copying to (and pasting from) HTML <img> tags (created by translating the equation's LaTeX into a Google Chart API image URL). There is a list of web application presets in the Copy Preferences dialog, so for example choosing "Google Docs" would copy as an HTML <img> tag, whereas choosing "Wikipedia" would copy as LaTeX wrapped in a <math> wiki tag.
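As an illustration of what such a copy operation produces, here is a short sketch that builds an HTML <img> tag from a LaTeX string; the Google Chart API TeX endpoint shown (cht=tx) has since been deprecated by Google, so treat the exact URL as illustrative rather than as MathType's actual output.

```python
from urllib.parse import quote

def latex_to_img_tag(latex: str) -> str:
    """Build an HTML <img> tag that renders an equation from its LaTeX,
    in the spirit of MathType's web-application copy presets. The
    endpoint below is illustrative; Google's TeX chart type (cht=tx)
    has been deprecated."""
    url = "https://chart.googleapis.com/chart?cht=tx&chl=" + quote(latex)
    return f'<img src="{url}" alt="{latex}">'

print(latex_to_img_tag(r"\frac{a}{b} = c"))
```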
Version history
Since the initial introduction in 1987, Design Science has released new versions of MathType, the last in 2019.
This article describes the now obsolete Design Science version of MathType. The current version of MathType by WIRIS, although it has the same name, differs in functionality from the Design Science version. On the WIRIS website, several users have criticized the WIRIS version in comparison with the earlier Design Science version.
See also
MathJax
MathML
MathMagic
References
Notes
Further reading
External links
Official website
Formula editors | MathType | [
"Mathematics"
] | 796 | [
"Formula editors",
"Mathematical software"
] |
3,135,619 | https://en.wikipedia.org/wiki/Akira%20Haraguchi |
Akira Haraguchi (born 1946, Miyagi Prefecture) is a retired Japanese engineer known for memorizing and reciting digits of pi. He is known to have recited more than 80,000 decimal places of pi in 12 hours.
Memorization of pi
Haraguchi holds the current unofficial world record for reciting 100,000 digits of pi in 16 hours, starting at 9:00 a.m. on October 3, 2006. He equaled his previous record of 83,500 digits by nightfall and then continued until stopping with digit number 100,000 at 1:28 a.m. (16:28 GMT) on October 4, 2006. The event was filmed in a public hall in Kisarazu, east of Tokyo, where he had five-minute breaks every two hours to eat onigiri to keep up his energy levels. Even his trips to the toilet were filmed to prove that the exercise was legitimate.
His previous world record of 83,431 places was performed on 2 July 2005, itself an improvement on the earlier record he set of 54,000.
On Pi Day, 2015, he claimed to be able to recite 111,701 digits.
Despite Haraguchi's efforts and detailed documentation, Guinness World Records has not yet accepted any of the records he has set.
Haraguchi views the memorization of pi as "the religion of the universe", and as an expression of his lifelong quest for eternal truth.
Haraguchi's mnemonic system
Haraguchi uses a system he developed that assigns kana symbols to numbers, allowing him to memorize pi as a collection of stories. A similar approach was used earlier by Lewis Carroll, who assigned letters of the alphabet to numbers and created stories to memorize them; Carroll's system preceded the one Haraguchi developed.
Example
0 => can be substituted by o, ra, ri, ru, re, ro, wo, on or oh;
1 => can be substituted by a, i, u, e, hi, bi, pi, an, ah, hy, hyan, bya or byan;
The same is done for each number from 2 through 9 (a toy decoder illustrating the idea follows).
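The decoder below is built only from the two substitution rows given above; the mapping for digits 2–9 belongs to Haraguchi's full system and is not reproduced here, so this is a sketch of the idea rather than his actual table.

```python
# Toy sketch of a kana-syllable-to-digit mnemonic decoder, using only the
# substitutions listed above for 0 and 1; the remaining digits would each
# get their own syllable set in the full system.
SYLLABLE_TO_DIGIT = {}
for syll in ["o", "ra", "ri", "ru", "re", "ro", "wo", "on", "oh"]:
    SYLLABLE_TO_DIGIT[syll] = "0"
for syll in ["a", "i", "u", "e", "hi", "bi", "pi", "an", "ah",
             "hy", "hyan", "bya", "byan"]:
    SYLLABLE_TO_DIGIT[syll] = "1"

def decode(syllables):
    """Turn a memorized sequence of syllables back into digits."""
    return "".join(SYLLABLE_TO_DIGIT[s] for s in syllables)

# "a ri u" encodes the digits 1, 0, 1 under this toy mapping.
print(decode(["a", "ri", "u"]))  # -> "101"
```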
References
External links
BBC News, Asia-Pacific
Pi World Ranking List
Memory world records
Pi-related people
Japanese engineers
People from Miyagi Prefecture
Living people
1946 births
Hitachi people | Akira Haraguchi | [
"Mathematics"
] | 474 | [
"Pi-related people",
"Pi"
] |
3,135,626 | https://en.wikipedia.org/wiki/High-water%20mark%20%28computer%20security%29 | In the fields of physical security and information security, the high-water mark for access control was introduced by Clark Weissman in 1969. It pre-dates the Bell–LaPadula security model, whose first volume appeared in 1972.
Under high-water mark, any object less than the user's security level can be opened, but the object is relabeled to reflect the highest security level currently open, hence the name.
The practical effect of the high-water mark was a gradual movement of all objects towards the highest security level in the system. If user A is writing a CONFIDENTIAL document, and checks the unclassified dictionary, the dictionary becomes CONFIDENTIAL. Then, when user B is writing a SECRET report and checks the spelling of a word, the dictionary becomes SECRET. Finally, if user C is assigned to assemble the daily intelligence briefing at the TOP SECRET level, reference to the dictionary makes the dictionary TOP SECRET, too.
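A minimal sketch of the relabeling rule makes the drift mechanical (levels, names, and the API are hypothetical; real systems track the set of currently open objects rather than a single session level):

```python
# Minimal sketch of high-water mark relabeling: an object's level floats up
# to the highest security level it has been opened under.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

class HighWaterObject:
    def __init__(self, name, level="UNCLASSIFIED"):
        self.name = name
        self.level = level

    def open_under(self, session_level):
        # An object at or below the session level may be opened...
        if LEVELS[self.level] > LEVELS[session_level]:
            raise PermissionError(f"{self.name} is above {session_level}")
        # ...but it is relabeled to the highest level currently open.
        self.level = session_level

dictionary = HighWaterObject("dictionary")
for user_level in ["CONFIDENTIAL", "SECRET", "TOP SECRET"]:
    dictionary.open_under(user_level)
print(dictionary.level)  # TOP SECRET: the gradual drift described above
```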
Low-water mark
Low-water mark is an extension of the Biba model. The Biba model enforces no-write-up and no-read-down rules, exactly the opposite of the rules in the Bell–LaPadula model. In the low-water mark variant, read-down is permitted, but after the read the subject's label is downgraded to the object's label. It can be classified among floating-label security models.
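The dual rule can be sketched in the same toy style (integrity level names and API are hypothetical):

```python
# Low-water mark sketch: reading down is permitted, but the subject's
# label is demoted to the (lower) integrity label of the object read.
INTEGRITY = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

class LowWaterSubject:
    def __init__(self, name, level="HIGH"):
        self.name = name
        self.level = level

    def read(self, object_level):
        # Read-down allowed; subject label floats down to the object's.
        if INTEGRITY[object_level] < INTEGRITY[self.level]:
            self.level = object_level

proc = LowWaterSubject("proc")
proc.read("LOW")
print(proc.level)  # LOW
```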
See also
Watermark (data synchronization)
References
Computer security models | High-water mark (computer security) | [
"Technology",
"Engineering"
] | 301 | [
"Computer security stubs",
"Computing stubs",
"Computer security models",
"Cybersecurity engineering"
] |
3,135,637 | https://en.wikipedia.org/wiki/Gut%20microbiota | Gut microbiota, gut microbiome, or gut flora are the microorganisms, including bacteria, archaea, fungi, and viruses, that live in the digestive tracts of animals. The gastrointestinal metagenome is the aggregate of all the genomes of the gut microbiota. The gut is the main location of the human microbiome. The gut microbiota has broad impacts, including effects on colonization, resistance to pathogens, maintaining the intestinal epithelium, metabolizing dietary and pharmaceutical compounds, controlling immune function, and even behavior through the gut–brain axis.
The microbial composition of the gut microbiota varies across regions of the digestive tract. The colon contains the highest microbial density of any human-associated microbial community studied so far, representing between 300 and 1000 different species. Bacteria are the largest and, to date, best-studied component, and 99% of gut bacteria come from about 30 or 40 species. About 55% of the dry mass of feces is bacteria. Over 99% of the bacteria in the gut are anaerobes, but in the cecum, aerobic bacteria reach high densities. It is estimated that the human gut microbiota have around a hundred times as many genes as there are in the human genome.
Overview
In humans, the gut microbiota has the highest numbers and species of bacteria compared to other areas of the body. The approximate number of bacteria composing the gut microbiota is about 10¹³–10¹⁴ (10,000 to 100,000 billion). In humans, the gut flora is established at birth and gradually transitions towards a state resembling that of adults by the age of two, coinciding with the development and maturation of the intestinal epithelium and intestinal mucosal barrier. This barrier is essential for supporting a symbiotic relationship with the gut flora while providing protection against pathogenic organisms.
The relationship between some gut microbiota and humans is not merely commensal (a non-harmful coexistence), but rather a mutualistic relationship. Some human gut microorganisms benefit the host by fermenting dietary fiber into short-chain fatty acids (SCFAs), such as acetic acid and butyric acid, which are then absorbed by the host. Intestinal bacteria also play a role in synthesizing certain B vitamins and vitamin K as well as metabolizing bile acids, sterols, and xenobiotics. The SCFAs and other compounds they produce act systemically, much like hormones, and the gut flora itself appears to function like an endocrine organ. Dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditions.
The composition of human gut microbiota changes over time, when the diet changes, and as overall health changes. A systematic review from 2016 examined the preclinical and small human trials that have been conducted with certain commercially available strains of probiotic bacteria and identified those that had the most potential to be useful for certain central nervous system disorders. The Mediterranean diet, rich in vegetables and fiber, also stimulates the activity and growth of bacteria beneficial to brain function.
Classifications
The microbial composition of the gut microbiota varies across the digestive tract. In the stomach and small intestine, relatively few species of bacteria are generally present. Fungi, protists, archaea, and viruses are also present in the gut flora, but less is known about their activities.
Many species in the gut have not been studied outside of their hosts because they cannot be cultured. While there are a small number of core microbial species shared by most individuals, populations of microbes can vary widely. Within an individual, their microbial populations stay fairly constant over time, with some alterations occurring due to changes in lifestyle, diet and age. The Human Microbiome Project has set out to better describe the microbiota of the human gut and other body locations.
The four dominant bacterial phyla in the human gut are Bacillota (Firmicutes), Bacteroidota, Actinomycetota, and Pseudomonadota. Most bacteria belong to the genera Bacteroides, Clostridium, Faecalibacterium, Eubacterium, Ruminococcus, Peptococcus, Peptostreptococcus, and Bifidobacterium. Other genera, such as Escherichia and Lactobacillus, are present to a lesser extent. Species from the genus Bacteroides alone constitute about 30% of all bacteria in the gut, suggesting that this genus is especially important in the functioning of the host.
Fungal genera that have been detected in the gut include Candida, Saccharomyces, Aspergillus, Penicillium, Rhodotorula, Trametes, Pleospora, Sclerotinia, Bullera, and Galactomyces, among others. Rhodotorula is most frequently found in individuals with inflammatory bowel disease while Candida is most frequently found in individuals with hepatitis B cirrhosis and chronic hepatitis B.
Archaea constitute another large class of gut flora which are important in the metabolism of the bacterial products of fermentation.
Industrialization is associated with changes in the microbiota and the reduction of diversity could drive certain species to extinction; in 2018, researchers proposed a biobank repository of human microbiota.
Enterotype
An enterotype is a classification of living organisms based on its bacteriological ecosystem in the human gut microbiome not dictated by age, gender, body weight, or national divisions. There are indications that long-term diet influences enterotype. Three human enterotypes have been proposed, but their value has been questioned.
Composition
Bacteriome
Stomach
Due to the high acidity of the stomach, most microorganisms cannot survive there. The main bacteria of the gastric microbiota belong to five major phyla: Firmicutes, Bacteroidetes, Actinobacteria, Fusobacteriota, and Proteobacteria. The dominant genera are Prevotella, Streptococcus, Veillonella, Rothia, and Haemophilus. The interaction of the pre-existing gastric microbiota with newly introduced H. pylori may influence disease progression. When H. pylori is present, it becomes the dominant member of the microbiota.
Intestines
The small intestine contains relatively few microorganisms due to the proximity and influence of the stomach. Gram-positive cocci and rod-shaped bacteria are the predominant microorganisms found in the small intestine. However, in the distal portion of the small intestine, alkaline conditions support gram-negative bacteria of the Enterobacteriaceae. The bacterial flora of the small intestine aid in a wide range of intestinal functions, providing regulatory signals that enable the development and utility of the gut. Overgrowth of bacteria in the small intestine can lead to intestinal failure. The large intestine, in turn, contains the largest bacterial ecosystem in the human body. About 99% of the large intestine and feces flora are made up of obligate anaerobes such as Bacteroides and Bifidobacterium. Factors that disrupt the microorganism population of the large intestine include antibiotics, stress, and parasites.
Bacteria make up most of the flora in the colon and account for 60% of fecal nitrogen. This makes feces an ideal source of gut flora for tests and experiments: nucleic acid is extracted from fecal specimens, and bacterial 16S rRNA gene sequences are generated with bacterial primers. This form of testing is also often preferable to more invasive techniques, such as biopsies.
Five phyla dominate the intestinal microbiota: Bacteroidota, Bacillota (Firmicutes), Actinomycetota, Pseudomonadota, and Verrucomicrobiota, with Bacteroidota and Bacillota constituting 90% of the composition. Somewhere between 300 and 1000 different species live in the gut, with most estimates at about 500. However, it is probable that 99% of the bacteria come from about 30 or 40 species, with Faecalibacterium prausnitzii (phylum Bacillota) being the most common species in healthy adults.
Research suggests that the relationship between gut flora and humans is not merely commensal (a non-harmful coexistence), but rather is a mutualistic, symbiotic relationship. Though people can survive with no gut flora, the microorganisms perform a host of useful functions, such as fermenting unused energy substrates, training the immune system via end products of metabolism like propionate and acetate, preventing growth of harmful species, regulating the development of the gut, producing vitamins for the host (such as biotin and vitamin K), and producing hormones to direct the host to store fats. Extensive modification and imbalances of the gut microbiota and its microbiome or gene collection are associated with obesity. However, in certain conditions, some species are thought to be capable of causing disease by causing infection or increasing cancer risk for the host.
Mycobiome
Fungi and protists also make up a part of the gut flora, but less is known about their activities.
Due to the prevalence of fungi in the natural environment, determining which genera and species are permanent members of the gut mycobiome is difficult. Research is underway as to whether Penicillium is a permanent or transient member of the gut flora, obtained from dietary sources such as cheese, though several species in the genus are known to survive at temperatures around 37°C, about the same as the core body temperature. Saccharomyces cerevisiae, brewer's yeast, is known to reach the intestines after being ingested and can be responsible for the condition auto-brewery syndrome in cases where it is overabundant, while Candida albicans is likely a permanent member, and is believed to be acquired at birth through vertical transmission.
Virome
The human virome is mostly bacteriophages.
Variation
Age
There are common patterns of microbiome composition evolution during life. In general, the diversity of microbiota composition of fecal samples is significantly higher in adults than in children, although interpersonal differences are higher in children than in adults. Much of the maturation of microbiota into an adult-like configuration happens during the first three years of life.
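Diversity statements like this one are typically quantified with an index such as the Shannon index, H = −Σ pᵢ ln pᵢ, computed over the relative abundances pᵢ of the taxa in a sample. A minimal sketch with invented counts:

```python
from math import log

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over taxon abundances."""
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)

# Made-up genus-level read counts for illustration only.
adult_sample = [420, 310, 150, 80, 25, 10, 5]   # more, more evenly spread taxa
infant_sample = [900, 80, 15, 5]                # fewer, more skewed taxa
print(shannon_index(adult_sample) > shannon_index(infant_sample))  # True
```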
As the microbiome composition changes, so does the composition of bacterial proteins produced in the gut. In adult microbiomes, a high prevalence of enzymes involved in fermentation, methanogenesis and the metabolism of arginine, glutamate, aspartate and lysine have been found. In contrast, in infant microbiomes the dominant enzymes are involved in cysteine metabolism and fermentation pathways.
Geography
Gut microbiome composition depends on the geographic origin of populations. Variations in a trade-off of Prevotella, the representation of the urease gene, and the representation of genes encoding glutamate synthase/degradation or other enzymes involved in amino acids degradation or vitamin biosynthesis show significant differences between populations from the US, Malawi, or Amerindian origin.
The US population has a high representation of enzymes encoding the degradation of glutamine and enzymes involved in vitamin and lipoic acid biosynthesis; whereas Malawi and Amerindian populations have a high representation of enzymes encoding glutamate synthase and they also have an overrepresentation of α-amylase in their microbiomes. As the US population has a diet richer in fats than Amerindian or Malawian populations which have a corn-rich diet, the diet is probably the main determinant of the gut bacterial composition.
Further studies have indicated a large difference in the composition of microbiota between European and rural African children. The fecal bacteria of children from Florence were compared to that of children from the small rural village of Boulpon in Burkina Faso. The diet of a typical child living in this village is largely lacking in fats and animal proteins and rich in polysaccharides and plant proteins. The fecal bacteria of European children were dominated by Firmicutes and showed a marked reduction in biodiversity, while the fecal bacteria of the Boulpon children was dominated by Bacteroidetes. The increased biodiversity and different composition of the gut microbiome in African populations may aid in the digestion of normally indigestible plant polysaccharides and also may result in a reduced incidence of non-infectious colonic diseases.
On a smaller scale, it has been shown that sharing numerous common environmental exposures in a family is a strong determinant of individual microbiome composition. This effect has no genetic influence and it is consistently observed in culturally different populations.
Malnourishment
Malnourished children have less mature and less diverse gut microbiota than healthy children, and changes in the microbiome associated with nutrient scarcity can in turn be a pathophysiological cause of malnutrition. Malnourished children also typically have more potentially pathogenic gut flora, and more yeast in their mouths and throats. Altering diet may lead to changes in gut microbiota composition and diversity.
Race and ethnicity
Researchers with the American Gut Project and Human Microbiome Project found that twelve microbe families varied in abundance based on the race or ethnicity of the individual. The strength of these associations is limited by the small sample size: the American Gut Project collected data from 1,375 individuals, 90% of whom were white. The Healthy Life in an Urban Setting (HELIUS) study in Amsterdam found that those of Dutch ancestry had the highest level of gut microbiota diversity, while those of South Asian and Surinamese descent had the lowest diversity. The study results suggested that individuals of the same race or ethnicity have more similar microbiomes than individuals of different racial backgrounds.
Socioeconomic status
As of 2020, at least two studies have demonstrated a link between an individual's socioeconomic status (SES) and their gut microbiota. A study in Chicago found that individuals in higher SES neighborhoods had greater microbiota diversity. People from higher SES neighborhoods also had more abundant Bacteroides bacteria. Similarly, a study of twins in the United Kingdom found that higher SES was also linked with a greater gut diversity.
Antibiotic usage
A 2023 study suggests that antibiotics, especially those used to treat broad-spectrum bacterial infections, have negative effects on the gut microbiota. The study also states that many experts on intestinal health are concerned that antibiotic usage has reduced the diversity of the gut microbiota, that many strains are lost, and that any re-emergence of the lost bacteria is gradual and long-term.
Acquisition in human infants
The establishment of a gut flora is crucial to the health of an adult, as well as the functioning of the gastrointestinal tract. In humans, a gut flora similar to an adult's is formed within one to two years of birth as microbiota are acquired through parent-to-child transmission and transfer from food, water, and other environmental sources.
The traditional view of the gastrointestinal tract of a normal fetus is that it is sterile, although this view has been challenged in the past few years. Multiple lines of evidence have begun to emerge that suggest there may be bacteria in the intrauterine environment. In humans, research has shown that microbial colonization may occur in the fetus with one study showing Lactobacillus and Bifidobacterium species were present in placental biopsies. Several rodent studies have demonstrated the presence of bacteria in the amniotic fluid and placenta, as well as in the meconium of babies born by sterile cesarean section. In another study, researchers administered a culture of bacteria orally to pregnant mice, and detected the bacteria in the offspring, likely resulting from transmission between the digestive tract and amniotic fluid via the blood stream. However, researchers caution that the source of these intrauterine bacteria, whether they are alive, and their role, is not yet understood.
During birth and rapidly thereafter, bacteria from the mother and the surrounding environment colonize the infant's gut. The exact sources of bacteria are not fully understood, but may include the birth canal, other people (parents, siblings, hospital workers), breastmilk, food, and the general environment with which the infant interacts. Research has shown that the microbiome of babies born vaginally differs significantly from that of babies delivered by caesarean section and that vaginally born babies got most of their gut bacteria from their mother, while the microbiota of babies born by caesarean section had more bacteria associated with hospital environments.
During the first year of life, the composition of the gut flora is generally simple, changes a great deal over time, and is not the same across individuals. The initial bacterial population generally consists of facultative anaerobic organisms; investigators believe that these initial colonizers decrease the oxygen concentration in the gut, which in turn allows obligately anaerobic bacteria such as Bacteroidota, Actinomycetota, and Bacillota to become established and thrive. Breast-fed babies become dominated by bifidobacteria, possibly due to the contents of bifidobacterial growth factors in breast milk, and by the fact that breast milk carries prebiotic components, allowing for healthy bacterial growth. Breast milk also contains higher levels of Immunoglobulin A (IgA) to help with the tolerance and regulation of the baby's immune system. In contrast, the microbiota of formula-fed infants is more diverse, with high numbers of Enterobacteriaceae, enterococci, bifidobacteria, Bacteroides, and clostridia.
Caesarean section, antibiotics, and formula feeding may alter the gut microbiome composition. Children treated with antibiotics have less stable, and less diverse floral communities. Caesarean sections have been shown to be disruptive to mother-offspring transmission of bacteria, which impacts the overall health of the offspring by raising risks of disease such as celiac disease, asthma, and type 1 diabetes. This further evidences the importance of a healthy gut microbiome. Various methods of microbiome restoration are being explored, typically involving exposing the infant to maternal vaginal contents, and oral probiotics.
Functions
When the study of gut flora began in 1995, it was thought to have three key roles: direct defense against pathogens, fortification of host defense by its role in developing and maintaining the intestinal epithelium and inducing antibody production there, and metabolizing otherwise indigestible compounds in food. Subsequent work discovered its role in training the developing immune system, and yet further work focused on its role in the gut–brain axis.
Direct inhibition of pathogens
The gut flora community plays a direct role in defending against pathogens by fully colonising the space, making use of all available nutrients, and secreting compounds that kill or inhibit unwelcome organisms that would compete with it for nutrients. Different strains of gut bacteria also cause the production of different cytokines, the chemical compounds the immune system produces to initiate the inflammatory response against infections. Disruption of the gut flora allows competing organisms such as Clostridioides difficile, otherwise kept in abeyance, to become established.
Development of enteric protection and immune system
In humans, a gut flora similar to an adult's is formed within one to two years of birth. As the gut flora gets established, the lining of the intestines – the intestinal epithelium and the intestinal mucosal barrier that it secretes – develops as well, in a way that is tolerant to, and even supportive of, commensalistic microorganisms to a certain extent and also provides a barrier to pathogenic ones. Specifically, goblet cells that produce the mucosa proliferate, and the mucosa layer thickens, providing an outside mucosal layer in which "friendly" microorganisms can anchor and feed, and an inner layer that even these organisms cannot penetrate. Additionally, the development of gut-associated lymphoid tissue (GALT), which forms part of the intestinal epithelium and which detects and reacts to pathogens, occurs during the time that the gut flora itself develops and becomes established. The GALT that develops is tolerant to gut flora species, but not to other microorganisms. GALT also normally becomes tolerant to food to which the infant is exposed, as well as to digestive products of food and to the gut flora's metabolites (molecules formed from metabolism) produced from food.
The human immune system creates cytokines that can drive the immune system to produce inflammation in order to protect itself, and that can tamp down the immune response to maintain homeostasis and allow healing after insult or injury. Different bacterial species that appear in gut flora have been shown to be able to drive the immune system to create cytokines selectively; for example Bacteroides fragilis and some Clostridia species appear to drive an anti-inflammatory response, while some segmented filamentous bacteria drive the production of inflammatory cytokines. Gut flora can also regulate the production of antibodies by the immune system. One function of this regulation is to cause B cells to class switch to IgA. In most cases B cells need activation from T helper cells to induce class switching; however, in another pathway, gut flora cause NF-kB signaling by intestinal epithelial cells which results in further signaling molecules being secreted. These signaling molecules interact with B cells to induce class switching to IgA. IgA is an important type of antibody that is used in mucosal environments like the gut. It has been shown that IgA can help diversify the gut community and helps in getting rid of bacteria that cause inflammatory responses. Ultimately, IgA maintains a healthy environment between the host and gut bacteria. These cytokines and antibodies can have effects outside the gut, in the lungs and other tissues.
The immune system can also be altered due to the gut bacteria's ability to produce metabolites that can affect cells in the immune system. For example short-chain fatty acids (SCFA) can be produced by some gut bacteria through fermentation. SCFAs stimulate a rapid increase in the production of innate immune cells like neutrophils, basophils and eosinophils. These cells are part of the innate immune system that try to limit the spread of infection.
Metabolism
Without gut flora, the human body would be unable to utilize some of the undigested carbohydrates it consumes, because some types of gut flora have enzymes that human cells lack for breaking down certain polysaccharides. Rodents raised in a sterile environment and lacking in gut flora need to eat 30% more calories just to remain the same weight as their normal counterparts. Carbohydrates that humans cannot digest without bacterial help include certain starches, fiber, oligosaccharides, sugars that the body fails to digest and absorb (such as lactose in the case of lactose intolerance), and sugar alcohols; the gut flora also break down mucus produced by the gut and proteins.
Bacteria turn carbohydrates they ferment into short-chain fatty acids by a form of fermentation called saccharolytic fermentation. Products include acetic acid, propionic acid and butyric acid. These materials can be used by host cells, providing a major source of energy and nutrients. Gases (which are involved in signaling and may cause flatulence) and organic acids, such as lactic acid, are also produced by fermentation. Acetic acid is used by muscle, propionic acid facilitates liver production of ATP, and butyric acid provides energy to gut cells.
Gut flora also synthesize vitamins like biotin and folate, and facilitate absorption of dietary minerals, including magnesium, calcium, and iron. Methanobrevibacter smithii is unique because it is not a species of bacteria, but rather a member of domain Archaea, and is the most abundant methane-producing archaeal species in the human gastrointestinal microbiota.
Gut microbiota also serve as a source of vitamins K and B12, which the body either does not produce or produces only in small amounts.
Cellulose degradation
Bacteria that degrade cellulose (such as Ruminococcus) are prevalent among great apes, ancient human societies, hunter-gatherer communities, and even modern rural populations. However, they are rare in industrialized societies. Human-associated strains have acquired genes that can degrade specific plant fibers such as maize, rice, and wheat. Bacterial strains found in primates can also degrade chitin, a polymer abundant in insects, which are part of the diet of many nonhuman primates. The decline of these bacteria in the human gut was likely influenced by the shift toward western lifestyles.
Pharmacomicrobiomics
The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial cells in the human body (over 100 trillion) greatly outnumbers Homo sapiens cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile.
Apart from carbohydrates, gut microbiota can also metabolize other xenobiotics such as drugs, phytochemicals, and food toxicants. More than 30 drugs have been shown to be metabolized by gut microbiota. The microbial metabolism of drugs can sometimes inactivate the drug.
Contribution to drug metabolism
The gut microbiota is an enriched community that contains diverse genes with huge biochemical capabilities to modify drugs, especially those taken by mouth. Gut microbiota can affect drug metabolism via direct and indirect mechanisms. The direct mechanism is mediated by microbial enzymes that can modify the chemical structure of the administered drug. Conversely, the indirect pathway is mediated by microbial metabolites that affect the expression of host metabolizing enzymes such as cytochrome P450. The effects of the gut microbiota on the pharmacokinetics and bioavailability of drugs were first investigated decades ago, and these effects can be varied: the microbiota can activate inactive prodrugs such as lovastatin, inactivate active drugs such as digoxin, or induce drug toxicity, as with irinotecan. Since then, the impact of the gut microbiota on the pharmacokinetics of many drugs has been studied extensively.
The human gut microbiota plays a crucial role in modulating the effect of the administered drugs on the human. Directly, gut microbiota can synthesize and release a series of enzymes with the capability to metabolize drugs such as microbial biotransformation of L-dopa by decarboxylase and dehydroxylase enzymes. On the contrary, gut microbiota may also alter the metabolism of the drugs by modulating the host drug metabolism. This mechanism can be mediated by microbial metabolites or by modifying host metabolites which in turn change the expression of host metabolizing enzymes.
A large number of studies have demonstrated the metabolism of over 50 drugs by the gut microbiota. For example, lovastatin (a cholesterol-lowering agent), a lactone prodrug, is partially activated by the human gut microbiota, forming active hydroxy-acid metabolites. Conversely, digoxin (a drug used to treat congestive heart failure) is inactivated by a member of the gut microbiota, Eggerthella lenta, which carries a cytochrome-encoding operon that is up-regulated by digoxin and associated with digoxin inactivation. Gut microbiota can also modulate the efficacy and toxicity of chemotherapeutic agents such as irinotecan. This effect derives from microbiome-encoded β-glucuronidase enzymes, which regenerate the active form of irinotecan in the gut, causing gastrointestinal toxicity.
Secondary metabolites
This microbial community in the gut has a huge biochemical capability to produce distinct secondary metabolites that are sometimes produced from the metabolic conversion of dietary foods such as fibers, endogenous biological compounds such as indole or bile acids. Microbial metabolites especially short chain fatty acids (SCFAs) and secondary bile acids (BAs) play important roles for the human in health and disease states.
One of the most important classes of bacterial metabolites produced by the gut microbiota is the secondary bile acids (BAs). These metabolites are produced by bacterial biotransformation of the primary bile acids cholic acid (CA) and chenodeoxycholic acid (CDCA) into the secondary bile acids deoxycholic acid (DCA) and lithocholic acid (LCA), respectively. Primary bile acids, which are synthesized by hepatocytes and stored in the gall bladder, possess hydrophobic character; they are subsequently metabolized by the gut microbiota into secondary metabolites with increased hydrophobicity. Bile salt hydrolases (BSHs), which are conserved across gut microbiota phyla such as Bacteroidetes, Firmicutes, and Actinobacteria, are responsible for the first step of secondary bile acid metabolism. Secondary bile acids such as DCA and LCA have been demonstrated to inhibit both Clostridioides difficile germination and outgrowth.
Dysbiosis
The gut microbiota is important for maintaining homeostasis in the intestine. Development of intestinal cancer is associated with an imbalance in the natural microflora (dysbiosis). The secondary bile acid deoxycholic acid is associated with alterations of the microbial community that lead to increased intestinal carcinogenesis. Increased exposure of the colon to secondary bile acids resulting from dysbiosis can cause DNA damage, and such damage can produce carcinogenic mutations in cells of the colon. The high density of bacteria in the colon (about 10¹² per ml) that are subject to dysbiosis, compared to the relatively low density in the small intestine (about 10² per ml), may account for the greater than 10-fold higher incidence of cancer in the colon compared to the small intestine.
Gut–brain axis
The gut microbiota contributes to digestion and immune modulation, as it plays a role in the gut–brain axis, where microbial metabolites such as short-chain fatty acids and neurotransmitters influence brain function and behavior. The gut–brain axis is the biochemical signaling that takes place between the gastrointestinal tract and the central nervous system. That term has been expanded to include the role of the gut flora in the interplay; the term "microbiome–gut–brain axis" is sometimes used to describe paradigms explicitly including the gut flora. Broadly defined, the gut–brain axis includes the central nervous system, neuroendocrine and neuroimmune systems including the hypothalamic–pituitary–adrenal axis (HPA axis), sympathetic and parasympathetic arms of the autonomic nervous system including the enteric nervous system, the vagus nerve, and the gut microbiota. Studies show links between gut dysbiosis and mental health conditions, indicating a complex interaction that impacts mood and cognitive functions.
A systematic review from 2016 examined the preclinical and small human trials that have been conducted with certain commercially available strains of probiotic bacteria and found that among those tested, Bifidobacterium and Lactobacillus genera (B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei), had the most potential to be useful for certain central nervous system disorders.
Alterations in microbiota balance
Effects of antibiotic use
Altering the numbers of gut bacteria, for example by taking broad-spectrum antibiotics, may affect the host's health and ability to digest food. Antibiotics can cause antibiotic-associated diarrhea by irritating the bowel directly, changing the levels of microbiota, or allowing pathogenic bacteria to grow. Another harmful effect of antibiotics is the increase in numbers of antibiotic-resistant bacteria found after their use, which, when they invade the host, cause illnesses that are difficult to treat with antibiotics.
Changing the numbers and species of gut microbiota can reduce the body's ability to ferment carbohydrates and metabolize bile acids and may cause diarrhea. Carbohydrates that are not broken down may absorb too much water and cause runny stools, or lack of SCFAs produced by gut microbiota could cause diarrhea.
A reduction in levels of native bacterial species also disrupts their ability to inhibit the growth of harmful species such as C. difficile and Salmonella Kedougou, and these species can proliferate, though their overgrowth may be incidental and not the true cause of diarrhea. Emerging treatment protocols for C. difficile infections involve fecal microbiota transplantation of donor feces (see Fecal transplant). Initial reports of treatment describe success rates of 90%, with few side effects. Efficacy is speculated to result from restoring the balance of the Bacteroides and Firmicutes classes of bacteria.
The composition of the gut microbiome also changes in severe illnesses, due not only to antibiotic use but also to such factors as ischemia of the gut, failure to eat, and immune compromise. Negative effects from this have led to interest in selective digestive tract decontamination, a treatment to kill only pathogenic bacteria and allow the re-establishment of healthy ones.
Antibiotics alter the population of the microbiota in the gastrointestinal tract, and this may change the intra-community metabolic interactions, modify caloric intake by using carbohydrates, and globally affect host metabolic, hormonal, and immune homeostasis.
There is reasonable evidence that taking probiotics containing Lactobacillus species may help prevent antibiotic-associated diarrhea and that taking probiotics with Saccharomyces (e.g., Saccharomyces boulardii) may help to prevent Clostridioides difficile infection following systemic antibiotic treatment.
Pregnancy
The gut microbiota of a woman changes as pregnancy advances, with the changes similar to those seen in metabolic syndromes such as diabetes. The change in gut microbiota causes no ill effects. The newborn's gut microbiota resemble the mother's first-trimester samples. The diversity of the microbiome decreases from the first to third trimester, as the numbers of certain species go up.
Probiotics, prebiotics, synbiotics, and pharmabiotics
Probiotics contain live microorganisms. When consumed, they are believed to provide health benefits by altering the microbiome composition. Current research explores using probiotics as a way to restore the microbial balance of the intestine by stimulating the immune system and inhibiting pro-inflammatory cytokines.
With regard to gut microbiota, prebiotics are typically non-digestible fiber compounds that pass undigested through the upper part of the gastrointestinal tract and stimulate the growth or activity of advantageous gut flora by acting as a substrate for them.
Synbiotics refers to food ingredients or dietary supplements combining probiotics and prebiotics in a form of synergism.
The term "pharmabiotics" is used in various ways, to mean: pharmaceutical formulations (standardized manufacturing that can obtain regulatory approval as a drug) of probiotics, prebiotics, or synbiotics; probiotics that have been genetically engineered or otherwise optimized for best performance (shelf life, survival in the digestive tract, etc.); and the natural products of gut flora metabolism (vitamins, etc.).
There is some evidence that treatment with some probiotic strains of bacteria may be effective in irritable bowel syndrome, abdominal bloating and chronic idiopathic constipation. Those organisms most likely to result in a decrease of symptoms have included:
Bifidobacterium breve
Bifidobacterium infantis
Enterococcus faecium
Lactobacillus plantarum
Lactobacillus reuteri
Lactobacillus rhamnosus
Lactobacillus salivarius
Propionibacterium freudenreichii
Saccharomyces boulardii
Escherichia coli Nissle 1917
Streptococcus thermophilus
Fecal flotation
Feces of about 10–15% of people consistently float in toilet water ('floaters'), while the rest produce feces that sink ('sinkers'); it is the production of gas that causes feces to float. While conventional mice often produce 'floaters', gnotobiotic germ-free mice with no gut microbiota (bred in germ-free isolators) produce 'sinkers'. Colonization of germ-free mice with gut microbiota leads to the transformation of food into microbial biomass and the enrichment of multiple gas-producing bacterial species, turning the 'sinkers' into 'floaters'.
Research
Whether non-antibiotic drugs impact human gut-associated bacteria was tested by in vitro analysis of more than 1,000 marketed drugs against 40 gut bacterial strains; 24% of the drugs inhibited the growth of at least one of the bacterial strains.
Role in disease
Bacteria in the digestive tract can contribute to and be affected by disease in various ways. The presence or overabundance of some kinds of bacteria may contribute to inflammatory disorders such as inflammatory bowel disease. Additionally, metabolites from certain members of the gut flora may influence host signalling pathways, contributing to disorders such as obesity and colon cancer. Some gut bacteria may also cause infections and sepsis, for example when they are allowed to pass from the gut into the rest of the body.
Ulcers
Helicobacter pylori infection can initiate formation of stomach ulcers when the bacteria penetrate the stomach epithelial lining, then causing an inflammatory phagocytotic response. In turn, the inflammation damages parietal cells which release excessive hydrochloric acid into the stomach and produce less of the protective mucus. Injury to the stomach lining, leading to ulcers, develops when gastric acid overwhelms the defensive properties of cells and inhibits endogenous prostaglandin synthesis, reduces mucus and bicarbonate secretion, reduces mucosal blood flow, and lowers resistance to injury. Reduced protective properties of the stomach lining increase vulnerability to further injury and ulcer formation by stomach acid, pepsin, and bile salts.
Bowel perforation
Normally-commensal bacteria can harm the host if they extrude from the intestinal tract. Translocation, which occurs when bacteria leave the gut through its mucosal lining, can occur in a number of different diseases. If the gut is perforated, bacteria invade the interstitium, causing a potentially fatal infection.
Inflammatory bowel diseases
The two main types of inflammatory bowel diseases, Crohn's disease and ulcerative colitis, are chronic inflammatory disorders of the gut; the causes of these diseases are unknown and issues with the gut flora and its relationship with the host have been implicated in these conditions. Additionally, it appears that interactions of gut flora with the gut–brain axis have a role in IBD, with physiological stress mediated through the hypothalamic–pituitary–adrenal axis driving changes to intestinal epithelium and the gut flora in turn releasing factors and metabolites that trigger signaling in the enteric nervous system and the vagus nerve.
The diversity of gut flora appears to be significantly diminished in people with inflammatory bowel diseases compared to healthy people; additionally, in people with ulcerative colitis, Proteobacteria and Actinobacteria appear to dominate; in people with Crohn's, Enterococcus faecium and several Proteobacteria appear to be over-represented.
There is reasonable evidence that correcting gut flora imbalances by taking probiotics with Lactobacilli and Bifidobacteria can reduce visceral pain and gut inflammation in IBD.
Irritable bowel syndrome
Irritable bowel syndrome is a result of stress and chronic activation of the HPA axis; its symptoms include abdominal pain, changes in bowel movements, and an increase in proinflammatory cytokines. Overall, studies have found that the luminal and mucosal microbiota are changed in individuals with irritable bowel syndrome, and these changes can relate to the type of irritation, such as diarrhea or constipation. Also, there is a decrease in the diversity of the microbiome, with low levels of fecal Lactobacilli and Bifidobacteria, high levels of facultative anaerobic bacteria such as Escherichia coli, and increased Firmicutes:Bacteroidetes ratios.
Asthma
With asthma, two hypotheses have been posed to explain its rising prevalence in the developed world. The hygiene hypothesis posits that children in the developed world are not exposed to enough microbes and thus may harbor a lower prevalence of specific bacterial taxa that play protective roles. The second hypothesis focuses on the Western pattern diet, which lacks whole grains and fiber and has an overabundance of simple sugars. Both hypotheses converge on the role of short-chain fatty acids (SCFAs) in immunomodulation. These bacterial fermentation metabolites are involved in immune signalling that prevents the triggering of asthma, and lower SCFA levels are associated with the disease. Lacking protective genera such as Lachnospira, Veillonella, Rothia and Faecalibacterium has been linked to reduced SCFA levels. Further, SCFAs are the product of bacterial fermentation of fiber, which is low in the Western pattern diet. SCFAs offer a link between gut flora and immune disorders, and as of 2016, this was an active area of research. Similar hypotheses have also been posited for the rise of food and other allergies.
Diabetes mellitus type 1
The connection between the gut microbiota and diabetes mellitus type 1 has also been linked to SCFAs, such as butyrate and acetate. Diets yielding butyrate and acetate from bacterial fermentation show increased Treg expression. Treg cells downregulate effector T cells, which in turn reduces the inflammatory response in the gut. Butyrate is an energy source for colon cells; butyrate-yielding diets thus decrease gut permeability by providing sufficient energy for the formation of tight junctions. Additionally, butyrate has also been shown to decrease insulin resistance, suggesting that gut communities low in butyrate-producing microbes may increase the chance of acquiring diabetes mellitus type 2. Butyrate-yielding diets may also have potential colorectal cancer suppression effects.
Obesity and metabolic syndrome
The gut flora have been implicated in obesity and metabolic syndrome due to their key role in the digestive process; the Western pattern diet appears to drive and maintain changes in the gut flora that in turn change how much energy is derived from food and how that energy is used. One aspect of a healthy diet that is often lacking in the Western pattern diet is fiber and other complex carbohydrates that a healthy gut flora requires to flourish; changes to gut flora in response to a Western pattern diet appear to increase the amount of energy generated by the gut flora, which may contribute to obesity and metabolic syndrome. There is also evidence that the microbiota can influence eating behaviour based on their own preferences, which can lead to the host consuming more food, eventually resulting in obesity. It has generally been observed that with higher gut microbiome diversity, the microbiota spend energy and resources on competing with other microbiota and less on manipulating the host. The opposite is seen with lower gut microbiome diversity, where these microbiota may work together to create host food cravings.
Additionally, the liver plays a dominant role in blood glucose homeostasis by maintaining a balance between the uptake and storage of glucose through the metabolic pathways of glycogenesis and gluconeogenesis. Intestinal lipids regulate glucose homeostasis via a gut–brain–liver axis: the direct administration of lipids into the upper intestine increases long chain fatty acyl-coenzyme A (LCFA-CoA) levels in the upper intestine and suppresses glucose production. Subdiaphragmatic vagotomy or gut vagal deafferentation interrupts the neural connection between the brain and the gut and blocks the ability of upper intestinal lipids to inhibit glucose production. The gut–brain–liver axis and gut microbiota composition can regulate glucose homeostasis in the liver and provide potential therapeutic methods to treat obesity and diabetes.
Just as gut flora can function in a feedback loop that can drive the development of obesity, there is evidence that restricting intake of calories (i.e., dieting) can drive changes to the composition of the gut flora.
Other animals
The composition of the human gut microbiome is similar to that of the other great apes. However, humans' gut biota has decreased in diversity and changed in composition since our evolutionary split from Pan. Humans display increases in Bacteroidetes, a bacterial phylum associated with diets high in animal protein and fat, and decreases in Methanobrevibacter and Fibrobacter, groups that ferment complex plant polysaccharides. These changes are the result of the combined dietary, genetic, and cultural changes humans have undergone since evolutionary divergence from Pan.
In addition to humans and vertebrates, some insects also have complex and diverse gut microbiota that play key nutritional roles. Microbial communities associated with termites can constitute a majority of the weight of the individuals and perform important roles in the digestion of lignocellulose and in nitrogen fixation. Disruption of the gut microbiota of termites using agents like antibiotics or boric acid (a common agent used in preventative treatment) causes severe damage to digestive function and leads to the rise of opportunistic pathogens. These communities are host-specific, and closely related insect species have comparably similar gut microbiota compositions. In cockroaches, gut microbiota have been shown to assemble in a deterministic fashion, irrespective of the inoculum; the reason for this host-specific assembly remains unclear. Bacterial communities associated with insects like termites and cockroaches are determined by a combination of forces, primarily diet, but there is some indication that host phylogeny may also play a role in the selection of lineages.
For more than 51 years it has been known that administering low doses of antibacterial agents promotes growth in farm animals, increasing weight gain.
In a study carried out on mice, the ratio of Firmicutes and Lachnospiraceae was significantly elevated in animals treated with subtherapeutic doses of different antibiotics. By analyzing the caloric content of feces and the concentration of short-chain fatty acids (SCFAs) in the GI tract, it was concluded that the changes in the composition of the microbiota led to an increased capacity to extract calories from otherwise indigestible constituents and to an increased production of SCFAs. These findings provide evidence that antibiotics perturb not only the composition of the GI microbiome but also its metabolic capabilities, specifically with respect to SCFAs.
See also
Colonisation resistance
List of human flora
List of microbiota species of the lower reproductive tract of women
Skin flora
Verotoxin-producing Escherichia coli
Notes
References
Further reading
Review articles
Bacteriology
Digestive system
Bacillota
Environmental microbiology
Microbiomes | Gut microbiota | [
"Biology",
"Environmental_science"
] | 10,262 | [
"Digestive system",
"Organ systems",
"Microbiomes",
"Environmental microbiology",
"Gut flora"
] |
3,135,786 | https://en.wikipedia.org/wiki/Consultant%20pharmacist | A consultant pharmacist is a pharmacist who works as a consultant providing expert advice on clinical pharmacy, academic pharmacy or practice, public health pharmacy, industrial pharmacy, community pharmacy or practice, pharmaceutical analysis etc., regarding the safe use and production of medications or on the provision of pharmaceutical services to medical institutions, hospitals, universities, research institutions, medical practices and individual patients.
Australia
In Australia, a consultant pharmacist has historically referred to a pharmacist accredited to access funding to be remunerated for providing Residential Medication Management Reviews and Home Medication Reviews.
These pharmacists undergo a credentialing process, historically referred to as accreditation, and are then able to access the funding to perform these roles. The major accreditation organisation, known as the Australian Association of Consultant Pharmacy, was disbanded in 2022. The Pharmaceutical Society of Australia, the Society of Hospital Pharmacists of Australia and the Australian College of Pharmacy (owned by the Queensland branch of the Pharmacy Guild of Australia) are now the three organisations that provide credentialing for pharmacists to undertake domiciliary medication management reviews.
The Australian Pharmacy Council will develop Aged Care Accreditation Standards in 2023 for pharmacists working in residential aged care settings and undertaking medication management reviews. These standards are being developed in response to a series of research papers published by the Consultant Pharmacists' Services Research Network (COHERENT) that has found inconsistencies in the delivery of these services and the preparedness of pharmacists generally to move into these settings.
United States
In the US, a consultant pharmacist focuses on reviewing and managing the medication regimens of patients, particularly those in institutional settings such as nursing homes. Consultant pharmacists ensure their patients’ medications are appropriate, effective, as safe as possible and used correctly; and identify, resolve, and prevent medication-related problems that may interfere with the goals of therapy.
The demand for consultant pharmacists is on the rise. Licensing and accrediting agencies such as the Centers for Medicare & Medicaid Services (CMS), the Accreditation Association for Ambulatory Health Care (AAAHC), The Joint Commission (JC), and individual state licensing bodies encourage healthcare facilities to use consultant pharmacists.
Consultants may specialize in one of the following areas: Regulatory, Quality, Technical, or Clinical.
United Kingdom
In the UK's NHS, the term consultant pharmacist refers to a pharmacist who has advanced roles in patient care, research and education in a specific medical speciality or expert area of practice.
The Department of Health for England produced guidance in 2005 which described the role of the Consultant Pharmacist which is distinct from other roles in England and internationally. The posts are intended to be innovative new posts that will help improve patient care by retaining clinical excellence within the NHS and strengthening professional leadership. The consultant pharmacist posts have been created to provide a dynamic link between clinical practice and service development to support new models for delivering patient care. The title consultant pharmacist should only apply to approved posts that meet the principles, set out in the guidance, around four main functions:
Expert practice, ensuring that the highest level of pharmaceutical expertise is available to those patients who need it and making the best use of high level pharmacy skills in patient care.
Research, evaluation and service development, playing a crucial role in addressing the need to increase research capacity and to develop a workforce that is research aware as well as contributing to audit and service evaluation.
Education, mentoring and overview of practice, with a clear role in working with higher education institutions (HEIs), undertaking teaching in their field of practice and work to enhance links between practice, professional bodies and HEIs.
Professional leadership. Consultant pharmacists will develop and identify best practice working with advanced practitioners, professional managers of pharmaceutical services, and other relevant healthcare professionals to achieve successful outcomes. They will be acknowledged sources of expertise, and will contribute to the development of service strategies, which will drive change across health and social care.
The guidance recommends that the title consultant pharmacist not be conferred on individuals purely in recognition of innovative or excellent practice, but only on those practitioners who meet the required competencies for the post. In the NHS, the posts, created within or across NHS organisations, are approved by Strategic Health Authorities (or clusters of SHAs). The SHAs provide approval panels for ratification of consultant posts to ensure that business plans match the spirit of this guidance and that posts are sustainable, equitable and transferable across the NHS.
The competency requirements for consultant pharmacists are drawn from the Advanced and Consultant Level Competency Framework designed by the Competency Development and Evaluation Group (CoDEG, see www.codeg.org) which is divided into six capability (competency) clusters. Postholders are required to demonstrate:
A majority of competencies in each of the expert professional practice, building working relationships and leadership clusters at the highest level (mastery)
and
A majority of the competencies in each of the management, education, training and development and research & evaluation clusters at the intermediate level (excellence).
Work is underway to make explicit links between the Competency Framework and the NHS Knowledge and Skills Framework. The role of consultant pharmacists in the NHS is evolving and, although many posts are currently based in hospital practice, the role is developing in primary care.
See also
Classification of Pharmaco-Therapeutic Referrals
History of pharmacy
References
External links
American Society of Consultant Pharmacists
Australian Association of Consultant Pharmacy
Pharmacy
Health care occupations
Consulting occupations | Consultant pharmacist | [
"Chemistry"
] | 1,143 | [
"Pharmacology",
"Pharmacy"
] |
3,135,814 | https://en.wikipedia.org/wiki/Telecommunications%20systems%20management | Telecommunications systems management (Telecomm or TSM for short, also Telecommunication systems, Telecommunications management, Network management) is an interdisciplinary area of study offered at some universities to fill the need for a liaison between the technical aspect and the business aspect of telecommunications. At Murray State University it has been regarded as a half-and-half program, half business and half networking classes with the option to specialize in certain aspects in the field.
Colleges and Universities Offering TSM
California State University, East Bay
Capitol College
DePaul University
Istanbul Technical University
Midlands Technical College
Murray State University
University of Athens
New Jersey Institute of Technology
New York Institute Of Technology
Northeastern
Ohio University
Oklahoma State University (College of Engineering, Architecture, and Technology)
Stevens Institute of Technology
Syracuse University
Trident Technical College
University of Maryland University College
University of Pennsylvania
Indian Institute of Technology Delhi - Department of Management Studies (DMS-IIT Delhi)
Network management
Telecommunications systems | Telecommunications systems management | [
"Technology",
"Engineering"
] | 183 | [
"Computer network stubs",
"Computer networks engineering",
"Telecommunications systems",
"Computing stubs",
"Network management"
] |
3,135,830 | https://en.wikipedia.org/wiki/Systems%20biomedicine | Systems biomedicine, also called systems biomedical science, is the application of systems biology to the understanding and modulation of developmental and pathological processes in humans, and in animal and cellular models. Whereas systems biology aims at modeling exhaustive networks of interactions (with the long-term goal of, for example, creating a comprehensive computational model of the cell), mainly at intra-cellular level, systems biomedicine emphasizes the multilevel, hierarchical nature of the models (molecule, organelle, cell, tissue, organ, individual/genotype, environmental factor, population, ecosystem) by discovering and selecting the key factors at each level and integrating them into models that reveal the global, emergent behavior of the biological process under consideration.
Such an approach will be favorable when the execution of all the experiments necessary to establish exhaustive models is limited by time and expense (e.g., in animal models) or basic ethics (e.g., human experimentation).
In 1992, Kamada T. published a paper on systems biomedicine (November–December), and in the same period Zeng B.J. published an article on systems medicine and pharmacology (April).
In 2009, the first collective book on systems biomedicine was edited by Edison T. Liu and Douglas A. Lauffenburger.
In October 2008, one of the first research groups uniquely devoted to systems biomedicine was established at the European Institute of Oncology. One of the first research centers specializing in systems biomedicine was founded by Rudi Balling: the Luxembourg Centre for Systems Biomedicine, an interdisciplinary center of the University of Luxembourg. The first centre devoted to spatial issues in systems biomedicine was recently established at Oregon Health and Science University.
The first peer-reviewed journal on this topic, Systems Biomedicine, was recently established by Landes Bioscience.
See also
Systems biology
Systems medicine
References
Bioinformatics
Systems biology | Systems biomedicine | [
"Engineering",
"Biology"
] | 418 | [
"Bioinformatics",
"Biological engineering",
"Systems biology"
] |
3,136,140 | https://en.wikipedia.org/wiki/Mass%20flow%20%28life%20sciences%29 | In the life sciences, mass flow, also known as mass transfer and bulk flow, is the movement of fluids down a pressure or temperature gradient. As such, mass flow is a subject of study in both fluid dynamics and biology. Examples of mass flow include blood circulation and transport of water in vascular plant tissues. Mass flow is not to be confused with diffusion which depends on concentration gradients within a medium rather than pressure gradients of the medium itself.
Plant biology
In general, bulk flow in plant biology typically refers to the movement of water from the soil up through the plant to the leaf tissue through xylem, but can also be applied to the transport of larger solutes (e.g. sucrose) through the phloem.
Xylem
According to cohesion-tension theory, water transport in xylem relies upon the cohesion of water molecules to each other and adhesion to the vessel's wall via hydrogen bonding combined with the high water pressure of the plant's substrate and low pressure of the extreme tissues (usually leaves).
As in blood circulation in animals, (gas) embolisms may form within one or more xylem vessels of a plant. If an air bubble forms, the upward flow of xylem water will stop because the pressure difference in the vessel cannot be transmitted. Once these embolisms are nucleated, the remaining water in the capillaries begins to turn to water vapor. When these bubbles form rapidly by cavitation, the "snapping" sound can be used to measure the rate of cavitation within the plant. Plants do, however, have physiological mechanisms to reestablish the capillary action within their cells.
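As a rough quantitative sketch of pressure-driven bulk flow (not taken from the source article), the volumetric flow through a single xylem vessel can be estimated with the Hagen–Poiseuille relation, assuming laminar flow of water; the vessel dimensions and pressure difference below are illustrative assumptions:

```python
import math

# Hagen-Poiseuille: Q = pi * r^4 * dP / (8 * mu * L)
# All numbers are illustrative assumptions, not measured plant data.
r = 25e-6    # vessel radius, m (~25 micrometres)
L = 1.0      # vessel length, m
dP = 1.0e5   # pressure difference along the vessel, Pa (~1 bar)
mu = 1.0e-3  # dynamic viscosity of water, Pa*s

Q = math.pi * r**4 * dP / (8 * mu * L)  # volumetric flow rate, m^3/s
v = Q / (math.pi * r**2)                # mean flow speed, m/s

print(f"Q = {Q:.2e} m^3/s, mean speed = {v * 1000:.1f} mm/s")
```

With these assumed numbers the mean flow speed comes out near 8 mm/s, i.e. tens of metres per hour, a plausible order of magnitude for transpiration-driven xylem flow.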
Phloem
Solute flow is driven by a difference in hydraulic pressure created from the unloading of solutes in the sink tissues. That is, as solutes are off-loaded into sink cells (by active or passive transport), the density of the phloem liquid decreases locally, creating a pressure gradient.
See also
Countercurrent exchange
Pounds per hour
Fluid dynamics
Mass flow rate
Hemorheology
Flying and gliding animals
References
Fluid dynamics | Mass flow (life sciences) | [
"Chemistry",
"Engineering"
] | 440 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
3,136,164 | https://en.wikipedia.org/wiki/Principal%20ideal%20theorem | In mathematics, the principal ideal theorem of class field theory, a branch of algebraic number theory, says that extending ideals gives a mapping on the class group of an algebraic number field to the class group of its Hilbert class field, which sends all ideal classes to the class of a principal ideal. The phenomenon has also been called principalization, or sometimes capitulation.
Formal statement
For any algebraic number field K and any ideal I of the ring of integers of K, if L is the Hilbert class field of K, then
the extended ideal IOL is a principal ideal αOL, for OL the ring of integers of L and some element α in it.
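The statement can be written compactly in LaTeX; the notation Cl(·) for the ideal class group and the name "capitulation map" are standard, though this particular formalization is ours rather than the source's:

```latex
% Cl(K): ideal class group of K; O_K, O_L: rings of integers of K and of
% its Hilbert class field L. Extension of ideals induces the capitulation map
\[
  j \colon \operatorname{Cl}(K) \longrightarrow \operatorname{Cl}(L),
  \qquad [I] \longmapsto [I\mathcal{O}_L].
\]
% The principal ideal theorem asserts that this map is trivial:
\[
  [I\mathcal{O}_L] = [\mathcal{O}_L]
  \quad \text{for every ideal } I \subseteq \mathcal{O}_K,
\]
% i.e. every ideal class of K becomes principal in its Hilbert class field.
```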
History
The principal ideal theorem was conjectured by David Hilbert, and was the last remaining aspect of his program on class fields to be completed, in 1929.
Emil Artin reduced the principal ideal theorem to a question about finite abelian groups: he showed that it would follow if the transfer from a finite group to its derived subgroup is trivial. This result was proved by Philipp Furtwängler (1929).
References
Ideals (ring theory)
Group theory
Homological algebra
Theorems in algebraic number theory | Principal ideal theorem | [
"Mathematics"
] | 222 | [
"Mathematical structures",
"Theorems in algebraic number theory",
"Group theory",
"Fields of abstract algebra",
"Theorems in number theory",
"Category theory",
"Homological algebra"
] |
3,136,353 | https://en.wikipedia.org/wiki/Umov%20effect | The Umov effect, also known as Umov's law, is a relationship between the albedo of an astronomical object, and the degree of polarization of light reflecting off it. The effect was discovered by the Russian physicist Nikolay Umov in 1905, and can be observed for celestial objects such as the surface of the Moon and the asteroids.
The degree of linear polarization of light $P$ is defined by
$P = \dfrac{I_\perp - I_\parallel}{I_\perp + I_\parallel},$
where $I_\perp$ and $I_\parallel$ are the intensities of light in the directions perpendicular and parallel to the plane of a polarizer aligned in the plane of reflection. Values of $P$ are zero for unpolarized light, and ±1 for linearly polarized light.
Umov's law states
$P \propto \dfrac{1}{\alpha},$
where $\alpha$ is the albedo of the object. Thus, highly reflective objects tend to reflect mostly unpolarized light, and dimly reflective objects tend to reflect polarized light. The law is only valid for large phase angles (angles between the incident light and the reflected light).
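To make the two relations above concrete, here is a minimal Python sketch. The proportionality constant in the Umov-law loop is an arbitrary illustrative value, since the actual constant depends on the surface and the observing geometry:

```python
def degree_of_polarization(i_perp, i_par):
    """P = (I_perp - I_par) / (I_perp + I_par)."""
    return (i_perp - i_par) / (i_perp + i_par)

# A surface reflecting 60 intensity units perpendicular and 40 parallel:
print(degree_of_polarization(60.0, 40.0))  # 0.2

# Umov's law: P is inversely proportional to albedo at large phase angles.
# UMOV_CONST is an arbitrary illustrative value, not a measured constant.
UMOV_CONST = 0.04
for albedo in (0.05, 0.10, 0.20, 0.40):
    print(f"albedo {albedo:.2f} -> polarization ~ {UMOV_CONST / albedo:.2f}")
```

Doubling the albedo halves the predicted polarization, which is the qualitative content of the law: dark surfaces reflect more strongly polarized light.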
References
Observational astronomy
Planetary science
Equations of astronomy | Umov effect | [
"Physics",
"Astronomy"
] | 207 | [
"Concepts in astronomy",
"Observational astronomy",
"Astronomy stubs",
"Planetary science stubs",
"Equations of astronomy",
"Planetary science",
"Astronomical sub-disciplines"
] |
3,136,401 | https://en.wikipedia.org/wiki/Black%20snake%20%28firework%29 | "Black snake" is a term that can refer to at least three similar types of fireworks: the Pharaoh's snake, the sugar snake, or a popular retail composition marketed under various product names but usually known as "black snake". The "Pharaoh's snake" or "Pharaoh's serpent" is the original version of the black snake experiment. It produces a more impressive snake, but its execution depends upon mercury (II) thiocyanate, which is no longer in common use due to its toxicity. For a "sugar snake", sodium bicarbonate and sugar are the commonly used chemicals.
Once lit, the fireworks emit smoke and spew out ash resembling a snake via an intumescent reaction. They remain on the ground and emit no sparks, flares, projectiles, or sound.
Pharaoh's snake
The Pharaoh's snake is a more dramatic experiment and it requires more safety precautions than the sugar snake due to the presence of toxic mercury vapor and other mercury compounds.
This reaction was discovered by Friedrich Wöhler in 1821, soon after the first synthesis of mercury thiocyanate. It was described as "winding out from itself at the same time worm-like processes, to many times its former bulk, of a very light material of the color of graphite." For some time, a firework product called "Pharaoschlangen" was available to the public in Germany but was eventually banned when the toxic properties of the product were discovered through the deaths of several children who had mistakenly consumed the resulting solid.
The Pharaoh's snake experiment is conducted in the same manner as the sugar snake experiment, however, the former uses mercury(II) thiocyanate (Hg(SCN)2) instead of powdered sugar with baking soda. This must be done in a fume hood because all mercury compounds are hazardous.
After igniting the reagents, mercury(II) thiocyanate breaks down to form mercury(II) sulfide (HgS), carbon disulfide (CS2), and carbon nitride (C3N4). Graphitic carbon nitride, a pale yellow solid, is the main component of the ash.
2 Hg(SCN)2(s) → 2 HgS(s) + CS2(l) + C3N4(s)
Carbon disulfide then burns, forming carbon dioxide (CO2) and sulfur dioxide (SO2).
CS2(l) + 3 O2(g) → CO2(g) + 2 SO2(g)
Carbon nitride (C3N4), meanwhile, breaks down into nitrogen gas and cyanogen ((CN)2).
2 C3N4(s) → 3 (CN)2(g) + N2(g)
When mercury(II) sulfide (HgS) reacts with oxygen (O2), it will form gray mercury vapor and sulfur dioxide. If the reaction is performed inside a container, a gray film of mercury coating on its inner surface can be observed.
HgS(s) + O2(g) → Hg(l) + SO2(g)
Sugar snake
Unlike the carbon snake, which involves a reaction with sulfuric acid instead of sodium bicarbonate, the sugar snake grows faster and to a significantly larger volume.
Solid fuel is used in this experiment. The solid fuel can be sand that is sufficiently soaked in ethanol, or hexamethylenetetramine. A white mixture of sucrose and sodium bicarbonate eventually turns black, and the snake grows to a considerable length after it is lit.
Three chemical reactions occur when the snake is lit. Sodium bicarbonate breaks down into sodium carbonate, water vapor, and carbon dioxide:
2 NaHCO3(s) → Na2CO3(s) + H2O(g) + CO2(g)
Burning sucrose or ethanol (reaction with oxygen in the air) produces carbon dioxide gas and water vapor:
C12H22O11(s) + 12 O2(g) → 12 CO2(g) + 11 H2O(g)
C2H5OH(l) + 3 O2(g) → 2 CO2(g) + 3 H2O(g)
Some of the sucrose does not burn, but merely decomposes at the high temperature, giving off elemental carbon and water vapor:
C12H22O11(s) → 12 C(s) + 11 H2O(g)
The carbon in the reaction makes the snake black. The overall process is exothermic enough that the water produced in the reactions is vaporized. This steam, in addition to the carbon dioxide product, makes the snake lightweight and airy and allows it to grow to a large size from a comparably small amount of starting material.
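Since each equation above is just conservation of atoms, a tiny Python check can confirm that a given reaction is balanced. This is an illustrative sketch, with element counts entered by hand for the sodium bicarbonate decomposition rather than parsed from formula strings:

```python
from collections import Counter

# Element counts per species, written out by hand for this one reaction
# (an illustrative helper table, not a general formula parser).
SPECIES = {
    "NaHCO3": {"Na": 1, "H": 1, "C": 1, "O": 3},
    "Na2CO3": {"Na": 2, "C": 1, "O": 3},
    "H2O":    {"H": 2, "O": 1},
    "CO2":    {"C": 1, "O": 2},
}

def side_counts(side):
    """Total element counts for a list of (coefficient, species) terms."""
    totals = Counter()
    for coeff, species in side:
        for element, n in SPECIES[species].items():
            totals[element] += coeff * n
    return totals

# 2 NaHCO3 -> Na2CO3 + H2O + CO2
lhs = side_counts([(2, "NaHCO3")])
rhs = side_counts([(1, "Na2CO3"), (1, "H2O"), (1, "CO2")])
print(lhs == rhs, dict(lhs))  # True {'Na': 2, 'H': 2, 'C': 2, 'O': 6}
```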
Retail black snake
The actual formula seems to be somewhat of a secret. However, they are likely based on a nitrated pitch composition. Additionally, they are often marketed in the form of small black pellets and are nondeliquescent and shelf-stable. There are very few retailers of this product in Europe, and there is possibly only one large-scale manufacturer in the world.
Use
Black snakes are a popular firework in India, which children play with during the festival of Diwali. Though deemed toxic by the Chest Research foundation and Pune University, black snake fireworks are still in use. The objective of the study was to determine which firework produced the most air pollution in India. The conducted study showed that the snake fireworks emitted the highest particulate matter, capable of penetrating the lungs via inhalation of smoke particles and consequently, causing significant damage. Other fireworks that emit large amounts of smoke particles include fuljhadi (Sparkler), pulpul (Firecracker), chakris (Spinning Rocket) and annar (Flowerpot Firework).
See also
Chemical volcano
Soda geyser
Elephant's toothpaste
Intumescent
Starlite
References
Articles containing video clips
Types of fireworks
Chemistry classroom experiments | Black snake (firework) | [
"Chemistry"
] | 1,258 | [
"Chemistry classroom experiments"
] |
3,136,621 | https://en.wikipedia.org/wiki/When%20Flanders%20Failed | "When Flanders Failed" is the third episode of the third season of the American animated television series The Simpsons. It originally aired on Fox in the United States on October 3, 1991. In the episode, Homer makes a wish for Ned Flanders' new left-handed store to go out of business. The wish comes true and soon the Flanders family is in financial trouble. When he discovers that Ned's house is about to be repossessed, Homer feels guilty. He helps the store flourish by telling all of Springfield's left-handed residents to patronize it. Meanwhile, Bart takes karate lessons but quits after it does not turn out to be as interesting as he had hoped.
The episode was written by Jon Vitti and directed by Jim Reardon. It had an unusual amount of animation glitches because the animation studio was training a new group of animators. The episode references It's a Wonderful Life. The title is a reference to the poem "In Flanders Fields".
Since airing, the episode has received mostly positive reviews from television critics. It acquired a Nielsen rating of 13.9, and was the highest-rated show on Fox the week it aired.
Plot
Ned Flanders invites the Simpson family to a barbecue where he announces plans to quit the pharmaceutical business and open the Leftorium, a store for left-handed people. While pulling a wishbone with Ned, Homer—jealous of Ned's material success—wishes for the Leftorium to fail and go out of business. Undeterred after Lisa scolds him for indulging in schadenfreude, Homer gloats when Ned tells him business is slow. Afterwards, Homer keeps seeing left-handed citizens struggling with items made for right-handed people (including his boss, Mr. Burns) and considers telling them about the Leftorium, but decides not to.
In the B-story, Bart begins taking karate lessons at Akira's karate school. He soon finds himself bored with karate, so he decides to skip each lesson and play video games at the mall arcade instead. Whenever Bart is asked by his friends and family about the karate techniques he is learning, he refers to the Touch of Death, an ability he sees in one of the arcade games he plays. He proceeds to terrorize Lisa into doing his will by threatening her with the Touch of Death. When the school bullies take Lisa's saxophone, she tells them Bart will defend her with the Touch of Death. Unable to actually defend himself or his sister, Bart is pantsed and hung by his underwear from a playground basketball hoop rim by the bullies. Having reclaimed her saxophone, Lisa wistfully notes that sometimes two wrongs do make a right.
Eventually the store closes, plunging the Flanders family into debt and misery. Ned is forced to sell his possessions, and Homer gleefully buys many of them for a pittance. Overcome by regret, Homer decides to return Ned's possessions, but he finds Ned's house repossessed and the family living in their car. Homer tells Ned to open the store one final time and informs all the left-handed residents of Springfield about the Leftorium; they descend upon the store and buy almost everything; Mr. Burns buys the roadster with left-handed shift. The business boom helps Ned keep the store open and get his house back. Todd Flanders leads a chorus of "Put On a Happy Face".
Production
The episode was written by Jon Vitti and directed by Jim Reardon. It featured an unusual number of animation glitches because the animation studio in Korea was training a new group of animators, and this episode was one of their first efforts. Show runner Mike Reiss said he will always remember it as the episode "that came back animated with a thousand mistakes in it and was just a complete and utter mess". Reardon said there was "literally a mistake in every other scene" when the episode came back from Korea. Several scenes had to be re-animated in the United States because of these glitches, but according to Reardon, "you can still see the lesser ones that got through, such as line quality problems particularly in the first act." Though it aired in season three, "When Flanders Failed" was produced during the previous season. It was recorded in spring 1991 when the previous season had ended, and was scheduled to air in autumn. The staff therefore had more time to fix the glitches during the summer. Unlike the season premiere "Stark Raving Dad", which was originally the final episode in the season two production run, this episode was not presented in Dolby Surround and uses the season two Danny Elfman arrangement of the opening and closing themes rather than the Alf Clausen arrangement.
"When Flanders Failed" features the second appearance of the character Akira, voiced by Hank Azaria. He was previously seen in the season two episode "One Fish, Two Fish, Blowfish, Blue Fish", where he is a waiter at a Japanese restaurant and was originally voiced by George Takei. It is revealed in this episode that the characters Ned Flanders, Moe Szyslak and Montgomery Burns are left-handed, just like The Simpsons creator Matt Groening. The Simpsons writer George Meyer came up with the idea of The Leftorium when the creators were trying to figure out what Ned's failed business would be. The inspiration came from a family friend of the Meyer family who had opened a left-handed store that was quickly forced to close down due to lack of business.
Cultural references
The title is a reference to the poem "In Flanders Fields". Homer watches the Canadian Football League Draft on television, the names of the teams are real, but Simpsons writers Jay Kogen, Wallace Wolodarsky, and John Swartzwelder appear on the draft list. The smoke from Flanders's barbecue forms fingers that seem to come out of the TV, a reference to Poltergeist. Akira's school is located in the mall next to Shakespeare's Fried Chicken, a reference to the English poet and playwright William Shakespeare. Mr. Burns says "My kingdom for a left-handed can-opener!", a reference to the line "My kingdom for a horse!" in Shakespeare's Richard III. Akira gives Bart's karate class the ancient Chinese military treatise The Art of War by Sun Tzu. Richard Sakai is seen in one of the crowd shots at The Leftorium. The final scene is based on the ending of It's a Wonderful Life (1946), with Maude's dress and mannerisms modeled after Donna Reed. The episode closes with a rendition of "Put On a Happy Face" from Bye Bye Birdie.
Reception
In its original American broadcast, "When Flanders Failed" finished 29th in the ratings for the week of September 30 – October 6, 1991, with a Nielsen rating of 13.9, equivalent to approximately 12.8 million viewing households. It was the highest-rated show on Fox that week.
Since airing, the episode has received mostly positive reviews from television critics. Kirk Baird of the Las Vegas Sun named it the fifth best episode of The Simpsons, and Central Michigan Life called it an "instant classic". Pete Oliva of North Texas Daily said the episode "proves that it is possible to laugh and cry at the same time without being able to control either response". Bill Gibron of DVD Verdict said "When Flanders Failed" shows that even if The Simpsons is not dealing with famous celebrities or "high profile places", the writers can still "wring uproarious comedy out of their cast of regulars. Flanders is a special creation in the canon of humor, a regular guy who is funny because of how hyper-normal he is compared to his Neanderthal neighbors. The focus on people who are left-handed, and the whole idea of being a lefty, is an unusual basis for a television show. But then again, nothing about The Simpsons is ever common."
Hock Guan Teh of DVD Town also praised the writers, saying they "are able to craft a downtrodden tale for the perpetually clueless Flanders family that serves to illustrate how dark emotions can eventually be overcome by Homer's guilt. A memorable episode." Niel Harvey of The Roanoke Times called "When Flanders Failed" a "classic bit of Simpsonia". The episode's reference to It's a Wonderful Life was named the 26th greatest film reference in the history of the show by Total Film's Nathan Ditum. Nate Meyers of Digitally Obsessed, in a review scored out of 5, commented that "perhaps it is not profound in its examination of jealousy causing people to behave irrationally, but it handles the topic in a serious manner while not compromising the show's humor. The side story with Bart stems from the era of the series when Bart was the big star, but it still has some funny bits."
DVD Movie Guide's Colin Jacobson wrote: "Mean Homer equals Funny Homer, so 'When Flanders Failed' presents an above average show. He seems unusually crude here, which makes him amusing. The subplot with Bart and his karate class also adds good material, especially when he threatens to turn the 'Touch of Death' on Lisa. Another sappy finish slightly mars this one, but it remains generally solid." Kimberly Potts of AOL named it tenth best episode of the show and commented: "Schadenfreude is the theme of this tight episode about Homer's joy at the failure of Flanders' Leftorium store. There are few times Homer is more shamelessly smug than he was while imitating Flanders and using Ned's yard sale grill, and we haven't even mentioned Bart's 'Touch of Death' subplot." Winston-Salem Journal's Tim Clodfelter called it an "outstanding" episode.
References
External links
The Simpsons season 3 episodes
1991 American television episodes
Handedness
Television episodes directed by Jim Reardon
Television episodes written by Jon Vitti
| When Flanders Failed | [
"Physics",
"Chemistry",
"Biology"
] | 2,073 | [
"Behavior",
"Motor control",
"Chirality",
"Asymmetry",
"Handedness",
"Symmetry"
] |
3,136,832 | https://en.wikipedia.org/wiki/Grammar-based%20code | Grammar-based codes or Grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed. Examples include universal lossless data compression algorithms. To compress a data sequence , a grammar-based code transforms into a context-free grammar .
The problem of finding a smallest grammar for an input sequence (the smallest grammar problem) is known to be NP-hard, so many grammar-transform algorithms have been proposed from theoretical and practical viewpoints.
Generally, the produced grammar is further compressed by statistical encoders like arithmetic coding.
Examples and characteristics
The class of grammar-based codes is very broad. It includes block codes, the multilevel pattern matching (MPM) algorithm, variations of the incremental parsing Lempel-Ziv code, and many other new universal lossless compression algorithms.
Grammar-based codes are universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source with a finite alphabet.
Practical algorithms
The compression programs of the following are available from external links.
Sequitur is a classical grammar compression algorithm that sequentially translates an input text into a CFG, and then the produced CFG is encoded by an arithmetic coder.
Re-Pair is a greedy algorithm using the strategy of most-frequent-first substitution. Its compressive performance is powerful, although its main memory requirement is very large. (A minimal sketch of the pair-replacement idea appears after this list.)
GLZA, which constructs a grammar that may be reducible, i.e., contain repeats, where the entropy-coding cost of "spelling out" the repeats is less than the cost of creating and entropy-coding a rule to capture them. (In general, the compression-optimal SLG is not irreducible, and the Smallest Grammar Problem is different from the actual SLG compression problem.)
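Below is a minimal Python sketch of the pair-replacement idea behind Re-Pair, referenced from the list above. It is deliberately simplified: it recounts pairs from scratch each round and counts overlapping occurrences naively, whereas practical Re-Pair implementations use priority queues and careful pair bookkeeping to run in linear time. The function name and rule labels are our own:

```python
from collections import Counter

def re_pair(seq):
    """Greedy Re-Pair sketch: repeatedly replace the most frequent adjacent
    pair with a fresh nonterminal until no pair occurs at least twice.
    Returns (final start-rule sequence, grammar rules {nonterminal: pair})."""
    seq = list(seq)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = f"R{next_id}"          # fresh nonterminal symbol
        next_id += 1
        rules[nt] = pair
        # Left-to-right, non-overlapping replacement of the chosen pair.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start, rules = re_pair("abracadabra abracadabra")
print(start)   # compressed right-hand side of the start rule
print(rules)   # e.g. {'R0': ('a', 'b'), 'R1': ('R0', 'r'), ...}
```

Repeated substrings end up represented by a hierarchy of rules, and the start symbol's right-hand side shrinks accordingly; the resulting grammar would then typically be entropy-coded, as noted above.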
See also
Dictionary coder
Grammar induction
Straight-line grammar
References
External links
GLZA discussion and paper
Description of grammar-based codes with example
Sequitur codes
Re-Pair codes
Re-Pair codes a version of Gonzalo Navarro.
GrammarViz 2.0 - implementation of Sequitur, Re-Pair, and parallel Re-Pair in Java.
Data compression
Coding theory
Information theory | Grammar-based code | [
"Mathematics",
"Technology",
"Engineering"
] | 459 | [
"Discrete mathematics",
"Coding theory",
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
3,136,883 | https://en.wikipedia.org/wiki/Thermal%20velocity | Thermal velocity or thermal speed is a typical velocity of the thermal motion of particles that make up a gas, liquid, etc. Thus, indirectly, thermal velocity is a measure of temperature. Technically speaking, it is a measure of the width of the peak in the Maxwell–Boltzmann particle velocity distribution. Note that in the strictest sense thermal velocity is not a velocity, since velocity usually describes a vector rather than simply a scalar speed.
Since the thermal velocity is only a "typical" velocity, a number of different definitions can be and are used.
Taking $k_\mathrm{B}$ to be the Boltzmann constant, $T$ the absolute temperature, and $m$ the mass of a particle, we can write the different thermal velocities:
In one dimension
If $v_\mathrm{th}$ is defined as the root mean square of the velocity in any one dimension (i.e. any single direction), then
$v_\mathrm{th} = \sqrt{\dfrac{k_\mathrm{B} T}{m}}.$
If $v_\mathrm{th}$ is defined as the mean of the magnitude of the velocity in any one dimension (i.e. any single direction), then
$v_\mathrm{th} = \sqrt{\dfrac{2 k_\mathrm{B} T}{\pi m}}.$
In three dimensions
If $v_\mathrm{th}$ is defined as the most probable speed, then
$v_\mathrm{th} = \sqrt{\dfrac{2 k_\mathrm{B} T}{m}}.$
If $v_\mathrm{th}$ is defined as the root mean square of the total velocity, then
$v_\mathrm{th} = \sqrt{\dfrac{3 k_\mathrm{B} T}{m}}.$
If $v_\mathrm{th}$ is defined as the mean of the magnitude of the velocity of the atoms or molecules, then
$v_\mathrm{th} = \sqrt{\dfrac{8 k_\mathrm{B} T}{\pi m}}.$
All of these definitions lie in the range
$\sqrt{\dfrac{2}{\pi}}\,\sqrt{\dfrac{k_\mathrm{B} T}{m}} \;\le\; v_\mathrm{th} \;\le\; \sqrt{3}\,\sqrt{\dfrac{k_\mathrm{B} T}{m}},$
i.e. roughly $(0.80\text{ to }1.73)\sqrt{k_\mathrm{B} T/m}$.
Thermal velocity at room temperature
At 20 °C (293.15 kelvins), the mean thermal speed of common gases in three dimensions is on the order of several hundred metres per second; since the speed scales as $1/\sqrt{m}$, lighter molecules move faster (for example, roughly 470 m/s for N2 versus roughly 1,760 m/s for H2).
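As a quick numerical check of the three-dimensional definitions above, the following Python sketch evaluates them for molecular nitrogen at 20 °C; the choice of gas and the hard-coded constants are illustrative assumptions, not values from the source:

```python
import math

K_B = 1.380649e-23             # Boltzmann constant, J/K
T = 293.15                     # 20 degrees C in kelvins
m = 28.0 * 1.66053906660e-27   # mass of an N2 molecule, kg (illustrative gas)

v_p = math.sqrt(2 * K_B * T / m)                 # most probable speed
v_mean = math.sqrt(8 * K_B * T / (math.pi * m))  # mean speed
v_rms = math.sqrt(3 * K_B * T / m)               # root mean square speed

print(f"N2 at 20 C: v_p ~ {v_p:.0f} m/s, "
      f"v_mean ~ {v_mean:.0f} m/s, v_rms ~ {v_rms:.0f} m/s")
```

For N2 this gives roughly 417, 471 and 511 m/s for the most probable, mean and root-mean-square speeds, respectively, illustrating the fixed ratios 2 : 8/π : 3 under the square root.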
References
Thermodynamic properties
Statistical mechanics | Thermal velocity | [
"Physics",
"Chemistry",
"Mathematics"
] | 293 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Statistical mechanics",
"Physical chemistry stubs"
] |
3,137,079 | https://en.wikipedia.org/wiki/Hydrolock | Hydrolock (a shorthand notation for hydrostatic lock or hydraulic lock) is an abnormal condition of any device which is designed to compress a gas by mechanically restraining it; most commonly the reciprocating internal combustion engine, the case this article refers to unless otherwise noted. Hydrolock occurs when a volume of liquid greater than the volume of the cylinder at its minimum (end of the piston's stroke) enters the cylinder. Since liquids are nearly incompressible the piston cannot complete its travel; either the engine must stop rotating or a mechanical failure must occur.
Symptoms and damage
If an engine hydrolocks while at speed, a mechanical failure is likely. Common damage modes include bent or broken connecting rods, a fractured crank, a fractured head, a fractured block, crankcase damage, damaged bearings, or any combination of these. Forces absorbed by other interconnected components may cause additional damage. Physical damage to metal parts can manifest as a "crashing" or "screeching" sound and usually requires replacement of the engine or a substantial rebuild of its major components.
If an internal combustion engine hydrolocks while idling or under low power conditions, the engine may stop suddenly with no immediate damage. In this case the engine can often be purged by unscrewing the spark plugs or injectors and turning the engine over to expel the liquid from the combustion chambers after which a restart may be attempted. Depending on how the liquid was introduced to the engine, it possibly can be restarted and dried out with normal combustion heat, or it may require more work, such as flushing out contaminated operating fluids and replacing damaged gaskets.
If a cylinder fills with liquid while the engine is turned off, the engine will refuse to turn when a starting cycle is attempted. Since the starter mechanism's torque is normally much lower than the engine's operating torque, this will usually not damage the engine but may burn out the starter. The engine can be drained as above and restarted. If a corrosive substance such as water has been in the engine long enough to cause rusting, more extensive repairs will be required.
Amounts of water significant enough to cause hydrolock tend to upset the air/fuel mixture in gasoline engines. If water is introduced slowly enough, this effect can cut power and speed in an engine to a point that when hydrolock actually occurs it does not cause catastrophic engine damage.
Causes and special cases
Automotive
Hydrolock most commonly occurs in automobiles when driving through floods, either where the water is above the level of the air intake or the vehicle's speed is excessive, creating a tall bow wave. A vehicle fitted with a cold air intake mounted low on the vehicle will be especially vulnerable to hydrolocking when being driven through standing water or heavy precipitation. Engine coolant entering the cylinders through various means (such as a blown head gasket) is another common cause. Excessive fuel entering (flooding) one or more cylinders in liquid form due to abnormal operating conditions can also cause hydrolock.
Marine
Small boats with outboard engines and personal watercraft (PWCs) tend to ingest water simply because they run in and around it. During a rollover, or when a wave washes over the craft, the engine can hydrolock, though severe damage is rare due to the special air intakes and low rotating inertia of small marine engines. Inboard marine engines have a different vulnerability, as these often have their cooling water mixed with the exhaust gases in the header to quiet the engine. Rusted-out exhaust headers or lengthy periods of turning the starter can cause water to build up in the exhaust line to the point that it back-flows through the exhaust manifold and fills the cylinders.
On turbocharged engines the intercooler is normally cooled by sea water; if this rusts through, water will be ingested by the engine.
Diesel engines
Diesel engines are more susceptible to hydrolock than gasoline engines. Due to their higher compression ratios, diesel engines have a much smaller final combustion chamber volume, requiring much less liquid to hydrolock. Diesel engines also tend to have higher torque, rotating inertia, and stronger starter motors than gasoline engines. The result is that a diesel engine is more likely to suffer catastrophic damage.
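As a rough illustration of why a higher compression ratio means less liquid is needed to hydrolock (the figures below are illustrative assumptions, not data from the article), the clearance volume that liquid must fill follows directly from the swept volume and the compression ratio:

```python
# Clearance volume V_c from swept volume V_s and compression ratio CR:
# CR = (V_s + V_c) / V_c  =>  V_c = V_s / (CR - 1)
def clearance_volume(swept_cm3, compression_ratio):
    return swept_cm3 / (compression_ratio - 1)

SWEPT = 500.0  # cm^3 per cylinder, an illustrative assumption
for label, cr in (("gasoline, CR 10:1", 10.0), ("diesel, CR 18:1", 18.0)):
    print(f"{label}: ~{clearance_volume(SWEPT, cr):.0f} cm^3 of liquid "
          "is enough to hydrolock")
```

With the same 500 cm³ cylinder, the diesel's chamber holds roughly half as much liquid (about 29 mL versus about 56 mL) before the piston can no longer complete its stroke.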
Radial and inverted engines
Hydrolock is common on radial and inverted engines (cylinders pointing downwards) when the engine sits for a long period. Engine oil seeps down under gravity into the cylinder through various means (through the rings, valve guides, etc.) and can fill a cylinder with enough oil to hydrolock it. The seepage effect can be observed by the blue-white smoke commonly seen when a radial engine starts up. In order to prevent engine damage, it is universal practice for the ground crew or pilot to check for hydrolock during pre-flight inspection of the aircraft, typically by slowly cranking the propeller for several turns, either by hand or using the starter motor, to make sure the crankshaft cycles normally through all cylinders.
Steam engines
A hydraulic lock can occur in steam engines due to steam condensing back into water. In most steam engine designs there is a short time at the end of the return stroke of the piston when all the valves are shut and it is compressing any remaining steam. Water can be introduced from the boiler or in a cold engine, steam will condense to water on the cool walls of the cylinders and can potentially hydrolock an engine.
This is just as damaging as it is to internal combustion engines and in the case of a steam locomotive can be very dangerous as a broken connecting rod could puncture the firebox or boiler and cause a steam explosion. Steam engines (with the exception of small model and toy machines) are always fitted with cylinder drain cocks which are opened to allow excess water and steam to escape during warm up.
Cylinder drain cocks can be manual or automatic. One type of automatic drain cock contains a rolling ball which allows water to pass, but blocks the flow of steam. The ball occupies a horizontal cylinder slightly larger than the ball, allowing liquid water to flow past the ball. However, fast moving steam forces the ball to the end of the cylinder, where the ball blocks a discharge opening.
References
Engine problems | Hydrolock | [
"Technology"
] | 1,255 | [
"Engine problems",
"Engines"
] |
3,137,102 | https://en.wikipedia.org/wiki/Vaginal%20ring | Vaginal rings (also known as intravaginal rings, or V-Rings) are polymeric drug delivery devices designed to provide controlled release of drugs for intravaginal administration over extended periods of time. The ring is inserted into the vagina and provides contraception protection. Vaginal rings come in one size that fits most people.
Types
Several vaginal ring products are currently available, including:
Vaginal rings as treatment of peri-menopausal symptoms:
Estring - a low-dose estradiol-releasing ring, manufactured from silicone elastomer, for the treatment of vaginal atrophy (atrophic vaginitis).
Femring - a low-dose estradiol-acetate releasing ring, manufactured from silicone elastomer, for the relief of hot flashes and vaginal atrophy associated with menopause.
Vaginal rings as contraception:
NuvaRing - a low-dose contraceptive vaginal ring, manufactured from poly(ethylene-co-vinyl acetate), and releasing etonogestrel (a progestin) and ethinylestradiol (an estrogen).
Progering - a ring containing progesterone as the sole active ingredient; it is available only in Chile and Peru.
Annovera - a contraceptive vaginal ring, manufactured from methyl siloxane-based polymers, and releasing segesterone acetate (a progestin) and ethinylestradiol (an estrogen)
A number of other vaginal ring products are also in development.
Contraception
The combined hormonal contraceptive vaginal ring is self-administered once a month. Left in place for three weeks, the ring slowly releases hormones into the body, mainly vaginally administered estrogens and/or progestogens (a group of hormones including progesterone) - the same hormones used in birth control pills. These hormones work mostly by stopping ovulation and thickening the cervical mucus, creating a barrier preventing sperm from fertilizing an egg. They could theoretically affect implantation, but no evidence shows that they do. Worn continuously for three weeks followed by a week off, each vaginal ring provides anywhere from one month (NuvaRing) to one year (Annovera and Progering) of birth control. For continuous-use contraception, users can also choose to wear the vaginal ring for the full four-week cycle; this manner of use eliminates monthly periods. Throughout the additional week, serum hormone levels remain in the contraceptive range.
When compared with combined hormonal pills, the combined hormonal vaginal ring offers potentially better cycle control and treatment of heavy menstrual bleeding. However, both methods are effective short-term treatments in the reproductive age group. Vaginal rings may lead to increased normal vaginal secretions, decreased body weight, reduced symptoms of PMS, and occasionally to cases of vaginitis, device-related problems and leukorrhea. Because they release estrogen, vaginal rings carry an increased risk of heart attack, stroke, and other serious side effects. Additionally, certain medicines and supplements, such as the antibiotic rifampin, the anti-fungal griseofulvin, anti-seizure medicines, St. John's wort, and HIV medicines, may compromise the effectiveness of vaginal rings. Vaginal rings do not protect users from sexually transmitted diseases; the only contraceptive measures that do so are latex or polyurethane condoms.
The contraceptive vaginal ring has a failure rate of 0.3% when used as prescribed and 9% when used typically.
The correlation between breast cancer and the use of vaginal rings is under investigation, but recent literature suggests that the hormones used in vaginal rings have little, if any, relation to the risk of developing breast cancer.
Methods of use
Vaginal rings are easily inserted and removed. Vaginal walls hold them in place. Although their exact location within the vagina is not critical for clinical efficacy, rings commonly reside next to the cervix, and the deeper the placement in the vagina, the less likely the ring will be felt. Rings are typically left in place during intercourse, and most couples report no interference or discomfort. In many cases, neither partner feels the presence of the ring. Rings can be removed prior to intercourse, but, in the case of the contraceptive NuvaRing, only for one to three hours to maintain efficacy of birth control. If the ring is out for more than 48 hours, back up contraception is necessary for seven days. It typically takes between one and two months for a user's cycle to return to normal after the use of a vaginal ring is stopped.
Estring - Estring is inserted into the vagina and left in place for three months, after which it is removed and replaced with a fresh ring.
Femring - Femring is inserted into the vagina and left in place for three months, after which it is removed and replaced with a fresh ring.
NuvaRing - NuvaRing is inserted into the vagina and left in place for three weeks, after which it is removed and discarded for a 'ring-free' week to allow menstruation to occur. At the end of that week, a new NuvaRing is inserted.
Annovera - The Annovera ring is inserted into the vagina and left in place for three weeks, after which it is removed for one week. Unlike the NuvaRing, users reinsert the same Annovera ring one week later. A single Annovera ring is used for 13 such cycles.
References
External links
Estring
Femring
Nuvaring
Hormonal contraception
Drug delivery devices
Dosage forms
Vagina | Vaginal ring | [
"Chemistry"
] | 1,195 | [
"Pharmacology",
"Drug delivery devices"
] |
3,137,756 | https://en.wikipedia.org/wiki/Blue%20rinse | A blue rinse is a dilute hair dye used to reduce the yellowed appearance of grey or white hair.
The blue rinse gained popularity after Jean Harlow's appearance in the 1930 film Hell's Angels. Queen Elizabeth the Queen Mother also contributed to the popularity of the blue rinse in the 1940s. Israeli politician Benjamin Netanyahu uses the style.
In British politics, the term "Blue Rinse Brigade" has been used to refer to affluent older women involved in conservative politics, charity work, and committees.
See also
Blue hair
References
Hair coloring | Blue rinse | [
"Physics"
] | 114 | [
"Materials stubs",
"Materials",
"Matter"
] |
3,138,166 | https://en.wikipedia.org/wiki/Electron%20donor | In chemistry, an electron donor is a chemical entity that transfers electrons to another compound. It is a reducing agent that, by virtue of its donating electrons, is itself oxidized in the process. An obsolete definition equated an electron donor and a Lewis base.
In contrast to traditional reducing agents, electron transfer from a donor to an electron acceptor may be only fractional. The electron is not completely transferred, which results in an electron resonance between the donor and acceptor. This leads to the formation of charge transfer complexes, in which the components largely retain their chemical identities. The electron donating power of a donor molecule is measured by its ionization potential, which is the energy required to remove an electron from the highest occupied molecular orbital (HOMO).
The overall energy balance (ΔE), i.e., the energy gained or lost, in an electron donor-acceptor transfer is determined by the difference between the acceptor's electron affinity (A) and the donor's ionization potential (I): ΔE = A − I.
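As a quick worked instance of this balance (the sodium and chlorine values below are standard gas-phase textbook figures, used here purely as an assumed illustration, not values from this article):

    # Energy balance of an electron transfer: dE = A - I.
    # Assumed illustrative values: ionization potential of Na is about 5.14 eV,
    # electron affinity of Cl is about 3.61 eV (standard gas-phase figures).
    def energy_balance(electron_affinity_eV, ionization_potential_eV):
        """Return dE = A - I in eV; negative means the bare transfer costs energy."""
        return electron_affinity_eV - ionization_potential_eV

    print(energy_balance(3.61, 5.14))   # -1.53 eV; in NaCl this cost is repaid by Coulomb attraction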
Molecular electronics and devices
Electron donors are components of many devices such as organic photovoltaic devices. Typical electron donors undergo reversible redox so that they can serve as electron relays. Triarylamines are typical donors.
In biology
NADH is an example of a natural electron donor. Ascorbic acid is another example. It is a water-soluble antioxidant.
In biology, electron donors release an electron during cellular respiration, resulting in the release of energy. Microorganisms, such as bacteria, obtain energy in electron transfer processes. Through its cellular machinery, the microorganism collects the energy for its use. The final result of this process (electron transport chain) is an electron being donated to an electron acceptor. Petroleum hydrocarbons, less chlorinated solvents like vinyl chloride, soil organic matter, and reduced inorganic compounds are all compounds that can act as electron donors. These reactions are of interest not only because they allow organisms to obtain energy, but also because they are involved in the natural biodegradation of organic contaminants. When clean-up professionals use monitored natural attenuation to clean up contaminated sites, biodegradation is one of the major contributing processes.
See also
Semiconductor
Donor (semiconductors)
References
Electrochemical concepts
ru:Донор (физика) | Electron donor | [
"Chemistry"
] | 485 | [
"Electrochemistry",
"Electrochemical concepts"
] |
3,138,212 | https://en.wikipedia.org/wiki/Electron%20acceptor | An electron acceptor is a chemical entity that accepts electrons transferred to it from another compound. Electron acceptors are oxidizing agents.
The electron accepting power of an electron acceptor is measured by its redox potential.
In the simplest case, electron acceptors are reduced by one electron. The process can alter the structure of the acceptor substantially. When the added electron is highly delocalized, the structural consequences of the reduction can be subtle. The central C-C distance in the electron acceptor tetracyanoethylene elongates from 135 to 143 pm upon acceptance of an electron. In the formation of some donor-acceptor complexes, less than one electron is transferred. TTF-TCNQ is a charge transfer complex.
Biology
In biology, a terminal electron acceptor often refers to either the last compound to receive an electron in an electron transport chain, such as oxygen during cellular respiration, or the last cofactor to receive an electron within the electron transfer domain of a reaction center during photosynthesis. All organisms obtain energy by transferring electrons from an electron donor to an electron acceptor.
One practical illustration of the role of electron acceptors in biology is the high toxicity of the paraquat. The activity of this broad spectrum herbicide results from the electron acceptor property of N,N'-dimethyl-4,4'-bipyridinium.
Materials science
In some solar cells, the photocurrent entails transfer of electrons from a donor to an electron acceptor.
See also
Acceptor (semiconductors)
Redox reaction
Semiconductor
References
External links
Electron acceptor definition at United States Geological Survey website
Environmental Protection Agency
Electrochemical concepts | Electron acceptor | [
"Chemistry"
] | 343 | [
"Electrochemistry",
"Electrochemical concepts"
] |
3,138,467 | https://en.wikipedia.org/wiki/N-player%20game | In game theory, an n-player game is a game which is well defined for any number of players. This is usually used in contrast to standard 2-player games that are only specified for two players. In defining n-player games, game theorists usually provide a definition that allows for any (finite) number of players. The limiting case of n → ∞ is the subject of mean field game theory.
Changing games from 2-player games to n-player games entails some concerns. For instance, the Prisoner's dilemma is a 2-player game. One might define an n-player Prisoner's Dilemma in which a single defection results in everyone else getting the sucker's payoff. Alternatively, it might take a certain amount of defection before the cooperators receive the sucker's payoff. (One example of an n-player Prisoner's Dilemma is the Diner's dilemma.)
Analysis
n-player games cannot be solved using minimax, the theorem that is the basis of tree searching for 2-player games. Other algorithms, such as maxn, are required for traversing the game tree to optimize the score for a specific player; a minimal sketch follows.
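Below is a minimal sketch of the maxn idea (a hedged illustration; the game interface with is_terminal, payoffs, current_player, moves and apply is assumed for the example, not taken from the article): each node evaluates to a vector of payoffs, one per player, and the player to move selects the child whose vector maximizes that player's own component.

    # Minimal max^n sketch: game-tree search returning a payoff vector,
    # one entry per player. The game interface is assumed for illustration.
    def maxn(state, depth):
        if depth == 0 or state.is_terminal():
            return state.payoffs()            # tuple: payoff for each player
        player = state.current_player()       # index of the player to move
        best = None
        for move in state.moves():
            value = maxn(state.apply(move), depth - 1)
            # Unlike minimax, opponents are not assumed to minimize this
            # player's score; every node maximizes the mover's own component.
            if best is None or value[player] > best[player]:
                best = value
        return best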
References
Game theory game classes | N-player game | [
"Mathematics"
] | 241 | [
"Game theory game classes",
"Game theory"
] |
5,692,373 | https://en.wikipedia.org/wiki/6AQ5 | The 6AQ5 (Mullard–Philips tube designation EL90) is a miniature 7-pin (B7G) audio power output pentode vacuum tube with ratings virtually identical to the 6V6 at 250 V. It was commonly used as an output audio amplifier in tube TVs and radios. It was also used in transmitter circuits. There are versions of this tube with extended ratings for industrial application which are designated as 6AQ5A (with controlled heater warm-up characteristic), and 6AQ5W/6005 or 6005W (shock and vibration resistant).
A push–pull pair is capable of producing at least 10 W of audio output power in class AB1.
During the 1950s it was also used in some cases as a vertical-deflection output tube, being rated for this purpose as well; the 6AQ5-A was preferred for that application. It served sets with 70- and 90-degree picture tubes, and also appeared in the frame-output stage of some early colour sets.
Other close or equivalent tube types are the 6HG5, 6HR5, 6669, 6BM5, N727, CV1862 and the Tesla 6L31. The 6CM6 and the Russian 6P1P (6П1П), while electrically equivalent (up to 250 V anode voltage), have a 9-pin (B9A) base. Another similar, but not identical, amplifier pentode with a miniature 9-pin (B9A) base, used in consumer electronics, was the 6M5.
See also
List of vacuum tubes
References
External links
Several tube data sheets
Tube Data Sheet Locator
Vacuum tubes
Guitar amplification tubes | 6AQ5 | [
"Physics"
] | 349 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
5,692,791 | https://en.wikipedia.org/wiki/Neuropeptide%20Y%20receptor | Neuropeptide Y receptors are a family of receptors belonging to class A G-protein coupled receptors and they are activated by the closely related peptide hormones neuropeptide Y, peptide YY and pancreatic polypeptide. These receptors are involved in the control of a diverse set of behavioral processes including appetite, circadian rhythm, and anxiety.
Activated neuropeptide receptors release the Gi subunit from the heterotrimeric G protein complex. The Gi subunit in turn inhibits the production of the second messenger cAMP from ATP.
Only the crystal structure of Y1 in complex with two antagonists is available.
Types
There are five known mammalian neuropeptide Y receptors, designated Y1 through Y5. Four neuropeptide Y receptors, each encoded by a different gene, have been identified in humans, all of which may represent therapeutic targets for obesity and other disorders.
Y1 -
Y2 -
Y4 -
Y5 -
Antagonists
BIBP-3226
Lu AA-33810
BIIE-0246
UR-AK49
References
External links
G protein-coupled receptors | Neuropeptide Y receptor | [
"Chemistry"
] | 224 | [
"G protein-coupled receptors",
"Signal transduction"
] |
5,693,122 | https://en.wikipedia.org/wiki/Pseudorandom%20graph | In graph theory, a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability. There is no concrete definition of graph pseudorandomness, but there are many reasonable characterizations of pseudorandomness one can consider.
Pseudorandom properties were first formally considered by Andrew Thomason in 1987. He defined a condition called "jumbledness": a graph $G$ is said to be $(p, \alpha)$-jumbled for real $p$ and $\alpha$ with $0 < p < 1 \le \alpha$ if
$\left| e(U) - p\binom{|U|}{2} \right| \le \alpha |U|$
for every subset $U$ of the vertex set $V(G)$, where $e(U)$ is the number of edges among $U$ (equivalently, the number of edges in the subgraph induced by the vertex set $U$). It can be shown that the Erdős–Rényi random graph $G(n, p)$ is almost surely $(p, O(\sqrt{np}))$-jumbled. However, graphs with less uniformly distributed edges, for example a graph on $2n$ vertices consisting of an $n$-vertex complete graph and $n$ completely independent vertices, are not $(p, \alpha)$-jumbled for any small $\alpha$, making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
Connection to local conditions
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, depending only on the codegree of two vertices rather than on every subset of the vertex set of the graph. Letting $\operatorname{cod}(u,v)$ be the number of common neighbors of two vertices $u$ and $v$, Thomason showed that, given a graph $G$ on $n$ vertices with minimum degree $np$, if $\operatorname{cod}(u,v) \le np^2 + \ell$ for every $u$ and $v$, then $G$ is $(p, \sqrt{(p+\ell)n})$-jumbled. This result shows how to check the jumbledness condition algorithmically in time polynomial in the number of vertices, and can be used to show pseudorandomness of specific graphs.
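A short sketch of that codegree check (an assumed illustration with the graph given as a numpy adjacency matrix; none of the code comes from the article itself):

    import numpy as np

    # Check Thomason's local condition: find the smallest l such that
    # cod(u, v) <= n p^2 + l for every pair of distinct vertices u, v.
    def max_codegree_excess(A, p):
        n = A.shape[0]
        cod = (A @ A).astype(float)     # (A^2)[u, v] = number of common neighbors
        np.fill_diagonal(cod, -np.inf)  # ignore the u = v entries
        return float(cod.max()) - n * p * p

    # Example on an Erdos-Renyi-style instance, where the excess should be small.
    rng = np.random.default_rng(0)
    n, p = 400, 0.5
    A = np.triu(rng.random((n, n)) < p, k=1).astype(int)
    A = A + A.T
    l = max_codegree_excess(A, p)       # G is then (p, sqrt((p + l) n))-jumbled
    print(l)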
Chung–Graham–Wilson theorem
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989: a graph $G$ on $n$ vertices with edge density $p$ and some $\varepsilon > 0$ can satisfy each of these conditions if
Discrepancy: for any subsets $X, Y$ of the vertex set $V = V(G)$, the number of edges between $X$ and $Y$ is within $\varepsilon n^2$ of $p|X||Y|$.
Discrepancy on individual sets: for any subset $X$ of $V$, the number of edges among $X$ is within $\varepsilon n^2$ of $p\binom{|X|}{2}$.
Subgraph counting: for every graph $H$, the number of labeled copies of $H$ among the subgraphs of $G$ is within $\varepsilon n^{v(H)}$ of $p^{e(H)} n^{v(H)}$.
4-cycle counting: the number of labeled 4-cycles among the subgraphs of $G$ is within $\varepsilon n^4$ of $p^4 n^4$.
Codegree: letting $\operatorname{cod}(u,v)$ be the number of common neighbors of two vertices $u$ and $v$, $\sum_{u,v \in V} \left| \operatorname{cod}(u,v) - p^2 n \right| \le \varepsilon n^3$.
Eigenvalue bounding: if $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ are the eigenvalues of the adjacency matrix of $G$, then $\lambda_1$ is within $\varepsilon n$ of $pn$ and $\max(|\lambda_2|, |\lambda_n|) \le \varepsilon n$.
These conditions may all be stated in terms of a sequence of graphs $\{G_n\}$ where $G_n$ is on $n$ vertices with $(1+o(1))\,p\binom{n}{2}$ edges. For example, the subgraph counting condition becomes that the number of labeled copies of any graph $H$ in $G_n$ is $(1+o(1))\,p^{e(H)} n^{v(H)}$ as $n \to \infty$, and the discrepancy condition becomes that $e(X,Y) = p|X||Y| + o(n^2)$ for all subsets $X, Y$, using little-o notation.
A pivotal result about graph pseudorandomness is the Chung–Graham–Wilson theorem, which states that many of the above conditions are equivalent, up to polynomial changes in . A sequence of graphs which satisfies those conditions is called quasi-random. It is considered particularly surprising that the weak condition of having the "correct" 4-cycle density implies the other seemingly much stronger pseudorandomness conditions. Graphs such as the 4-cycle, the density of which in a sequence of graphs is sufficient to test the quasi-randomness of the sequence, are known as forcing graphs.
Some implications in the Chung–Graham–Wilson theorem are clear by the definitions of the conditions: the discrepancy on individual sets condition is simply the special case of the discrepancy condition for , and 4-cycle counting is a special case of subgraph counting. In addition, the graph counting lemma, a straightforward generalization of the triangle counting lemma, implies that the discrepancy condition implies subgraph counting.
The fact that 4-cycle counting implies the codegree condition can be proven by a technique similar to the second-moment method. Firstly, the sum of codegrees can be bounded from below:
$\sum_{u,v} \operatorname{cod}(u,v) = \sum_{x} \deg(x)^2 \ge \frac{1}{n}\left(\sum_x \deg(x)\right)^2 = (1+o(1))\,p^2 n^3.$
Given $(1+o(1))\,p^4 n^4$ labeled 4-cycles, the sum of squares of codegrees is bounded from above:
$\sum_{u,v} \operatorname{cod}(u,v)^2 \le (1+o(1))\,p^4 n^4.$
Therefore, the Cauchy–Schwarz inequality gives
$\sum_{u,v} \left| \operatorname{cod}(u,v) - p^2 n \right| \le \left( n^2 \sum_{u,v} \left( \operatorname{cod}(u,v) - p^2 n \right)^2 \right)^{1/2},$
which can be expanded out using the bounds on the first and second moments of $\operatorname{cod}(u,v)$ to give the desired bound of $o(n^3)$. A proof that the codegree condition implies the discrepancy condition can be done by a similar, albeit trickier, computation involving the Cauchy–Schwarz inequality.
The eigenvalue condition and the 4-cycle condition can be related by noting that the number of labeled 4-cycles in $G$ is, up to $o(n^4)$ stemming from degenerate 4-cycles, $\operatorname{tr}(A^4) = \sum_i \lambda_i^4$, where $A$ is the adjacency matrix of $G$. The two conditions can then be shown to be equivalent by invocation of the Courant–Fischer theorem.
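A numerical sketch of this relation (an assumed illustration; the random instance and sizes are arbitrary choices):

    import numpy as np

    # Compare tr(A^4) = sum of lambda_i^4 (closed 4-walks, which equal the
    # labeled 4-cycle count up to degenerate-walk terms of order o(n^4))
    # against the p^4 n^4 benchmark of the 4-cycle condition.
    rng = np.random.default_rng(1)
    n, p = 300, 0.4
    A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    A = A + A.T
    closed_walks = np.trace(np.linalg.matrix_power(A, 4))
    print(closed_walks / (p**4 * n**4))   # tends to 1 for a quasi-random sequence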
Connections to graph regularity
The concept of graphs that act like random graphs connects strongly to the concept of graph regularity used in the Szemerédi regularity lemma. For $\varepsilon > 0$, a pair of vertex sets $(X, Y)$ is called $\varepsilon$-regular if for all subsets $A \subseteq X$, $B \subseteq Y$ satisfying $|A| \ge \varepsilon|X|$, $|B| \ge \varepsilon|Y|$, it holds that
$|d(A,B) - d(X,Y)| \le \varepsilon,$
where $d(X,Y)$ denotes the edge density between $X$ and $Y$: the number of edges between $X$ and $Y$ divided by $|X||Y|$. This condition implies a bipartite analogue of the discrepancy condition, and essentially states that the edges between $X$ and $Y$ behave in a "random-like" fashion. In addition, it was shown by Miklós Simonovits and Vera T. Sós in 1991 that a graph satisfies the above weak pseudorandomness conditions used in the Chung–Graham–Wilson theorem if and only if it possesses a Szemerédi partition where nearly all densities are close to the edge density of the whole graph.
Sparse pseudorandomness
Chung–Graham–Wilson theorem analogues
The Chung–Graham–Wilson theorem, specifically the implication of subgraph counting from discrepancy, does not follow for sequences of graphs with edge density approaching $0$, or, for example, the common case of $d$-regular graphs on $n$ vertices with $d = o(n)$ as $n \to \infty$. The following sparse analogues of the discrepancy and eigenvalue bounding conditions are commonly considered:
Sparse discrepancy: for any subsets $X, Y$ of the vertex set $V$, the number of edges between $X$ and $Y$ is within $\varepsilon dn$ of $\frac{d}{n}|X||Y|$.
Sparse eigenvalue bounding: if $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ are the eigenvalues of the adjacency matrix of $G$, then $\max(|\lambda_2|, |\lambda_n|) \le \varepsilon d$.
It is generally true that this eigenvalue condition implies the corresponding discrepancy condition, but the reverse is not true: the disjoint union of a large random $d$-regular graph and a $(d+1)$-vertex complete graph has two eigenvalues of exactly $d$ but is likely to satisfy the discrepancy property. However, as proven by David Conlon and Yufei Zhao in 2017, slight variants of the discrepancy and eigenvalue conditions for $d$-regular Cayley graphs are equivalent up to linear scaling in $\varepsilon$. One direction of this follows from the expander mixing lemma, while the other requires the assumption that the graph is a Cayley graph and uses the Grothendieck inequality.
Consequences of eigenvalue bounding
A $d$-regular graph $G$ on $n$ vertices is called an $(n, d, \lambda)$-graph if, letting the eigenvalues of the adjacency matrix of $G$ be $d = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, it holds that $\max(|\lambda_2|, |\lambda_n|) \le \lambda$. The Alon–Boppana bound gives that $\max(|\lambda_2|, |\lambda_n|) \ge 2\sqrt{d-1} - o(1)$ (where the $o(1)$ term is as $n \to \infty$), and Joel Friedman proved that a random $d$-regular graph on $n$ vertices is $(n, d, \lambda)$ for $\lambda = 2\sqrt{d-1} + o(1)$. In this sense, how much $\lambda$ exceeds $2\sqrt{d-1}$ is a general measure of the non-randomness of a graph. There are graphs with $\lambda \le 2\sqrt{d-1}$, which are termed Ramanujan graphs. They have been studied extensively and there are a number of open problems relating to their existence and commonness.
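A brief sketch measuring $\lambda$ for a concrete regular graph (illustrative only; it assumes the networkx library, whose random_regular_graph generator is a real function):

    import numpy as np
    import networkx as nx

    # lambda = max(|lambda_2|, |lambda_n|) for a d-regular graph, compared
    # with the 2 sqrt(d - 1) threshold that defines Ramanujan graphs.
    n, d = 500, 6
    G = nx.random_regular_graph(d, n, seed=0)
    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
    lam = max(abs(eigs[1]), abs(eigs[-1]))
    print(lam, 2 * (d - 1) ** 0.5)   # Friedman: lam = 2 sqrt(d-1) + o(1) w.h.p.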
Given an $(n, d, \lambda)$-graph for small $\lambda$, many standard graph-theoretic quantities can be bounded to near what one would expect from a random graph. In particular, the size of $\lambda$ has a direct effect on subset edge density discrepancies via the expander mixing lemma. Other examples are as follows, letting $G$ be an $(n, d, \lambda)$-graph:
If $d \le n/2$, the vertex-connectivity $\kappa(G)$ of $G$ satisfies $\kappa(G) \ge d - \frac{36\lambda^2}{d}$.
If $\lambda \le d - 2$, $G$ is $d$-edge-connected. If $n$ is even, $G$ contains a perfect matching.
The maximum cut of $G$ is at most $\frac{n(d+\lambda)}{4}$.
The largest independent subset of a subset in is of size at least
The chromatic number of is at most
Connections to the Green–Tao theorem
Pseudorandom graphs factor prominently in the proof of the Green–Tao theorem. The theorem is proven by transferring Szemerédi's theorem, the statement that a set of positive integers with positive natural density contains arbitrarily long arithmetic progressions, to the sparse setting (as the primes have natural density 0 in the integers). The transference to sparse sets requires that the sets behave pseudorandomly, in the sense that corresponding graphs and hypergraphs have the correct subgraph densities for some fixed set of small (hyper)subgraphs. It is then shown that a suitable superset of the prime numbers, called the pseudoprimes, in which the primes are dense, obeys these pseudorandomness conditions, completing the proof.
References
Graph theory | Pseudorandom graph | [
"Mathematics"
] | 1,854 | [
"Discrete mathematics",
"Mathematical relations",
"Graph theory",
"Combinatorics"
] |
5,693,354 | https://en.wikipedia.org/wiki/Leo%20Harrington | Leo Anthony Harrington (born May 17, 1946) is a professor of mathematics at the University of California, Berkeley who works in
recursion theory, model theory, and set theory.
Having retired from mathematics, Harrington now works in philosophy.
His notable results include proving the Paris–Harrington theorem along with Jeff Paris,
showing that if the axiom of determinacy holds for all analytic sets then x# exists for all reals x,
and proving with Saharon Shelah that the first-order theory of the partially ordered set of recursively enumerable Turing degrees is undecidable.
References
External links
Home page.
Living people
American logicians
20th-century American mathematicians
21st-century American mathematicians
Massachusetts Institute of Technology alumni
University of California, Berkeley College of Letters and Science faculty
Model theorists
American set theorists
1946 births | Leo Harrington | [
"Mathematics"
] | 172 | [
"Model theorists",
"Model theory"
] |
5,693,461 | https://en.wikipedia.org/wiki/Hendrik%20Lenstra | Hendrik Willem Lenstra Jr. (born 16 April 1949, Zaandam) is a Dutch mathematician.
Biography
Lenstra received his doctorate from the University of Amsterdam in 1977 and became a professor there in 1978. In 1987, he was appointed to the faculty of the University of California, Berkeley; starting in 1998, he divided his time between Berkeley and the University of Leiden, until 2003, when he retired from Berkeley to take a full-time position at Leiden.
Three of his brothers, Arjen Lenstra, Andries Lenstra, and Jan Karel Lenstra, are also mathematicians. Jan Karel Lenstra is the former director of the Netherlands Centrum Wiskunde & Informatica (CWI). Hendrik Lenstra was the Chairman of the Program Committee of the International Congress of Mathematicians in 2010.
Scientific contributions
Lenstra has worked principally in computational number theory. He is well known for:
Co-discovering the Lenstra–Lenstra–Lovász lattice basis reduction algorithm (in 1982);
Developing a polynomial-time algorithm for solving a feasibility integer programming problem when the number of variables is fixed (in 1983);
Discovering the elliptic curve factorization method (in 1987);
Computing all solutions to the inverse Fermat equation (in 1992);
The Cohen-Lenstra heuristics - a set of precise conjectures about the structure of class groups of quadratic fields.
Awards and honors
In 1984, Lenstra became a member of the Royal Netherlands Academy of Arts and Sciences. He won the Fulkerson Prize in 1985 for his research using the geometry of numbers to solve integer programs with few variables in time polynomial in the number of constraints. He was awarded the Spinoza Prize in 1998, and on 24 April 2009 he was made a Knight of the Order of the Netherlands Lion. In 2009, he was awarded a Gauss Lecture by the German Mathematical Society. In 2012, he became a fellow of the American Mathematical Society.
Publications
Euclidean Number Fields. Parts 1-3, Mathematical Intelligencer 1980
with A. K. Lenstra: Algorithms in Number Theory. pp. 673–716, in Jan van Leeuwen (ed.): Handbook of Theoretical Computer Science, Vol. A: Algorithms and Complexity. Elsevier and MIT Press 1990.
Algorithms in Algebraic Number Theory. Bulletin of the AMS, vol. 26, 1992, pp. 211–244.
Primality testing algorithms. Séminaire Bourbaki 1981.
with Peter Stevenhagen: Artin reciprocity and Mersenne primes. Nieuw Archief voor Wiskunde 2000.
with Peter Stevenhagen: Chebotarev and his density theorem. Mathematical Intelligencer 1992 (Online at Lenstra's Homepage).
Profinite Fibonacci Numbers, December 2005, PDF
See also
Print Gallery (M. C. Escher)
References
External links
, Homepage at the Leiden Mathematisch Instituut
1949 births
Living people
20th-century Dutch mathematicians
21st-century Dutch mathematicians
Members of the Royal Netherlands Academy of Arts and Sciences
Number theorists
Spinoza Prize winners
University of Amsterdam alumni
Academic staff of the University of Amsterdam
Academic staff of Leiden University
University of California, Berkeley College of Letters and Science faculty
Fellows of the American Mathematical Society
Dutch expatriates in the United States
People from Zaanstad | Hendrik Lenstra | [
"Mathematics"
] | 684 | [
"Number theorists",
"Number theory"
] |
5,693,489 | https://en.wikipedia.org/wiki/Kenneth%20Kellermann | Kenneth Irwin Kellermann (born July 1, 1937) is an American astronomer at the National Radio Astronomy Observatory. He is best known for his work on quasars. He won the Helen B. Warner Prize for Astronomy of the American Astronomical Society in 1971, and the Bruce Medal of the Astronomical Society of the Pacific in 2014.
Kellermann is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society.
Kellermann was born in New York City to Alexander Kellermann and Rae Kellermann (née Goodstein). His paternal grandparents emigrated from Hungary and his maternal grandparents from Romania.
Publications
Direct Link
References
Living people
1937 births
Members of the Eurasian Astronomical Society
Foreign members of the Russian Academy of Sciences
Scientists from New York City
Jewish American scientists
American people of Hungarian-Jewish descent
American people of Romanian-Jewish descent
21st-century American Jews
Members of the American Philosophical Society | Kenneth Kellermann | [
"Astronomy"
] | 184 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
5,693,654 | https://en.wikipedia.org/wiki/Celebrity%20worship%20syndrome | Celebrity worship syndrome (CWS) or celebrity obsession disorder (COD) is an obsessive addictive disorder in which a person becomes overly involved with the details of a celebrity's personal and professional life. Psychologists have indicated that though many people obsess over film, television, sport and pop stars, the only common factor between them is that they are all figures in the public eye. Written observations of celebrity worship date back to the 19th century.
Classifications
Simple obsessional
Simple obsessional stalking constitutes a majority of all stalking cases, anywhere from 69 to 79%, and is dominated by males. This form of stalking is generally associated with individuals who have shared previous personal relationships with their victims. However, this is not necessarily the case between a common member of the public exhibiting celebrity worship syndrome and the famous person with whom they are obsessed. Individuals that meet the criteria of being labeled as a "simple obsessional stalker" tend to share a set of characteristics including an inability to have successful personal relationships in their own lives, social awkwardness, feelings of powerlessness, a sense of insecurity, and very low self-esteem. Of these characteristics, low self-esteem plays a large role in the obsession that these individuals develop with their victim, in this case, the famous person. If the individual is unable to have any sort of connection to the celebrity with which they are obsessed, their own sense of self-worth may decline.
Entertainment-social
This level of admiration is linked to a celebrity's ability to capture the attention of their fans. Entertainment-social celebrity worship is used to describe a relatively low level of obsession. An example of a typical entertainment-social attitude would be "My friends and I like to discuss what my favorite celebrity has done."
It may also be seen in the form of obsessively following celebrities on social media. Although considered the lowest level of celebrity worship, it has been associated with a number of negative effects, including the development of unhealthy eating tendencies (eating disorders), anxiety, depression, poor body image and low self-esteem, especially in young adolescents from age 13 to the mid-20s. This is supported by a study carried out on a group of female adolescents between the ages of 17 and 20.
Intense-personal
This is an intermediate level of obsession that is associated with neuroticism as well as behaviors linked to psychoticism. An example of an intense-personal attitude toward a celebrity would include claims such as "I consider my favorite celebrity to be my soul mate." It has been found that in particular, people who worship celebrities in this manner often have low self-esteem with regards to their body type, especially if they think that the celebrity is physically attractive. The effects of intense-personal celebrity worship on body image are seen in some cases of cosmetic surgery. Females who have high levels of obsession are more accepting of cosmetic surgery than those who do not obsess over celebrities to this extent.
Love obsessional
As the name suggests, individuals who demonstrate this sort of stalking behavior develop a love obsession with somebody who they have no personal relation to. Love obsessional stalking accounts for roughly 20–25% of all stalking cases. The people that demonstrate this form of stalking behavior are likely to have a mental disorder, commonly either schizophrenia or paranoia. Individuals that are love obsessional stalkers often convince themselves that they are in fact in a relationship with the subject of their obsession. For example, a woman who had been stalking David Letterman for a total of five years claimed to be his wife when she had no personal connection to him. Other celebrities who have fallen victim to this form of stalking include Jennifer Aniston, Halle Berry, Jodie Foster, and Mila Kunis, along with numerous other A-list stars.
Erotomanic
Erotomanic, originating from the word erotomania, refers to stalkers who genuinely believe that their victims are in love with them. The victims in this case are almost always well known within their community or within the media, meaning that they can range from small-town celebrities to famous personalities from Hollywood. Comprising less than 10% of all stalking cases, erotomanic stalkers are the least common. Unlike simple-obsessional stalkers, a majority of the individuals in this category of stalking are women. Similar to love-obsessional stalkers, the behavior of erotomanic stalkers may be a result of an underlying psychological disorder such as schizophrenia, bipolar disorder, or major depression.
Individuals who have erotomania tend to believe that the celebrity with whom they are obsessed with is utilizing the media as a way to communicate with them by sending special messages or signals. Although these stalkers have unrealistic beliefs, they are less likely to seek any form of face-to-face interaction with their celebrity obsession, therefore posing less of a threat to them.
Borderline-pathological
This classification is the most severe level of celebrity worship. It is characterized by pathological attitudes and behaviors, as a result of celebrity worship. This includes willingness to commit crime on behalf of the celebrity who is the object of worship, or to spend money on common items used by the celebrity at some point, such as napkins.
Mental health
Evidence indicates that poor mental health is correlated with celebrity worship. Researchers have examined the relationship between celebrity worship and mental health in United Kingdom adult samples. One study found evidence to suggest that the intense-personal celebrity worship dimension was related to higher levels of depression and anxiety. Similarly, another study in 2004 found that the intense-personal celebrity worship dimension was related not only to higher levels of depression and anxiety, but also to higher levels of stress, negative affect, and reports of illness. Both these studies showed no evidence for a significant relationship between either the entertainment-social or the borderline-pathological dimensions of celebrity worship and mental health.
Another correlated pathology examined the role of celebrity interest in shaping body image cognitions. Among three separate UK samples (adolescents, students, and older adults), individuals selected a celebrity of their own sex whose body/figure they liked and admired, and then completed the Celebrity Attitude Scale along with two measures of body image. Significant relationships were found between attitudes toward celebrities and body image among female adolescents only.
The findings suggested that, in female adolescence, there is an interaction between intense-personal celebrity worship and body image between the ages of 14 and 16, with some tentative evidence suggesting that this relationship disappears at the onset of adulthood, between the ages of 17 and 20. These results are consistent with those of authors who stress the importance of the formation of relationships with media figures, and suggest that relationships with celebrities perceived as having a good body shape may lead to a poor body image in female adolescents. This is again supported by a study that investigated the link between mass media and poor self-worth/body image in a sample group of females between the ages of 17 and 20.
Within a clinical context the effect of celebrity might be more extreme, particularly when considering extreme aspects of celebrity worship. Relationships between the three classifications of celebrity worship (entertainment-social, intense-personal and borderline-pathological celebrity worship and obsessiveness), ego-identity, fantasy proneness and dissociation were examined. Two of these variables drew particular attention: fantasy proneness and dissociation. Fantasy proneness involves fantasizing for a duration of time, reporting hallucinatory intensities as real, reporting vivid childhood memories, having intense religious and paranormal experiences. Dissociation is the lack of a normal integration of experiences, feelings, and thoughts in everyday consciousness and memory; in addition, it is related to a number of psychiatric problems.
Though low levels of celebrity worship (entertainment-social) are not associated with any clinical measures, medium levels of celebrity worship (intense-personal) are related to fantasy proneness (approximately 10% of the shared variance), while high levels of celebrity worship (borderline-pathological) share a greater association with fantasy proneness (around 14% of the shared variance) and dissociation (around 3% of the shared variance, though the effect size of this is small and most probably due to the large sample size). This finding suggests that as "celebrity worship becomes more intense, and the individual perceives having a relationship with the celebrity, the more the individual is prone to fantasies."
Celebrity worship syndrome can lead to the manifestation of unhealthy tendencies such as materialism and compulsive buying, as supported by a study carried out by Robert A. Reeves, Gary A. Baker and Chris S. Truluck, whose results link high rates of celebrity worship to high rates of materialism and compulsive buying.
A number of historical, ethnographic, netnographic and auto-ethnographic studies in diverse academic disciplines such as film studies, media studies, cultural studies and consumer research – which, unlike McCutcheon et al., who focused mainly on student samples (with two exceptions), have actually studied real fans in the field – have come to very different conclusions that are more in line with Horton & Wohl's original concept of parasocial interaction or an earlier study by Leets.
See also
Anti-fan
Fanaticism
Fictosexuality
Nijikon
Obsessive love disorder
Paparazzi
Parasocial interaction
Sasaeng fan
Stalking
Stan (fan)
Yandere
Cyberstalking
References
Further reading
Behavioral addiction
Celebrity fandom
Fandom
Social phenomena
Stalking | Celebrity worship syndrome | [
"Biology"
] | 1,942 | [
"Behavior",
"Aggression",
"Stalking"
] |
5,693,660 | https://en.wikipedia.org/wiki/Protocyanin | Protocyanin is an anthocyanin pigment that is responsible for the red colouration of roses, but in cornflowers is blue. The pigment was first isolated in 1913 from the blue cornflower (Centaurea cyanus), and the identical pigment was isolated from a red rose in 1915. The difference in colour had been explained as a difference in flower-petal pH, but the pigment in the blue cornflower has been shown to be a supermolecular pigment consisting of anthocyanin, flavone, one ferric ion, one magnesium and two calcium ions forming a copigmentation complex.
The molecular formula of the protocyanin complex is C366H384O228FeMg.
References
Anthocyanins | Protocyanin | [
"Chemistry"
] | 166 | [
"PH indicators",
"Anthocyanins"
] |
5,694,060 | https://en.wikipedia.org/wiki/Nagai%20Nagayoshi | Nagai Nagayoshi (1844–1929) was a Japanese pharmacist, best known for his study of ephedrine.
Early life
Nagai was born in Myōdō District, Awa Province in what is now Tokushima Prefecture, as the son of a doctor and started studying rangaku medicine at the Dutch Medical School of Nagasaki (Igaku-Denshujo) in 1864. While in Nagasaki, he made the acquaintance of Ōkubo Toshimichi, Itō Hirobumi, and other future leaders of the Meiji government.
Career
Nagai continued his studies at Tokyo Imperial University and became the first doctor of pharmacy in Japan. He was sent under government sponsorship to Prussia in 1871 to study at the University of Berlin. He was the only civilian in a group of military students sent to study in Great Britain and France, and he traveled by way of the United States and Great Britain. While in Berlin, he resided at the home of Japanese diplomat Aoki Shūzō. He was influenced by the lectures of von Hofmann, and received a doctorate with a study on eugenol while working as an assistant at von Hofmann's laboratory. He decided to take up organic chemistry in 1873.
Nagai returned to Japan in 1883 to take up a position at the Tokyo Imperial University, and became Professor of Chemistry and Pharmacy there in 1893. His research centered on the chemical analysis of various Japanese and Chinese traditional herbal medicines.
While in Germany, Nagai married Therese Schumacher, the daughter of a wealthy lumber and mining magnate. On their return to Japan, she became a professor of German language at Japan Women's University, and was active in introducing German foods and culture to Japan. In 1923, Nagai and his wife hosted Albert Einstein and his wife during their visit to Japan.
His son, Alexander Nagai, served as a diplomat at the Embassy of Japan in Berlin until the end of World War II.
As first president of the Pharmaceutical Society of Japan (PSJ, founded in 1880); Nagai had an important impact on the propagation of chemistry and pharmaceutical sciences in an industrializing Japan.
Death
Nagai died in 1929 in Tokyo of acute pneumonia.
Scientific contributions
Isolation of ephedrine from Ephedra vulgaris in 1885. Nagai recognized it to be the active component of the plant.
Synthesis of methamphetamine from ephedrine in 1893. Methamphetamine was later synthesized in crystalline form in 1919 by Akira Ogata.
Isolation of rotenone from Derris elliptica in 1902. Nagayoshi named the substance after the Japanese name for the plant, roten.
Synthesis and structural elucidation of ephedrine in 1929.
References
Lock, Margaret. East Asian Medicine in Urban Japan: Varieties of Medical Experience. University of California Press; Reprint edition (1984).
Schultes, Richard Evans, ed. Ethnobotany: The Evolution of a Discipline. Timber Press, Incorporated (2005).
W Pötsch. Lexikon bedeutender Chemiker (VEB Bibliographisches Institut Leipzig, 1989)
Specific
External links
Pharmaceutical Society of Japan
Museum of the Tokyo Pharmaceutical Association (Japanese)
1844 births
1929 deaths
Japanese organic chemists
People from Tokushima Prefecture
University of Tokyo alumni
Japanese expatriates in Germany
People of Meiji-era Japan
Japanese inventors
20th-century Japanese chemists
19th-century Japanese chemists | Nagai Nagayoshi | [
"Chemistry"
] | 682 | [
"Organic chemists",
"Japanese organic chemists"
] |
5,694,182 | https://en.wikipedia.org/wiki/Decentralized%20computing | Decentralized computing is the allocation of resources, both hardware and software, to each individual workstation or office location. In contrast, centralized computing exists when the majority of functions are carried out at, or obtained from, a remote centralized location. Decentralized computing is a trend in modern-day business environments, the opposite of centralized computing, which was prevalent during the early days of computers.
A decentralized computer system has many benefits over a conventional centralized network. Desktop computers have advanced so rapidly, that their potential performance far exceeds the requirements of most business applications. This results in most desktop computers remaining idle (in relation to their full potential). A decentralized system can use the potential of these systems to maximize efficiency. However, it is debatable whether these networks increase overall effectiveness.
All computers have to be updated individually with new software, unlike a centralized computer system. Decentralized systems still enable file sharing and all computers can share peripherals such as printers and scanners as well as modems, allowing all the computers in the network to connect to the internet.
A collection of decentralized computer systems can form components of a larger computer network, held together by local stations of equal importance and capability. These systems are capable of running independently of each other.
Origins of decentralized computing
Decentralized computing traces its origins to the work of David Chaum.
In 1979 he conceived the first concept of a decentralized computer system, known as a mix network. It provided an anonymous email communications network, which decentralized the authentication of the messages in a protocol that would become the precursor to onion routing, the protocol underlying the Tor browser. Through this initial development of an anonymous communications network, David Chaum applied his mix-network philosophy to design the world's first decentralized payment system, which he patented in 1980. Later, in 1982, for his PhD dissertation, he wrote about the need for decentralized computing services in the paper "Computer Systems Established, Maintained and Trusted by Mutually Suspicious Groups". Chaum proposed an electronic payment system called Ecash in 1982. Chaum's company DigiCash implemented this system from 1990 until 1998.
Peer-to-peer
Based on a "grid model" a peer-to-peer system, or P2P system, is a collection of applications run on several computers, which connect remotely to each other to complete a function or a task. There is no main operating system to which satellite systems are subordinate. This approach to software development (and distribution) affords developers great savings, as they don't have to create a central control point. An example application is LAN messaging which allows users to communicate without a central server.
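A toy sketch of such serverless LAN messaging (a hedged illustration; the UDP port number and message handling below are arbitrary choices, not any real application's protocol):

    import socket

    # Minimal peer-to-peer LAN chat sketch: every peer both broadcasts and
    # listens on the same UDP port, with no central server coordinating them.
    PORT = 50000   # arbitrary illustrative port

    def send(message):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(message.encode(), ("255.255.255.255", PORT))

    def listen():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", PORT))
            while True:
                data, addr = s.recvfrom(4096)
                print(addr[0] + ": " + data.decode())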
Peer-to-peer networks in which no entity controls an effective or controlling number of the network nodes, and which run open-source software likewise not controlled by any entity, are said to effect a decentralized network protocol. These networks are harder for outside actors to shut down, as they have no central headquarters.
File sharing applications
One of the most notable debates over decentralized computing involved Napster, a music file sharing application, which granted users access to an enormous database of files. Record companies brought legal action against Napster, blaming the system for lost record sales. Napster was found in violation of copyright law for facilitating the unlicensed distribution of music files, and was shut down.
After the fall of Napster, there was demand for a file sharing system that would be less vulnerable to litigation. Gnutella, a decentralized system, was developed. This system allowed files to be queried and shared between users without relying upon a central directory, and this decentralization shielded the network from litigation related to the actions of individual users.
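The decentralized query idea can be illustrated with a toy flood-search over an in-memory peer graph (the peer names, TTL value and file lists are invented for illustration; real Gnutella messages are more involved):

    # Toy Gnutella-style query flooding: each peer forwards the query to its
    # neighbors until the time-to-live runs out; no central directory exists.
    class Peer:
        def __init__(self, name, files):
            self.name, self.files, self.neighbors = name, set(files), []

    def query(peer, filename, ttl=3, seen=None):
        seen = set() if seen is None else seen
        if peer.name in seen or ttl < 0:
            return []
        seen.add(peer.name)
        hits = [peer.name] if filename in peer.files else []
        for nb in peer.neighbors:
            hits += query(nb, filename, ttl - 1, seen)
        return hits

    a, b, c = Peer("a", []), Peer("b", ["song.mp3"]), Peer("c", ["song.mp3"])
    a.neighbors, b.neighbors = [b], [c]
    print(query(a, "song.mp3"))   # ['b', 'c']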
Decentralized web
See also
Centralized computing
Distributed computing
Decentralized information technology
Decentralized network 42
Decentralized Autonomous Organization
Federation (information technology)
Federated social network
Blockchain
Decentralized finance
References
Notes | Decentralized computing | [
"Technology"
] | 793 | [
"Centralized computing",
"IT infrastructure",
"Computer systems"
] |
5,694,269 | https://en.wikipedia.org/wiki/The%20Borderland%20of%20Sol | "The Borderland of Sol" is a science fiction novelette by American writer Larry Niven. It is the fifth in the Known Space series of stories about crashlander Beowulf Shaeffer.
The story was originally published in Analog, January 1975, printed in the collection Tales of Known Space, Niven, Del Ray, reissued 1985 (), and reprinted in Crashlander, Larry Niven, New York: Ballantine, 1994, pp. 160–207 (). The story won the Hugo Award for Best Novelette in 1976 and was nominated for the Locus Poll Award for Best Novelette in 1976.
It is one of the earliest works of fiction to feature a black hole.
Segments of the novel Fleet of Worlds serve as a prequel to the story.
Plot summary
A rash of spaceship disappearances around Earth results in a dearth of available transit, stranding Beowulf "Bey" Shaeffer on Jinx away from his love, Sharrol Janss. While visiting the Institute of Knowledge he runs into his old friend Carlos Wu. Carlos is the father of Janss' two children, a fact that he found so embarrassing that he decided to leave Earth rather than face Bey upon his expected return. But Bey proves perfectly happy to hear about the children, as his albinism denies him a license to have children of his own, and he and Sharrol had agreed that Carlos should act as a surrogate.
Reconciled, Carlos mentions that he has been contacted by Sigmund Ausfaller of the Bureau of Alien Affairs, who has offered him a ride to Earth. Bey has had several run-ins with Ausfaller in the past; Ausfaller aims to protect human-alien relations in any way he can, and at one point he planted a bomb on Bey's alien-provided General Products' #2 hull to prevent him from stealing it and potentially causing a sticky diplomatic incident. Worried about what might happen to Carlos at Ausfaller's hands, he decides to accompany him on his next meeting.
Bey, Carlos and Ausfaller meet. Ausfaller explains that alien passengers were aboard some of the vessels that disappeared, and he has been given the job of finding out what is going on to avoid further issues. His ship, the Hobo Kelly, appears to be a cargo and passenger ship, but in reality is a warship built out of a nearly invulnerable General Products' #2 hull, capable of 30G of acceleration, armed with guided missiles, an x-ray laser and smaller laser cannons. Additionally, of the eight ships that have disappeared to date, only two were incoming, the other six were outgoing. Their inbound mission should thus be safe.
This proves to be the case for most of the journey, but only moments before entering the outskirts of Sol the ship suddenly lurches and drops out of hyperspace. Examining the area they discover three small tugs at some distance, but nothing else of interest. They turn towards Sol and continue on their way home while Bey checks the ship to try to find out what happened. He discovers that the hyperdrive motor is completely missing from the hull. When he informs the crew, Carlos uses the ship's hyperwave communications to retrieve information from Elephant's databanks on Earth, looking up a number of black hole related topics.
When his inquiries are finally answered, he finds that one of the bits of information was written by Dr. Julian Forward, a researcher Carlos has wanted to meet. Carlos calls him and they discuss the disappearing hyperdrive motor. Forward invites them to Forward Station to wait for a ferry to Earth. They agree to his plans, although Forward Station is right where the ships are disappearing. Ausfaller agrees that Carlos and Bey can go to Forward Station; he did not reveal himself during the conversation and the small ship would not give away the fact that there was a third crewmember.
After equipping for potential combat, Bey and Carlos ferry to the station to meet with Forward. He shows them his prize possession, the "Grabber", an electromagnetic assembly that lets him shake masses of neutronium to produce polarized gravitational waves, which he is attempting to use to establish communications with alien races who may not have discovered hyperwave. When Forward asks Carlos what he thinks has happened, Carlos explains that a black hole might have been able to do it - gravity is one of the few forces that can penetrate a General Products starship hull. When Carlos admits that he has heard of quantum black holes, Forward takes them both captive.
Forward explains that he found the Tunguska meteorite, which was actually a small black hole. Returning it to the station he fed it the sphere of neutronium he was previously using for his communications attempt, thereby increasing its mass, and then fed in the exhaust of an ion engine to give it a permanent electric charge. The hole could now be manipulated with magnets, and towed around by the tugs. The tugs move it into the path of incoming starships to disable them, and then pirate the now-defenseless ships.
When the tugs return to the station, Forward suddenly asks if someone else is aboard the Hobo Kelly, a question that is answered when Ausfaller fires on the tugs, destroying two and causing the third to flee. The tugs drop the black hole, but Forward and his assistant Angel manage to catch it in the Grabber. However, by this time Bey has managed to free himself enough to cut through his bonds, which turn out to be the power cable feeding the Grabber, releasing the black hole once again. As the hole falls towards the station it hits the dome and cuts a hole in it, sucking Forward's assistant into it. Forward makes some adjustments on his control panel and is then sucked into the hole as well.
Ausfaller rescues Bey and Carlos, who explain what was happening. They speculate that Forward deliberately turned up the air pressure in his final moments in order to allow the two to live until Ausfaller returned. They watch as the quantum black hole collapses the asteroid and it disappears in a searing blast of light.
Trivia
According to the afterword Niven wrote for this story in the collection Playgrounds of the Mind, the character Julian Forward is named in honor of science fiction author Robert Forward. Forward returned the 'favor' by naming a character in his own novel Dragon's Egg Pierre Carnot Niven.
Niven originally pitched this story as an episode of Star Trek: The Animated Series in 1973, but it was rejected by D. C. Fontana. They bought his story The Soft Weapon instead which was produced as The Slaver Weapon.
The story is retold, from the point of view of Sigmund Ausfaller, in Juggler of Worlds.
Pluto is dismissed as an escaped moon of Neptune, while the solar system's outer planets are listed as Neptune, Persephone, Caïna, Antenora, and Ptolemea, after the rounds of Cocytus in Dante's Inferno, with Judecca reserved by the International Astronomical Union for the next discovery. Persephone has a retrograde orbit that is tilted 120 degrees to the ecliptic.
See also
Neutron Star, the first story in the Beowulf Shaeffer series.
At the Core, the second story in the series.
Flatlander, the third story in the series.
Grendel, the fourth story in the series.
Procrustes, the sixth story in the series.
Ghost, the framing story in the collection Crashlander.
Fly-By-Night, the seventh story in the series, written after Crashlander.
References
External links
Fiction about black holes
Known Space stories
Hugo Award for Best Novelette–winning works
Short stories by Larry Niven
1975 short stories
Fiction set around Sirius
Fiction about trans-Neptunian objects
Works originally published in Analog Science Fiction and Fact | The Borderland of Sol | [
"Physics"
] | 1,608 | [
"Black holes",
"Unsolved problems in physics",
"Fiction about black holes"
] |
5,694,932 | https://en.wikipedia.org/wiki/Tecplot | Tecplot is a family of visualization & analysis software tools developed by American company Tecplot, Inc., which is headquartered in Bellevue, Washington. The firm was formerly operated as Amtec Engineering. In 2016, the firm was acquired by Vela Software, an operating group of Constellation Software, Inc. (TSX:CSU).
Tecplot 360
Tecplot 360 is a Computational Fluid Dynamics (CFD) and numerical simulation software package used in post-processing simulation results. Tecplot 360 is also used in chemistry applications to visualize molecule structure by post-processing charge density data.
Common tasks associated with post-processing analysis of flow solver (e.g. Fluent, OpenFOAM) data include calculating grid quantities (e.g. aspect ratios, skewness, orthogonality and stretch factors); normalizing data; deriving flow-field functions like pressure coefficient or vorticity magnitude; verifying solution convergence; estimating the order of accuracy of solutions; and interactively exploring data through cut planes (a slice through a region), iso-surfaces (3-D maps of concentrations) and particle paths (dropping an object in the "fluid" and watching where it goes).
Tecplot 360 may be used to visualize output from programming languages such as Fortran; a minimal sketch of writing the ASCII form of a Tecplot data file appears after the format list below. Tecplot's native data format is PLT or SZPLT. Many other formats are also supported, including:
CFD Formats:
VTK, CGNS, FLOW-3D (Flow Science, Inc.), ANSYS CFX, ANSYS FLUENT .cas and .dat format and polyhedra, OpenFOAM, PLOT3D (NASA), Tecplot and polyhedra, Ensight Gold, HDF5 (Hierarchical Data Format), CONVERGE CFD (Convergent Science), and Barracuda Virtual Reactor (CPFD Software).
Data Formats:
HDF, Microsoft Excel (Windows only), comma- or space-delimited ASCII.
FEA Formats:
Abaqus, ANSYS, FIDAP Neutral, LSTC/DYNA LS-DYNA, NASTRAN MSC Software, Patran MSC Software, PTC Mechanica, SDRC/IDEAS universal and 3D Systems STL.
ParaView supports Tecplot format through a VisIt importer.
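For illustration, the sketch below writes a minimal file in Tecplot's classic ASCII layout; the exact header keywords vary between product versions, and the variable names and values here are invented for the example:

#include <stdio.h>

/* Sketch only: emit a tiny Tecplot-style ASCII data file containing
 * one ordered 2x2 zone with three variables. */
int main(void)
{
    FILE *f = fopen("example.dat", "w");
    if (!f) return 1;
    fprintf(f, "TITLE = \"Example\"\n");
    fprintf(f, "VARIABLES = \"X\", \"Y\", \"Pressure\"\n");
    fprintf(f, "ZONE I=2, J=2, DATAPACKING=POINT\n");
    /* One line per grid point: X Y Pressure */
    fprintf(f, "0.0 0.0 1.00\n");
    fprintf(f, "1.0 0.0 0.95\n");
    fprintf(f, "0.0 1.0 1.05\n");
    fprintf(f, "1.0 1.0 1.00\n");
    fclose(f);
    return 0;
}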
Tecplot RS
Tecplot RS is a tool tailored towards visualizing the results of
reservoir simulations, which model the flow of fluids through porous media, as in oil and gas fields, and aquifers.
Tecplot Focus
Tecplot Focus is plotting software designed for measured field data, performance plotting of test data, mathematical analysis, and general engineering plotting.
Tecplot Chorus
Tecplot Chorus is a data management, design optimization, and aero database development framework used for comparing collections of CFD simulations.
References
External links
Official Site
User Community
File format definition
Graphics software
Computational fluid dynamics
Plotting software
Software that uses VTK | Tecplot | [
"Physics",
"Chemistry"
] | 617 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
5,695,222 | https://en.wikipedia.org/wiki/SMAD%20%28protein%29 | Smads (or SMADs) comprise a family of structurally similar proteins that are the main signal transducers for receptors of the transforming growth factor beta (TGF-β) superfamily, which are critically important for regulating cell development and growth. The abbreviation refers to the homologies to the Caenorhabditis elegans SMA ("small" worm phenotype) and MAD family ("Mothers Against Decapentaplegic") of genes in Drosophila.
There are three distinct sub-types of Smads: receptor-regulated Smads (R-Smads), common partner Smads (Co-Smads), and inhibitory Smads (I-Smads). The eight members of the Smad family are divided among these three groups. Trimers of two receptor-regulated SMADs and one co-SMAD act as transcription factors that regulate the expression of certain genes.
Sub-types
The R-Smads consist of Smad1, Smad2, Smad3, Smad5 and Smad8/9, and are involved in direct signaling from the TGF-β receptor.
Smad4 is the only known human Co-Smad, and has the role of partnering with R-Smads to recruit co-regulators to the complex.
Finally, Smad6 and Smad7 are I-Smads that work to suppress the activity of R-Smads. While Smad7 is a general TGF-β signal inhibitor, Smad6 associates more specifically with BMP signaling. R/Co-Smads are primarily located in the cytoplasm, but accumulate in the nucleus following TGF-β signaling, where they can bind to DNA and regulate transcription. However, I-Smads are predominantly found in the nucleus, where they can act as direct transcriptional regulators.
Discovery and nomenclature
Before Smads were discovered, it was unclear what downstream effectors were responsible for transducing TGF-β signals. Smads were first discovered in Drosophila, in which they are known as Mothers against dpp (Mad), through a genetic screen for dominant enhancers of decapentaplegic (dpp), the Drosophila version of TGF-β. Studies found that Mad null mutants showed similar phenotypes to dpp mutants, suggesting that Mad played an important role in some aspect of the dpp signaling pathway.
A similar screen done in Caenorhabditis elegans, in which the relevant genes are named sma for the small body size of mutants, revealed three genes, sma-2, sma-3 and sma-4, that had mutant phenotypes similar to those of the TGF-β-like receptor Daf-4. The human homologue of Mad and Sma was named Smad1, a portmanteau of the two previously discovered gene names. When injected into Xenopus embryo animal caps, Smad1 was found to be able to reproduce the mesoderm-ventralizing effects that BMP4, a member of the TGF-β family, has on embryos. Furthermore, it was demonstrated that Smad1 had transactivational ability localized at the carboxy terminus, which can be enhanced by adding BMP4. This evidence suggests that Smad1 is responsible in part for transducing TGF-β signals.
Protein
Smads are roughly between 400 and 500 amino acids long, and consist of two globular regions at the amino and carboxy termini, connected by a linker region. These globular regions are highly conserved in R-Smads and Co-Smads, and are called Mad homology 1 (MH1) at the N-terminus, and MH2 at the C-terminus. The MH2 domain is also conserved in I-Smads. The MH1 domain is primarily involved in DNA binding, while the MH2 is responsible for the interaction with other Smads and also for the recognition of transcriptional co-activators and co-repressors. R-Smads and Smad4 interact with several DNA motifs through the MH1 domain. These motifs include CAGAC and its CAGCC variant, as well as the 5-bp consensus sequence GGC(GC/CG). Receptor-phosphorylated R-Smads can form homotrimers, as well as heterotrimers with Smad4 in vitro, via interactions between the MH2 domains. Trimers of one Smad4 molecule and two receptor-phosphorylated R-Smad molecules are thought to be the predominant effectors of TGF-β transcriptional regulation.
The linker region between MH1 and MH2 is not just a connector, but also plays a role in protein function and regulation. Specifically, R-Smads are phosphorylated in the nucleus at the linker domain by CDK8 and 9, and these phosphorylations modulate the interaction of Smad proteins with transcriptional activators and repressors. Furthermore, after this phosphorylation step, the linker undergoes a second round of phosphorylations by GSK3, labelling Smads for their recognition by ubiquitin ligases, and targeting them for proteasome-mediated degradation. The transcription activators and the ubiquitin ligases both contain pairs of WW domains. These domains interact with the PY motif present in the R-Smad linker, as well as with the phosphorylated residues located in the proximity of the motif. Indeed, the different phosphorylation patterns generated by CDK8/9 and GSK3 define the specific interactions with either transcription activators or with ubiquitin ligases. Remarkably, the linker region has the highest concentration of amino acid differences among metazoans, although the phosphorylation sites and the PY motif are highly conserved.
Sequence conservation
The components of the TGF-β pathway, in particular the R-Smads, Co-Smad and I-Smads, are represented in the genomes of all metazoans sequenced to date. The level of sequence conservation of the Co-Smad and R-Smad proteins across species is extremely high. This conservation of both components and sequences suggests that the general functions of the TGF-β pathway have remained intact since the origin of the metazoans. I-Smads have conserved MH2 domains, but divergent MH1 domains as compared to R-Smads and Co-Smads.
Role in TGF-β signalling pathway
R/Co-Smads
TGF-β ligands bind receptors consisting of type 1 and type 2 serine/threonine kinases, which serve to propagate the signal intracellularly. Ligand binding stabilizes a receptor complex consisting of two type 1 receptors and two type 2 receptors. Type 2 receptors then phosphorylate type 1 receptors at locations on the GS domain, located N-terminally to the type 1 kinase domain. This phosphorylation event activates the type 1 receptors, making them capable of further propagating the TGF-β signal via Smads. Type 1 receptors phosphorylate R-Smads at two C-terminal serines, which are arranged in an SSXS motif. Smads are localized at the cell surface by Smad anchor for receptor activation (SARA) proteins, placing them in proximity of type 1 receptor kinases to facilitate phosphorylation. Phosphorylation of the R-Smad causes it to dissociate from SARA, exposing a nuclear import sequence, as well as promoting its association with a Co-Smad. This Smad complex is then localized to the nucleus, where it binds its target genes, with the help of other associated proteins.
I-Smads
I-Smads disrupt TGF-β signaling through a variety of mechanisms, including preventing association of R-Smads with type 1 receptors and Co-Smads, down-regulating type 1 receptors, and making transcriptional changes in the nucleus. The conserved MH2 domain of I-Smads is capable of binding to type 1 receptors, making I-Smads competitive inhibitors of R-Smad binding. Following R-Smad activation, the activated R-Smad can form a heteromeric complex with an I-Smad, which prevents its association with a Co-Smad. In addition, the I-Smad recruits a ubiquitin ligase to target the activated R-Smad for degradation, effectively silencing the TGF-β signal. I-Smads in the nucleus also compete with R/Co-Smad complexes for association with DNA binding elements. Reporter assays show that fusing I-Smads to the DNA-binding region of reporter genes decreases their expression, suggesting that I-Smads function as transcriptional repressors.
Role in cell cycle control
In adult cells, TGF-β inhibits cell cycle progression, stopping cells from making the G1/S phase transition. This phenomenon is present in the epithelial cells of many organs, and is regulated in part by the Smad signaling pathway. The precise mechanism of control differs slightly between cell types.
One mechanism by which Smads facilitate TGF-β-induced cytostasis is by downregulating Myc, a transcription factor that promotes cell growth. Myc also represses p15(Ink4b) and p21(Cip1), which are inhibitors of Cdk4 and Cdk2 respectively. When no TGF-β is present, a repressor complex composed of Smad3 and the transcription factors E2F4 and p107 exists in the cytoplasm. However, when the TGF-β signal is present, this complex localizes to the nucleus, where it associates with Smad4 and binds to the TGF-β inhibitory element (TIE) of the Myc promoter to repress its transcription.
In addition to Myc, Smads are also involved in the downregulation of Inhibitor of DNA Binding (ID) proteins. IDs are transcription factors that regulate genes involved in cell differentiation, maintaining multi-potency in stem cells, and promoting continuous cell cycling. Therefore, downregulating ID proteins is a pathway by which TGF-β signaling could arrest the cell cycle. In a DNA microarray screen, Id2 and Id3 were found to be repressed by TGF-β, but induced by BMP signaling. Knocking out Id2 and Id3 genes in epithelial cells enhances cell cycle inhibition by TGF-β, showing that they are important in mediating this cytostatic effect. Smads are both a direct and indirect inhibitor of Id expression. TGF-β signaling triggers Smad3 phosphorylation, which in turn activates ATF3, a transcription factor that is induced during cellular stress. Smad3 and ATF3 then coordinate to repress Id1 transcription, resulting in its downregulation. Indirectly, Id downregulation is a secondary effect of Myc repression by Smad3. Since Myc is an inducer of Id2, downregulating Myc will also result in reduced Id2 signaling, which contributes to cell cycle arrest.
Studies show that Smad3, but not Smad2, is an essential effector for the cytostatic effects of TGF-β. Depleting endogenous Smad3 via RNA interference was sufficient to interfere with TGF-β cytostasis. However, depleting Smad2 in a similar manner enhanced, rather than halted, TGF-β-induced cell cycle arrest. This suggests that while Smad3 is necessary for the TGF-β cytostatic effect, the ratio of Smad3 to Smad2 modulates the intensity of the response. However, overexpressing Smad2 to change this ratio had no effect on the cytostatic response. Therefore, further experiments are necessary to definitively prove that the ratio of Smad3 to Smad2 regulates the intensity of the cytostatic effect in response to TGF-β.
Smad proteins have also been found to be direct transcriptional regulators of Cdk4. Reporter assays in which luciferase was placed under a Cdk4 promoter showed increased luciferase expression when Smad4 was targeted with siRNAs. Repression of Smad2 and 3 did not have any significant effect, suggesting that Cdk4 is directly regulated by Smad4.
Clinical significance
Role of Smad in cancer
Defects in Smad signaling can result in TGF-β resistance, causing dysregulation of cell growth. Deregulation of TGF-β signaling has been implicated in many cancer types, including pancreatic, colon, breast, lung, and prostate cancer. Smad4 is most commonly mutated in human cancers, particularly pancreatic and colon cancer. Smad4 is inactivated in nearly half of all pancreatic cancers; as a result, Smad4 was first termed Deleted in Pancreatic Cancer Locus 4 (DPC4) upon its discovery. Germline Smad4 mutations are partially responsible for the genetic predisposition to human familial juvenile polyposis, which puts a person at high risk of developing potentially cancerous gastrointestinal polyps. Supporting experimental evidence for this observation comes from a study showing that heterozygous Smad4 knockout mice (+/-) uniformly developed gastrointestinal polyps by 100 weeks. Many familial Smad4 mutations occur in the MH2 domain, which disrupts the protein's ability to form homo- or hetero-oligomers, thus impairing TGF-β signal transduction.
Despite evidence showing that Smad3 is more critical than Smad2 in TGF-β signaling, the rate of Smad3 mutations in cancer is lower than that of Smad2. Choriocarcinoma tumor cells are resistant to TGF-β signaling and lack Smad3 expression. Studies show that reintroducing Smad3 into choriocarcinoma cells is sufficient to increase levels of TIMP-1 (tissue inhibitor of metalloprotease-1), a mediator of TGF-β's anti-invasive effect, and thus restore TGF-β signaling. However, reintroducing Smad3 was not sufficient to rescue the anti-invasive effect of TGF-β. This suggests that other signaling mechanisms in addition to Smad3 are defective in TGF-β-resistant choriocarcinoma.
Role of Smad in Alzheimer's
Alzheimer's patients display elevated levels of TGF-β and phosphorylated Smad2 in their hippocampal neurons. This finding is seemingly paradoxical, as TGF-β has previously been shown to have neuroprotective effects in Alzheimer's patients. It suggests that some aspect of TGF-β signaling is defective, causing TGF-β to lose its neuroprotective effects. Research has shown that phosphorylated Smad2 is ectopically localized to cytoplasmic granules, rather than the nucleus, in hippocampal neurons of patients with Alzheimer's disease. Specifically, the ectopically located phosphorylated Smad2 was found within amyloid plaques and attached to neurofibrillary tangles. These data suggest that Smad2 is involved in the development of Alzheimer's disease. Recent studies show that the peptidyl-prolyl cis-trans isomerase NIMA-interacting 1 (PIN1) is involved in promoting the abnormal localization of Smad2. Pin1 was found to co-localize with Smad2/3 and phosphorylated tau proteins within the cytoplasmic granules, suggesting a possible interaction. Transfecting Smad2-expressing cells with Pin1 causes proteasome-mediated Smad2 degradation, as well as increased association of Smad2 with phosphorylated tau. This feedback loop is bidirectional; Smad2 is also capable of increasing Pin1 mRNA synthesis. Thus, the two proteins could be caught in a "vicious cycle" of regulation: Pin1 causes both itself and Smad2 to be sequestered in insoluble neurofibrillary tangles, resulting in low levels of both soluble proteins, and Smad2 then promotes Pin1 RNA synthesis to compensate, which only drives more Smad2 degradation and association with neurofibrillary tangles.
TGF-β/Smad signaling in kidney disease
Dysregulation of TGF-β/Smad signaling is a possible pathogenic mechanism of chronic kidney disease. In the kidneys, TGF-β1 promotes accumulation of the extracellular matrix (ECM) by increasing its production and inhibiting its degradation, which is characteristic of renal fibrosis. The TGF-β1 signal is transduced by the R-Smads Smad2 and Smad3, both of which are found to be overexpressed in diseased kidneys. Smad3 knockout mice display reduced progression of renal fibrosis, suggesting its importance in regulating the disease. Conversely, inhibiting Smad2 in kidney cells (full Smad2 knockouts are embryonic lethal) actually leads to more severe fibrosis, suggesting that Smad2 works antagonistically to Smad3 in the progression of renal fibrosis. Unlike the R-Smads, the Smad7 protein is typically under-expressed in diseased kidney cells. This loss of TGF-β inhibition results in increased amounts of active Smad2/3, which contribute to the progression of renal fibrosis as described above.
Notes
References
External links
Transcription factors
Protein families | SMAD (protein) | [
"Chemistry",
"Biology"
] | 3,726 | [
"Transcription factors",
"Gene expression",
"Protein classification",
"Signal transduction",
"Protein families",
"Induced stem cells"
] |
5,695,480 | https://en.wikipedia.org/wiki/Ubique%20%28company%29 | Ubique was a software company based in Israel. Founded in 1994, Ubique is notable for launching the first social-networking software, which included features such as instant messaging, voice over IP (VoIP), chat rooms, web-based events, and collaborative browsing. The company is best known for its most prominent product, Virtual Places, a presence-based chat program that allowed users to explore websites together. This software required both server and client components, enabling users to overlay avatars onto their web browsers and collaborate in real-time as they visited websites. Virtual Places was utilized by providers such as VPChat and Digital Space and eventually evolved into Lotus Sametime. Despite advancements and changes, some consumer-oriented communities still use older versions of Virtual Places.
The company's technology laid the foundation for the development of a sophisticated instant messaging and presence platform, which culminated in the creation of Lotus Sametime. Ubique's mission from its inception was "to add people to the web," transforming the early static web into a dynamic, interactive environment.
In 1995, America Online (AOL) acquired Ubique with the aim of enhancing its online interactive communication services. However, after the discontinuation of GNN in 1996, Ubique shifted its focus from consumer markets to corporate presence technology and instant messaging. In 1998, Ubique was acquired by Lotus/IBM to integrate its technology into Lotus products. By 2006, elements of Ubique were incorporated into IBM Haifa Labs, which continued to develop real-time collaboration technologies.
Technology
Virtual Places
Ubique's best-known product is Virtual Places, a presence-based chat program in which users explore web sites together. It is used by providers such as VPChat and Digital Space and eventually evolved into Lotus Sametime.
Virtual Places requires a server and client software. Users start Virtual Places along with a web browser and sign into the Virtual Places server. Avatars are overlaid onto the web browser and users are able to collaborate with each other while they all visit web sites in real time.
Some consumer-oriented Virtual Places communities are still active on the Web, using the old version of the software.
Instant Messaging and Chat
With the technology developed for Virtual Places, Ubique created an instant messaging and presence technology platform which evolved into Lotus Sametime.
History
1994 – Ubique Ltd was founded in Israel by Ehud Shapiro and a group of scientists from the Weizmann Institute to develop real-time, distributed computing products. The company developed a presence-based chat system known as Virtual Places along with real-time instant messaging and presence technology software. These were the very early days of the web, which at the time had only static data. Ubique's mission was "to add people to the web".
1995 – America Online Inc. purchased Ubique for $14.5 million with the intention to use Ubique's Virtual Places technology to enhance and expand its existing live online interactive communication for both the AOL consumer online service and the new GNN brand service. Only the GNN-branded Virtual Places product was ever released.
1996 – GNN was discontinued in 1996. Ubique's management, with the support of AOL, decided to look for other markets for Virtual Places technology. The outcome was that Ubique shifted Virtual Places from the consumer market to focus on presence technology and instant messaging for the corporate market. AOL divested Ubique but remained as a principal investor while Ubique sought a new owner.
1998 – Ubique was acquired by Lotus/IBM to integrate its core instant messaging and presence technology into Lotus software products.
2000 – Lotus announced Lotus Sametime using Ubique's technology.
2006 – Elements of Ubique along with other Israeli-based companies were integrated into the newly created IBM Haifa Labs. The Lab develops Session Initiation Protocol (SIP) infrastructure and features of real-time collaboration, including session management, presence awareness, subscriptions and notifications, text messaging, developer toolkits, and mobile real-time messaging infrastructure.
References
External links
IBM Haifa Labs website
Instant messaging
Software companies of Israel
Israeli companies established in 1994
IBM acquisitions | Ubique (company) | [
"Technology"
] | 864 | [
"Instant messaging"
] |
5,695,625 | https://en.wikipedia.org/wiki/Jugerum | The jugerum or juger was a Roman unit of area, equivalent to a rectangle 240 Roman feet in length and 120 Roman feet in width (about 71 × 35½ m), i.e. 28,800 square Roman feet, or about a quarter of a hectare (0.623 acre).
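As a rough check of these figures: taking the Roman foot as about 0.296 m, 240 × 120 Roman feet is about 71.0 m × 35.5 m ≈ 2,520 m², which is indeed close to a quarter of a hectare (2,500 m²) and to 0.623 acre.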
Name
It was the double of the actus quadratus, and from this circumstance, according to some writers, it derived its name. It seems probable that, as the word was evidently originally the same as jugum, a yoke, and as the actus, in its original use, meant a path wide enough to drive a single beast along, the jugerum originally meant a path wide enough for a yoke of oxen, namely, the double of the actus in width; and that when the actus was used for a square measure of surface, the jugerum, by a natural analogy, became the double of the square actus; and that this new meaning of it superseded its old use as the double of the single actus.
Pliny the Elder states:
That portion of land used to be known as a "jugerum," which was capable of being ploughed by a single "jugum," or yoke of oxen, in one day; an "actus" being as much as the oxen could plough at a single spell, fairly estimated, without stopping. This last was one hundred and twenty feet in length; and two in length made a jugerum.
Pliny (Book VIII, Chapter 16) also used jugerum as a measure of length. The translator (Bostock) speculated that the jugerum length measurement was equivalent to the Greek plethron, about 30 meters or 100 feet. This was based on Pliny translating Aristotle's "plethron" to "jugerum".
The uncial division of the as was applied to the jugerum, its smallest part being the scripulum of 100 square feet (9.2 m²). Thus, the jugerum contained 288 scripula (Varro, R. R. l.c.). The jugerum was the common measure of land among the Romans. Two jugera formed an heredium, a hundred heredia a centuria, and four centuriae a saltus. These divisions were derived from the original assignment of landed property, in which two jugera were given to each citizen as heritable property.
Columella states:
The square actus is bounded by 120 feet each way: when doubled it forms a iugerum, and it has derived the name iugerum from the fact that it was formed by joining.
In Gaul, half of a jugerum was called an arepennis (“head of a furrow”). It was the measure of a plowed furrow before the plowman turned the plow to cut a new parallel furrow. It was the origin of the later French unit of area, the arpent.
See also
Ancient Roman units of measurement
Notes
References
Citations
General bibliography
A Dictionary of Greek and Roman Antiquities (1842)
Units of area
Ancient Roman units of measurement | Jugerum | [
"Mathematics"
] | 607 | [
"Quantity",
"Units of area",
"Units of measurement"
] |
5,696,235 | https://en.wikipedia.org/wiki/Threshold%20population | In microeconomics, a threshold population is the minimum number of people needed for a service to be worthwhile.
In economic geography, a threshold population is the minimum number of people necessary before a particular good or service can be provided in an area. The concept is closely related to the "range" in central place theory and retailing, which delineates the market area of a central place for a particular good or service, and is dependent on the spatial distribution of population and the willingness of consumers to travel a given distance to purchase particular goods or services.
Typically a low-order shop (such as a grocer or newsagent) may require only 800 or so customers, whereas a higher-order store such as Marks and Spencer or Waitrose may need a threshold of 70,000 to be profitable, and a university may need 350,000 to be viable.
Thresholds may also be linked to the spending power of customers; this is most obvious in periodic markets in poor countries, where wages are so low that people can buy the goods or services only once in a while.
References
Microeconomics
Human geography | Threshold population | [
"Environmental_science"
] | 226 | [
"Environmental social science stubs",
"Environmental social science",
"Human geography"
] |
5,696,420 | https://en.wikipedia.org/wiki/Grid%20method%20multiplication | The grid method (also known as the box method) of multiplication is an introductory approach to multi-digit multiplication calculations that involve numbers larger than ten. Because it is often taught in mathematics education at the level of primary school or elementary school, this algorithm is sometimes called the grammar school method.
Compared to traditional long multiplication, the grid method differs in clearly breaking the multiplication and addition into two steps, and in being less dependent on place value.
Whilst less efficient than the traditional method, grid multiplication is considered to be more reliable, in that children are less likely to make mistakes. Most pupils will go on to learn the traditional method, once they are comfortable with the grid method; but knowledge of the grid method remains a useful "fall back", in the event of confusion. It is also argued that since anyone doing a lot of multiplication would nowadays use a pocket calculator, efficiency for its own sake is less important; equally, since this means that most children will use the multiplication algorithm less often, it is useful for them to become familiar with a more explicit (and hence more memorable) method.
Use of the grid method has been standard in mathematics education in primary schools in England and Wales since the introduction of a National Numeracy Strategy with its "numeracy hour" in the 1990s. It can also be found included in various curricula elsewhere. Essentially the same calculation approach, but not with the explicit grid arrangement, is also known as the partial products algorithm or partial products method.
Calculations
Introductory motivation
The grid method can be introduced by thinking about how to add up the number of points in a regular array, for example the number of squares of chocolate in a chocolate bar. As the size of the calculation becomes larger, it becomes easier to start counting in tens; and to represent the calculation as a box which can be sub-divided, rather than drawing a multitude of dots.
At the simplest level, pupils might be asked to apply the method to a calculation like 3 × 17. Breaking up ("partitioning") the 17 as (10 + 7), this unfamiliar multiplication can be worked out as the sum of two simple multiplications:
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="40pt" | 10
! scope="col" width="40pt" | 7
|-
! scope="row" | 3
|30
|21
|}
so 3 × 17 = 30 + 21 = 51.
This is the "grid" or "boxes" structure which gives the multiplication method its name.
Faced with a slightly larger multiplication, such as 34 × 13, pupils may initially be encouraged to also break this into tens.
So, expanding 34 as 10 + 10 + 10 + 4 and 13 as 10 + 3, the product 34 × 13 might be represented:
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="40pt" | 10
! scope="col" width="40pt" | 10
! scope="col" width="40pt" | 10
! scope="col" width="40pt" | 4
|-
! scope="row" | 10
|100
|100
|100
|40
|-
! scope="row" | 3
|30
|30
|30
|12
|}
Totalling the contents of each row, it is apparent that the final result of the calculation is (100 + 100 + 100 + 40) + (30 + 30 + 30 + 12) = 340 + 102 = 442.
Standard blocks
Once pupils have become comfortable with the idea of splitting the whole product into contributions from separate boxes, it is a natural step to group the tens together, so that the calculation 34 × 13 becomes
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="120pt" | 30
! scope="col" width="40pt" | 4
|-
! scope="row" | 10
|300
|40
|-
! scope="row" | 3
|90
|12
|}
giving the addition
{|
| 300
40
90
+ 12
————
442
|}
so 34 × 13 = 442.
This is the most usual form for a grid calculation. In countries such as the UK where teaching of the grid method is usual, pupils may spend a considerable period of time regularly setting out calculations like the above, until the method is entirely comfortable and familiar.
Larger numbers
The grid method extends straightforwardly to calculations involving larger numbers.
For example, to calculate 345 × 28, the student could construct the grid with six easy multiplications
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="40pt" | 300
! scope="col" width="40pt" | 40
! scope="col" width="40pt" | 5
|-
! scope="row" | 20
|6000
|800
|100
|-
! scope="row" | 8
|2400
|320
|40
|}
to find the answer 6900 + 2760 = 9660.
However, by this stage (at least in standard current UK teaching practice) pupils may be starting to be encouraged to set out such a calculation using the traditional long multiplication form without having to draw up a grid.
Traditional long multiplication can be related to a grid multiplication in which only one of the numbers is broken into tens and units parts to be multiplied separately:
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="120pt" | 345
|-
! scope="row" | 20
|6900
|-
! scope="row" | 8
|2760
|}
The traditional method is ultimately faster and much more compact; but it requires two significantly more difficult multiplications which pupils may at first struggle with. Compared to the grid method, traditional long multiplication may also be more abstract and less manifestly clear, so some pupils find it harder to remember what is to be done at each stage and why. Pupils may therefore be encouraged for quite a period to use the simpler grid method alongside the more efficient traditional long multiplication method, as a check and a fall-back.
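The place-value decomposition behind the grid method is also easy to express in code. The following C sketch (the function name and structure are invented here for illustration, not taken from any curriculum) builds the grid for any pair of non-negative integers by looping over the decimal parts of each factor and summing the partial products:

#include <stdio.h>

/* Grid method sketch: split each factor into its decimal place-value
 * parts (units, tens, hundreds, ...), multiply every pair of parts,
 * and total the partial products. */
int grid_multiply(int x, int y)
{
    int total = 0;
    for (int px = 1; x / px > 0; px *= 10) {      /* parts of x, e.g. 5, 40, 300 */
        int xpart = (x / px % 10) * px;
        for (int py = 1; y / py > 0; py *= 10) {  /* parts of y, e.g. 8, 20 */
            int ypart = (y / py % 10) * py;
            total += xpart * ypart;               /* one cell of the grid */
        }
    }
    return total;
}

int main(void)
{
    printf("%d\n", grid_multiply(345, 28));       /* prints 9660 */
    return 0;
}

Each pass through the inner loop computes exactly one cell, so grid_multiply(345, 28) adds the same six partial products as the grid above.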
Other applications
Fractions
While not normally taught as a standard method for multiplying fractions, the grid method can readily be applied to simple cases where it is easier to find a product by breaking it down.
For example, the calculation 2½ × 1½ can be set out using the grid method
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="40pt" | 2
! scope="col" width="40pt" | ½
|-
! scope="row" | 1
| 2
| ½
|-
! scope="row" | ½
| 1
| ¼
|}
to find that the resulting product is 2 + ½ + 1 + ¼ = 3¾
Algebra
The grid method can also be used to illustrate the multiplying out of a product of binomials, such as (a + 3)(b + 2), a standard topic in elementary algebra (although one not usually met until secondary school):
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="40pt" | a
! scope="col" width="40pt" | 3
|-
! scope="row" | b
| ab
| 3b
|-
! scope="row" | 2
| 2a
| 6
|}
Thus (a + 3)(b + 2) = ab + 3b + 2a + 6.
Computing
32-bit CPUs usually lack an instruction to multiply two 64-bit integers. However, most CPUs support a "multiply with overflow" instruction, which takes two 32-bit operands, multiplies them, and puts the 32-bit result in one register and the overflow in another. Examples include the umull instruction added in the ARMv4T instruction set, and the pmuludq instruction added in SSE2, which operates on the lower 32 bits of a SIMD register containing two 64-bit lanes.
On platforms that support these instructions, a slightly modified version of the grid method is used.
The differences are:
Instead of operating on multiples of 10, the method operates on 32-bit integers.
Instead of higher bits being multiplied by ten, they are multiplied by 0x100000000. This is usually done by either shifting to the left by 32 or putting the value into a specific register that represents the higher 32 bits.
Any values that lie above the 64th bit are truncated. This means that multiplying the highest bits is not required, because the result will be shifted out of the 64-bit range. This also means that only a 32-bit multiply is required for the higher multiples.
{| class="wikitable" style="text-align: center;"
! scope="col" width="40pt" | ×
! scope="col" width="40pt" | b
! scope="col" width="40pt" | a
|-
! scope="row" | d
| -
| ad
|-
! scope="row" | c
| bc
| ac
|}
This would be the routine in C:
#include <stdint.h>
uint64_t multiply(uint64_t ab, uint64_t cd)
{
/* These shifts and masks are usually implicit, as 64-bit integers
* are often passed as 2 32-bit registers. */
uint32_t b = ab >> 32, a = ab & 0xFFFFFFFF;
uint32_t d = cd >> 32, c = cd & 0xFFFFFFFF;
/* multiply with overflow */
uint64_t ac = (uint64_t)a * (uint64_t)c;
uint32_t high = ac >> 32; /* overflow */
uint32_t low = ac & 0xFFFFFFFF;
/* 32-bit multiply and add to high bits */
high += (a * d); /* add ad */
high += (b * c); /* add bc */
/* multiply by 0x100000000 (via left shift) and add to the low bits with a binary or. */
return ((uint64_t)high << 32) | low;
}
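As a quick sanity check (this driver is not part of the original routine), the function can be compared against the compiler's native 64-bit multiplication, which truncates to 64 bits in the same way:

#include <stdio.h>
#include <stdint.h>

uint64_t multiply(uint64_t ab, uint64_t cd);      /* the routine above */

int main(void)
{
    uint64_t x = 0x123456789ABCDEF0u, y = 0x0FEDCBA987654321u;
    /* Unsigned 64-bit multiplication wraps modulo 2^64, matching the
     * truncation performed by multiply(). */
    printf("%d\n", multiply(x, y) == x * y);      /* prints 1 */
    return 0;
}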
This would be the routine in ARM assembly:
multiply:
@ a = r0
@ b = r1
@ c = r2
@ d = r3
push {r4, lr} @ backup r4 and lr to the stack
umull r12, lr, r2, r0 @ multiply r2 and r0, store the result in r12 and the overflow in lr
mla r4, r2, r1, lr @ multiply r2 and r1, add lr, and store in r4
mla r1, r3, r0, r4 @ multiply r3 and r0, add r4, and store in r1
@ The value is shifted left implicitly because the
@ high bits of a 64-bit integer are returned in r1.
mov r0, r12 @ Set the low bits of the return value to r12 (ac)
pop {r4, lr} @ restore r4 and lr from the stack
bx lr @ return the low and high bits in r0 and r1 respectively
Mathematics
Mathematically, the ability to break up a multiplication in this way is known as the distributive law, which can be expressed in algebra as the property that a(b+c) = ab + ac. The grid method uses the distributive property twice to expand the product, once for the horizontal factor, and once for the vertical factor.
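Applied to the earlier example, the law is used once for each factor: 34 × 13 = (30 + 4) × 13 = (30 × 13) + (4 × 13) = (30 × 10 + 30 × 3) + (4 × 10 + 4 × 3) = 300 + 90 + 40 + 12 = 442, which reproduces the four cells of the grid.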
Historically the grid calculation (tweaked slightly) was the basis of a method called lattice multiplication, which was the standard method of multiple-digit multiplication developed in medieval Arabic and Hindu mathematics. Lattice multiplication was introduced into Europe by Fibonacci at the start of the thirteenth century along with the Arabic numerals themselves; although, like the numerals, the ways he suggested to calculate with them were initially slow to catch on. Napier's bones were a calculating aid introduced by the Scot John Napier in 1617 to assist lattice-method calculations.
See also
Multiplication algorithm
Multiplication Table
References
Rob Eastaway and Mike Askew, Maths for Mums and Dads, Square Peg, 2010. . pp. 140–153.
External links
Long multiplication − The Box method, Maths online.
Long multiplication and division, BBC GCSE Bitesize
Mathematics education
Elementary arithmetic
Multiplication
Primary education | Grid method multiplication | [
"Mathematics"
] | 2,820 | [
"Elementary mathematics",
"Arithmetic",
"Elementary arithmetic"
] |
5,696,506 | https://en.wikipedia.org/wiki/Colossal%20Typewriter | Colossal Typewriter by John McCarthy and Roland Silver was one of the earliest computer text editors. The program ran on the PDP-1 at Bolt, Beranek and Newman (BBN) by December 1960.
About this time, both authors were associated with the Massachusetts Institute of Technology, but it is unclear whether the editor ran on the TX-0 on loan to MIT from Lincoln Laboratory or on the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. A "Colossal Typewriter Program" is in the BBN Program Library, and, under the same name, in the DECUS Program Library as BBN-6 (CT).
See also
Expensive Typewriter
TECO
RUNOFF
TJ-2
Notes
1960 software
Text editors
History of software | Colossal Typewriter | [
"Technology"
] | 154 | [
"History of software",
"History of computing"
] |
5,697,015 | https://en.wikipedia.org/wiki/Hypnodermatology | Hypnodermatology is an informal label for the use of hypnosis in treating the skin conditions that fall between conventional medical dermatology and the mental health disciplines.
The use of hypnosis to provide relief for some skin conditions is based on observations that the severity of the disease may correlate with emotional issues. In addition, hypnotherapy has been used to suggest improvement in dermatological symptoms, such as chronic psoriasis, eczema, ichthyosis, warts and alopecia areata.
Philip D. Shenefelt, a research dermatologist at the University of South Florida School of Medicine, has identified two dozen dermatologic conditions that have shown response to hypnosis in the literature, with varying degrees of evidence. These include successful results in controlled trials on verruca vulgaris, psoriasis, and atopic dermatitis. A 2005 review in the Mayo Clinic Proceedings stated that, "A review of the use of hypnosis in dermatology supports its value for many skin conditions not believed to be under conscious control". The most comprehensively studied skin conditions in relation to hypnotherapy are psoriasis and warts. Hypnosis may have positive effects on dermatological conditions in both adults and children.
Hypnotherapy may contribute towards reducing the itching and discomfort brought on by the presence of warts, and may improve or possibly decrease the lesions themselves.
See also
Psychodermatology
References
External links
American Academy of Family Physicians
National Psoriasis Foundation
Hypnosis
Clinical psychology
Dermatology | Hypnodermatology | [
"Biology"
] | 327 | [
"Behavioural sciences",
"Behavior",
"Clinical psychology"
] |
5,697,044 | https://en.wikipedia.org/wiki/Negishi%20coupling | The Negishi coupling is a widely employed transition metal catalyzed cross-coupling reaction. The reaction couples organic halides or triflates with organozinc compounds, forming carbon-carbon (C-C) bonds in the process. A palladium(0) species is generally utilized as the catalyst, though nickel is sometimes used. A variety of nickel catalysts in either the Ni0 or NiII oxidation state can be employed in Negishi cross couplings, such as Ni(PPh3)4, Ni(acac)2, and Ni(COD)2.
The leaving group X is usually chloride, bromide, or iodide, but triflate and acetyloxy groups are feasible as well. X = Cl usually leads to slow reactions.
The organic residue R = alkenyl, aryl, allyl, alkynyl or propargyl.
The halide X' in the organozinc compound can be chloride, bromine or iodine and the organic residue R' is alkenyl, aryl, allyl, alkyl, benzyl, homoallyl, and homopropargyl.
The metal M in the catalyst is nickel or palladium
The ligand L in the catalyst can be triphenylphosphine, dppe, BINAP, chiraphos or XPhos.
Palladium catalysts in general have higher chemical yields and higher functional group tolerance.
The Negishi coupling finds common use in the field of total synthesis as a method for selectively forming C-C bonds between complex synthetic intermediates. The reaction allows for the coupling of sp3, sp2, and sp carbon atoms (see orbital hybridization), which makes it somewhat unusual among the palladium-catalyzed coupling reactions. Organozincs are moisture- and air-sensitive, so the Negishi coupling must be performed in an oxygen- and water-free environment, a fact that has hindered its use relative to other cross-coupling reactions that require less demanding conditions (e.g. the Suzuki reaction). However, organozincs are more reactive than both organostannanes and organoborates, which translates into faster reaction times.
The reaction is named after Ei-ichi Negishi who was a co-recipient of the 2010 Nobel Prize in Chemistry for the discovery and development of this reaction.
Negishi and coworkers originally investigated the cross-coupling of organoaluminum reagents in 1976, initially employing Ni and Pd as the transition metal catalysts, but noted that Ni resulted in the decay of stereospecificity whereas Pd did not. Transitioning from organoaluminum species to organozinc compounds, Negishi and coworkers reported the use of Pd complexes in organozinc coupling reactions and carried out methodological studies, eventually developing the reaction conditions into those commonly utilized today. Alongside Richard F. Heck and Akira Suzuki, Ei-ichi Negishi was a co-recipient of the Nobel Prize in Chemistry in 2010, for his work on "palladium-catalyzed cross couplings in organic synthesis".
Reaction mechanism
The reaction mechanism is thought to proceed via a standard Pd catalyzed cross-coupling pathway, starting with a Pd(0) species, which is oxidized to Pd(II) in an oxidative addition step involving the organohalide species. This step proceeds with aryl, vinyl, alkynyl, and acyl halides, acetates, or triflates, with substrates following standard oxidative addition relative rates (I>OTf>Br>>Cl).
The actual mechanism of oxidative addition is unresolved, though there are two likely pathways. One pathway is thought to proceed via an SN2-like mechanism resulting in inverted stereochemistry. The other pathway proceeds via concerted addition and retains stereochemistry.
Though the addition is cis, the Pd(II) complex rapidly isomerizes to the trans complex.
Next, the transmetalation step occurs, in which the organozinc reagent exchanges its organic substituent with the halide in the Pd(II) complex, generating the trans-Pd(II) complex and a zinc halide salt. The organozinc substrate can be aryl, vinyl, allyl, benzyl, homoallyl, or homopropargyl. Transmetalation is usually rate-limiting, and a complete mechanistic understanding of this step has not yet been reached, though several studies have shed light on the process. Alkylzinc species form higher-order zincate species prior to transmetalation whereas arylzinc species do not. ZnXR and ZnR2 can both be used as reactive reagents, and Zn is known to prefer four-coordinate complexes, which means solvent-coordinated Zn complexes cannot be ruled out a priori. Studies indicate competing equilibria exist between cis- and trans- bis-alkyl organopalladium complexes, but that the only productive intermediate is the cis complex.
The last step in the catalytic pathway of the Negishi coupling is reductive elimination, which is thought to proceed via a three coordinate transition state, yielding the coupled organic product and regenerating the Pd(0) catalyst. For this step to occur, the aforementioned cis- alkyl organopalladium complex must be formed.
Both organozinc halides and diorganozinc compounds can be used as starting materials. In one model system it was found that in the transmetalation step the former give the cis-adduct R-Pd-R' resulting in fast reductive elimination to product while the latter gives the trans-adduct which has to go through a slow trans-cis isomerization first.
A common side reaction is homocoupling. In one Negishi model system the formation of homocoupling products was found to be the result of a second transmetalation reaction between the diarylmetal intermediate and the arylmetal halide:
Ar–Pd–Ar' + Ar'–Zn–X → Ar'–Pd–Ar' + Ar–Zn–X
Ar'–Pd–Ar' → Ar'–Ar' + Pd(0) (homocoupling)
Ar–Zn–X + H2O → Ar–H + HO–Zn–X (reaction accompanied by dehalogenation)
Nickel catalyzed systems can operate under different mechanisms depending on the coupling partners. Unlike palladium systems which involve only Pd0 or PdII, nickel catalyzed systems can involve nickel of different oxidation states. Both systems are similar in that they involve similar elementary steps: oxidative addition, transmetalation, and reductive elimination. Both systems also have to address issues of β-hydride elimination and difficult oxidative addition of alkyl electrophiles.
For unactivated alkyl electrophiles, one possible mechanism is a transmetalation first mechanism. In this mechanism, the alkyl zinc species would first transmetalate with the nickel catalyst. Then the nickel would abstract the halide from the alkyl halide resulting in the alkyl radical and oxidation of nickel after addition of the radical.
One important factor when contemplating the mechanism of a nickel catalyzed cross coupling is that reductive elimination is facile from NiIII species, but very difficult from NiII species. Kochi and Morrell provided evidence for this by isolating NiII complex Ni(PEt3)2(Me)(o-tolyl), which did not undergo reductive elimination quickly enough to be involved in this elementary step.
Scope
The Negishi coupling has been applied in the following illustrative syntheses:
unsymmetrical 2,2'-bipyridines from 2-bromopyridine with tetrakis(triphenylphosphine)palladium(0),
biphenyl from o-tolylzinc chloride and o-iodotoluene and tetrakis(triphenylphosphine)palladium(0),
5,7-hexadecadiene from 1-decyne and (Z)-1-hexenyl iodide.
Negishi coupling has been applied in the synthesis of hexaferrocenylbenzene:
with hexaiodobenzene, diferrocenylzinc and tris(dibenzylideneacetone)dipalladium(0) in tetrahydrofuran. The yield is only 4%, signifying substantial crowding around the aryl core.
In a novel modification palladium is first oxidized by the haloketone 2-chloro-2-phenylacetophenone 1 and the resulting palladium OPdCl complex then accepts both the organozinc compound 2 and the organotin compound 3 in a double transmetalation:
Examples of nickel catalyzed Negishi couplings include sp2-sp2, sp2-sp3, and sp3-sp3 systems. In the system first studied by Negishi, aryl-aryl cross coupling was catalyzed by Ni(PPh3)4 generated in situ through reduction of Ni(acac)2 with PPh3 and (i-Bu)2AlH.
Variations have also been developed to allow for the cross-coupling of aryl and alkenyl partners. In the variation developed by Knochel et al., aryl zinc bromides were reacted with vinyl triflates and vinyl halides.
Reactions between sp3-sp3 centers are often more difficult; however, adding an unsaturated ligand with an electron withdrawing group as a cocatalyst improved the yield in some systems. It is believed that added coordination from the unsaturated ligand favors reductive elimination over β-hydride elimination. This also works in some alkyl-aryl systems.
Several asymmetric variants exist and many utilize Pybox ligands.
Industrial applications
The Negishi coupling is not employed as frequently in industrial applications as its cousins the Suzuki reaction and Heck reaction, mostly as a result of the water and air sensitivity of the required aryl or alkyl zinc reagents. In 2003 Novartis employed a Negishi coupling in the manufacture of PDE472, a phosphodiesterase type 4D inhibitor, which was being investigated as a drug lead for the treatment of asthma. The Negishi coupling was used as an alternative to the Suzuki reaction providing improved yields, 73% on a 4.5 kg scale, of the desired benzodioxazole synthetic intermediate.
Applications in total synthesis
While the Negishi coupling is rarely used in industrial chemistry, a result of the aforementioned water and oxygen sensitivity, it finds wide use in the field of natural product total synthesis. The increased reactivity relative to other cross-coupling reactions makes the Negishi coupling ideal for joining complex intermediates in the synthesis of natural products. Additionally, Zn is more environmentally friendly than other metals such as Sn used in the Stille coupling. Historically, the Negishi coupling has not been used as much as the Stille or Suzuki coupling. When it comes to fragment-coupling processes the Negishi coupling is particularly useful, especially when compared to the aforementioned Stille and Suzuki coupling reactions. The major drawback of the Negishi coupling, aside from its water and oxygen sensitivity, is its relative lack of functional group tolerance when compared to other cross-coupling reactions.
(−)-Stemoamide is a natural product found in the root extracts of Stemona tuberosa. These extracts have been used in Japanese and Chinese folk medicine to treat respiratory disorders, and (−)-stemoamide is also an anthelminthic. Somfai and coworkers employed a Negishi coupling in their synthesis of (−)-stemoamide. The reaction was implemented mid-synthesis, forming an sp3-sp2 C-C bond between a β,γ-unsaturated ester and the intermediate diene 4 in 78% yield of product 5. Somfai completed the stereoselective total synthesis of (−)-stemoamide in 12 steps with a 20% overall yield.
Kibayashi and coworkers utilized the Negishi coupling in the total synthesis of pumiliotoxin B. Pumiliotoxin B is one of the major toxic alkaloids isolated from Dendrobates pumilio, a Panamanian poison frog. These toxic alkaloids display modulatory effects on voltage-dependent sodium channels, resulting in cardiotonic and myotonic activity. Kibayashi employed the Negishi coupling at a late stage in the synthesis, coupling a homoallylic sp3 carbon on the zinc alkylidene indolizidine 6 with the (E)-vinyl iodide 7 in 51% yield. The natural product was then obtained after deprotection.
δ-trans-Tocotrienoloic acid, isolated from the plant Chrysochlamys ulei, is a natural product shown to inhibit DNA polymerase β (pol β), which functions to repair DNA via base excision. Inhibition of pol β in conjunction with other chemotherapy drugs may increase the cytotoxicity of these chemotherapeutics, leading to lower effective dosages. The Negishi coupling was implemented in the synthesis of δ-trans-tocotrienoloic acid by Hecht and Maloney, coupling the sp3 homopropargyl zinc reagent 8 with the sp2 vinyl iodide 9. The reaction proceeded in quantitative yield, coupling fragments mid-synthesis en route to the stereoselectively synthesized natural product δ-trans-tocotrienoloic acid.
Smith and Fu demonstrated that their method for coupling secondary nucleophiles with secondary alkyl electrophiles could be applied to the formal synthesis of α-cembra-2,7,11-triene-4,6-diol, a target with antitumor activity. They achieved a 61% yield on a gram scale using their method to install an iso-propyl group. The method is readily adaptable to installing other alkyl groups, enabling structure-activity relationship (SAR) studies. Kirschning and Schmidt applied nickel-catalyzed Negishi cross-coupling to the first total synthesis of carolactone; in this application, they achieved 82% yield and dr = 10:1.
Preparation of organozinc precursors
Alkylzinc reagents can be accessed from the corresponding alkyl bromides using iodine in dimethylacetamide (DMAC). The catalytic I2 serves to activate the zinc towards nucleophilic addition.
Aryl zincs can be synthesized using mild reaction conditions via a Grignard like intermediate.
Organozincs can also be generated in situ and used in a one pot procedure as demonstrated by Knochel et al.
Further reading
See also
CPhos
Heck reaction
Suzuki reaction
References
External links
The Negishi coupling at www.organic-chemistry.org
Carbon-carbon bond forming reactions
Condensation reactions
Name reactions | Negishi coupling | [
"Chemistry"
] | 3,209 | [
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Name reactions",
"Condensation reactions"
] |
5,697,585 | https://en.wikipedia.org/wiki/Testicular%20sperm%20extraction | Testicular sperm extraction (TESE) is a surgical procedure in which a small portion of tissue is removed from the testicle and any viable sperm cells from that tissue are extracted for use in further procedures, most commonly intracytoplasmic sperm injection (ICSI) as part of in vitro fertilisation (IVF). TESE is often recommended to patients who cannot produce sperm by ejaculation due to azoospermia.
Medical uses
TESE is recommended to patients who do not have sperm present in their ejaculate, azoospermia, or who cannot ejaculate at all. In general, azoospermia can be divided into obstructive and non-obstructive subcategories.
TESE is primarily used for non-obstructive azoospermia, where patients do not have sperm present in the ejaculate but who may produce sperm in the testis. Azoospermia in these patients could be a result of Y chromosome microdeletions, cancer of the testicles or damage to the pituitary gland or hypothalamus, which regulate sperm production. Often in these cases, TESE is used as a second option, after prior efforts to treat the azoospermia through hormone therapy have failed.
However, if azoospermia is related to a disorder of sexual development, such as Klinefelter syndrome, TESE is not used clinically; as of 2016, this was in the research phase.
More rarely, TESE is used to extract sperm in cases of obstructive azoospermia. Obstructive azoospermia can be caused in a variety of ways:
vasectomy
trauma
congenital absence of the vas deferens (CAVD)
cystic fibrosis.
TESE can also be used as a fertility preservation option for patients undergoing gender reassignment surgery and who cannot ejaculate sperm.
Technique
Conventional TESE is usually performed under local, or sometimes spinal or general, anaesthesia. An incision in the median raphe of the scrotum is made and continued through the dartos fibres and the tunica vaginalis. The testicle and epididymis are then visible. Incisions are then made through the outer covering of the testis to retrieve biopsies of seminiferous tubules, which are the structures that contain sperm. The incision is closed with sutures and each sample is assessed under a microscope to confirm the presence of sperm.
Following extraction, sperm is often cryogenically preserved for future use, but can also be used fresh.
Micro-TESE
Micro-TESE, or microdissection testicular sperm extraction, includes the use of an operating microscope. This allows the surgeon to observe regions of seminiferous tubules of the testes that have more chance of containing spermatozoa. The procedure is more invasive than conventional TESE, requiring general anaesthetic, and usually used only in patients with non-obstructive azoospermia. Similarly to TESE, an incision is made in the scrotum and surface of the testicle to expose seminiferous tubules. However, this exposure is much wider in micro-TESE. This allows exploration of the incision under the microscope to identify areas of tubules more likely to contain more sperm. If none can be identified, biopsies are instead taken at random from a wide range of locations. The incision is closed with sutures. Samples are re-examined post-surgery to locate and then purify sperm.
When compared with FNA of the testis, conventional TESE is 2-fold more effective at identifying sperm in men with non-obstructive azoospermia. Compared with conventional TESE, micro-TESE has about 1.5-fold higher success in extracting sperm; as such, micro-TESE is preferable in cases of non-obstructive azoospermia, where infertility is caused by a lack of sperm production rather than a blockage. In these cases, micro-TESE is more likely to yield sufficient sperm for use in ICSI.
TESE vs TESA
TESE is different to testicular sperm aspiration (TESA). TESA is done under local anaesthesia, does not involve an open biopsy and is suitable for patients with obstructive azoospermia.
Complications
Micro-TESE and TESE have risks of postoperative infection, bleeding and pain. TESE can result in testicular abnormalities and scarring of the tissue. The procedure can cause testicular fibrosis and inflammation, which can reduce testicular function and cause testicular atrophy. Both procedures can alter the steroid function of the testes, causing a decline in serum testosterone levels, which can result in testosterone deficiency. This can cause side-effects including muscle weakness, decreased sexual function and anxiety, and can lead to sleep deficiency. The blood supply to the testis can also be altered during this procedure, potentially reducing supply. Long-term follow-ups are often recommended to prevent these complications.
Micro-TESE has limited postoperative complications compared with TESE. The use of the surgical microscope allows for small specific incisions to retrieve seminiferous tubules and evade damaging blood vessels by avoiding regions with no vasculature.
If TESE needs to be repeated due to insufficient sperm recovery, patients are usually advised to wait 6–12 months in order to allow adequate healing of the testis before further surgery.
See also
Azoospermia
Intracytoplasmic sperm injection
Percutaneous epididymal sperm aspiration
Semen cryopreservation
References
Fertility medicine
Assisted reproductive technology
Urologic surgery
Semen | Testicular sperm extraction | [
"Biology"
] | 1,195 | [
"Assisted reproductive technology",
"Medical technology"
] |
5,697,589 | https://en.wikipedia.org/wiki/AN/TPQ-36%20Firefinder%20radar | The Hughes AN/TPQ-36 Firefinder weapon locating system is a mobile radar system developed in the mid-to-late 1970s by Hughes Aircraft Company and manufactured by Northrop Grumman and ThalesRaytheonSystems, achieving initial operational capability in May 1982. The system is a "weapon-locating radar", designed to detect and track incoming mortar, artillery and rocket fire to determine the point of origin for counter-battery fire. It is currently in service at battalion and higher levels in the United States Army, United States Marine Corps, Australian Army, Portuguese Army, Turkish Army, and the Armed Forces of Ukraine.
The radar is typically trailer-mounted and towed by a Humvee.
Upgrades
Firefinder (V)7 adds a Modular Azimuth Position System (MAPS). MAPS has a north seeking laser gyrocompass and a microprocessor controlled Honeywell H-726 inertial navigation system. Prior Firefinders used a survey team to find site latitude, longitude, and direction to North. With MAPS, reaction time was limited only by the time taken to set up the site, since system geo-position was pre-loaded before sortie deployment. Crew was reduced from 8 to 6.
Firefinder (V)8 extends system performance, improves operator survivability and lowers life cycle cost. Greater processing power and the addition of a low noise amplifier to the radar antenna improves detection range (by up to 50%) and performance accuracy against certain threats.
Operations/maintainers/specifications
The AN/TPQ-36 is an electronically steered radar, meaning the radar antenna does not actually move while in operation. The radar antenna may however be moved manually if required. The system may also be operated in a friendly fire mode to determine the accuracy of counterbattery return fire, or for conducting radar registration or mean point of impact calibrations for friendly artillery.
It can locate mortars, artillery, and rocket launchers, simultaneously locate 10 weapons, locate targets on the first round, and perform high-burst, datum-plane, and impact registrations. It can be used to adjust friendly fire, interfaces with tactical fire-direction systems, and predicts the impact of hostile projectiles.
Its maximum range is with an effective range of for artillery and for rockets. Its azimuth sector is 90°. It operates in the X-band at 32 frequencies. Peak transmitted power is at least 23 kW.
It features permanent storage for 99 targets, has a field exercise mode and uses a digital data interface.
Manufacturers
Northrop Grumman manufactures the AN/TPQ-36(V)8 Firefinder radar.
Before its acquisition by Raytheon, the Hughes Aircraft Co. developed the AN/TPQ-36 Firefinder radar at its facility at Fullerton, California, and manufactured it at its plant in Forest, Mississippi.
Nomenclature
Per the Joint Electronics Type Designation System (JETDS), the nomenclature AN/TPQ-36 is thus derived:
"AN/" indicating Army/Navy(Marines)--a system nomenclature derived from the JETDS.
"T" for 'transportable', indicating it is carried by a vehicle but is not an integral part of said vehicle (compare with 'V' for vehicle-mounted).
"P" indicating a position finder (radar).
"Q" for a special-purpose(multipurpose) radar, in this case counterbattery.
"36" is the 36'th version of this family, of TPQ radars.
Users
Australia: Used by the Australian Defence Force
Chile: Used by the Chilean Army
Netherlands: Used by the Royal Netherlands Army
Portugal: Used by the Portuguese Army (5th Artillery Regiment)
Spain: Used by the Spanish Army
Sri Lanka: Used by the Sri Lankan Army
Turkey: Used by the Turkish Land Forces
Ukraine:
Two units delivered by the US Army in 2015.
Five units delivered by the Netherlands Ministry of Defence in March 2022, during the 2022 Russian invasion of Ukraine.
Ten units delivered by the US Army on April 13, 2022, with three more delivered on May 19, during the 2022 Russian invasion of Ukraine.
United States: Used by the United States Army and United States Marine Corps
See also
AN/MPQ-64
ARTHUR (military)
Red Color
SLC-2 Radar
Swathi Weapon Locating Radar
List of radars
List of military electronics of the United States
References
External links
Product Description for AN/TPQ-36 from ThalesRaytheonSystems
TPQ-36 Radar Data Sheet from ThalesRaytheonSystems
Fact sheet for the AN/TPQ-36 from Raytheon
ROCS new upgrades for TPQ-36/37 from BES Systems
Fact file for the AN/TPQ-36 from GlobalSecurity.org
Ground radars
Hughes Aircraft Company
Military radars of the United States
Northrop Grumman radars
Radar equipment of the Cold War
Raytheon Company products
Weapon locating radar
Military radars of the United States Marine Corps
Military equipment introduced in the 1980s
Military electronics of the United States | AN/TPQ-36 Firefinder radar | [
"Technology"
] | 1,019 | [
"Warning systems",
"Weapon locating radar"
] |
5,697,912 | https://en.wikipedia.org/wiki/Imaging%20spectrometer | An imaging spectrometer is an instrument used in hyperspectral imaging and imaging spectroscopy to acquire a spectrally-resolved image of an object or scene, usually to support analysis of the composition of the object being imaged. The spectral data produced is often referred to as a datacube due to its three-dimensional representation: two axes of the image correspond to vertical and horizontal distance and the third to wavelength. The principle of operation is the same as that of the simple spectrometer, but special care is taken to avoid optical aberrations for better image quality.
Example imaging spectrometer types include: filtered camera, whiskbroom scanner, pushbroom scanner, integral field spectrograph (or related dimensional reformatting techniques), wedge imaging spectrometer, Fourier transform imaging spectrometer, computed tomography imaging spectrometer (CTIS), image replicating imaging spectrometer (IRIS), coded aperture snapshot spectral imager (CASSI), and image mapping spectrometer (IMS).
Background
In 1704, Sir Isaac Newton demonstrated that white light could be split up into component colours. The subsequent history of spectroscopy led to precise measurements and provided the empirical foundations for atomic and molecular physics (Born & Wolf, 1999). Significant achievements in imaging spectroscopy are attributed to airborne instruments, particularly arising in the early 1980s and 1990s (Goetz et al., 1985; Vane et al., 1984). However, it was not until 1999 that the first imaging spectrometer was launched in space (the NASA Moderate-resolution Imaging Spectroradiometer, or MODIS).
Terminology and definitions evolve over time. At one time, more than 10 spectral bands sufficed to justify the term imaging spectrometer; presently the term is seldom defined by a minimum number of spectral bands, but rather by the requirement that the bands be contiguous (or redundant).
Principle
Imaging spectrometers are used specifically for the purpose of measuring the spectral content of light and other electromagnetic radiation. The spectral data gathered is used to give the operator insight into the sources of the radiation. Prism spectrometers use a classical method of dispersing radiation by means of a prism as a refracting element.
The imaging spectrometer works by imaging a radiation source onto a "slit" by means of a source imager. A collimator collimates the beam, which is dispersed by a refracting prism and re-imaged onto a detection system by a re-imager. Special care is taken to produce the best possible image of the source on the slit, and the purpose of the collimator and re-imaging optics is likewise to produce the best possible image of the slit. The detection system at this stage is an area array of detector elements. Every point of the source image is re-imaged as a line spectrum onto a column of the detector array, so the detector-array signals supply spectral data for spatially resolved points within the source area: these points are imaged onto the slit and then re-imaged onto the detector array. The system thus simultaneously provides spectral information about a line of spatially resolved points in the source area. The line is then scanned to build up a complete record of the spectral content of the scene.
In imaging spectroscopy (also hyperspectral imaging or spectral imaging) each pixel of an image acquires many bands of light intensity data from the spectrum, instead of just the three bands of the RGB color model. More precisely, it is the simultaneous acquisition of spatially coregistered images in many spectrally contiguous bands.
Some spectral images contain only a few image planes of a spectral data cube, while others are better thought of as full spectra at every location in the image. For example, solar physicists use the spectroheliograph to make images of the Sun built up by scanning the slit of a spectrograph, to study the behavior of surface features on the Sun; such a spectroheliogram may have a spectral resolution of over 100,000 and be used to measure local motion (via the Doppler shift) and even the magnetic field (via the Zeeman splitting or Hanle effect) at each location in the image plane. The multispectral images collected by the Opportunity rover, in contrast, have only four wavelength bands and hence are only a little more than 3-color images.
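As a concrete illustration of the datacube layout (a hypothetical NumPy sketch; the array dimensions are arbitrary rather than those of any particular instrument):

```python
import numpy as np

# Hypothetical datacube: 512 x 512 spatial pixels, 224 spectral bands.
# Two axes are spatial (vertical and horizontal) and one is wavelength.
cube = np.zeros((512, 512, 224), dtype=np.float32)

spectrum = cube[100, 200, :]   # full spectrum at one spatial location
band_image = cube[:, :, 42]    # single-wavelength image plane
print(spectrum.shape, band_image.shape)  # (224,) (512, 512)
```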
Unmixing
Hyperspectral data is often used to determine what materials are present in a scene. Materials of interest could include roadways, vegetation, and specific targets (e.g. pollutants, hazardous materials, etc.). Trivially, each pixel of a hyperspectral image could be compared to a material database to determine the type of material making up the pixel. However, many hyperspectral imaging platforms have low spatial resolution (>5 m per pixel), causing each pixel to be a mixture of several materials. The process of unmixing one of these 'mixed' pixels is called hyperspectral image unmixing or simply hyperspectral unmixing.
A solution to hyperspectral unmixing is to reverse the mixing process. Generally, two models of mixing are assumed: linear and nonlinear.
Linear mixing models the ground as being flat, with incident sunlight causing the materials to radiate some amount of the incident energy back to the sensor. Each pixel is then modeled as a linear sum of all the radiated energy curves of the materials making up the pixel, so each material contributes to the sensor's observation in a positive linear fashion. Additionally, a conservation of energy constraint is often observed, forcing the weights of the linear mixture to sum to one in addition to being positive. The model can be described mathematically as

y = Mx

where y represents a pixel observed by the sensor, M is a matrix of material reflectance signatures (each signature is a column of the matrix), and x is the vector of proportions of the materials present in the observed pixel. This type of model is also referred to as a simplex,
with x satisfying the two constraints:
1. Abundance Nonnegativity Constraint (ANC) - each element of x is nonnegative.
2. Abundance Sum-to-one Constraint (ASC) - the elements of x must sum to one.
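A minimal sketch of linear unmixing under these two constraints follows (Python with NumPy/SciPy; the endmember matrix is synthetic, the helper name unmix_fcls is illustrative, and the weighted sum-to-one row is a common approximation to the ASC rather than the method of any specific paper):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(M, y, delta=1e3):
    """Approximate fully constrained least-squares unmixing.

    M : (bands, endmembers) matrix of endmember signatures.
    y : (bands,) observed pixel spectrum.
    The ANC is enforced by non-negative least squares; the ASC is
    approximated by appending a sum-to-one row weighted by delta.
    """
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])
    y_aug = np.append(y, delta)
    x, _ = nnls(M_aug, y_aug)   # solves min ||M_aug x - y_aug|| with x >= 0
    return x

# Synthetic example: two endmembers observed in four spectral bands.
M = np.array([[0.1, 0.8],
              [0.2, 0.7],
              [0.6, 0.3],
              [0.9, 0.1]])
y = M @ np.array([0.3, 0.7])    # pixel that is a 30%/70% mixture
print(unmix_fcls(M, y))         # approximately [0.3, 0.7]
```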
Non-linear mixing results from multiple scattering, often due to non-flat surfaces such as buildings and vegetation.
There are many algorithms to unmix hyperspectral data, each with its own strengths and weaknesses. Many algorithms assume that pure pixels (pixels which contain only one material) are present in a scene.
Some algorithms to perform unmixing are listed below:
Pixel Purity Index - works by projecting each pixel onto one vector from a set of random vectors spanning the reflectance space. A pixel receives a score each time it represents an extremum of one of the projections. Pixels with the highest scores are deemed to be spectrally pure (see the sketch after this list).
N-FINDR
Gift Wrapping Algorithm
Independent Component Analysis Endmember Extraction Algorithm - works by assuming that pure pixels occur independently of mixed pixels. Assumes pure pixels are present.
Vertex Component Analysis - works on the fact that the affine transformation of a simplex is another simplex which helps to find hidden (folded) vertices of the simplex. Assumes pure pixels are present.
Principal component analysis - could also be used to determine endmembers; projection on principal axes could permit endmember selection [Smith, Johnson and Adams (1985), Bateson and Curtiss (1996)]
Multi-endmember spatial mixture analysis, based on the SMA algorithm
Spectral phasor analysis based on Fourier transformation of spectra and plotting them on a 2D plot.
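As referenced in the Pixel Purity Index entry above, here is a minimal NumPy sketch of that idea (the function name and parameter defaults are illustrative assumptions, not a reference implementation):

```python
import numpy as np

def pixel_purity_index(pixels, n_projections=1000, seed=0):
    """Score pixels by how often they are extrema of random projections.

    pixels : (n_pixels, bands) array of spectra.
    Returns an integer score per pixel; high scores suggest purity.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(pixels), dtype=int)
    for _ in range(n_projections):
        v = rng.standard_normal(pixels.shape[1])
        v /= np.linalg.norm(v)           # random unit direction
        proj = pixels @ v
        scores[np.argmin(proj)] += 1     # extremum at each end counts
        scores[np.argmax(proj)] += 1
    return scores
```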
Non-linear unmixing algorithms also exist, for example those based on support vector machines or analytical neural networks.
Probabilistic methods, such as Monte Carlo unmixing algorithms, have also been attempted to unmix pixels.
Once the fundamental materials of a scene are determined, it is often useful to construct an abundance map of each material, which displays the fractional amount of the material present at each pixel. Linear programming is often used to satisfy the ANC and ASC.
Applications
Planetary observations
A principal application of imaging spectrometers is observing the planet Earth from orbiting satellites. The spectrometer records the spectral content of every point in a picture, and is focused on specific parts of the Earth's surface to record data. The advantages of spectral content data include vegetation identification, physical condition analysis, mineral identification for the purpose of potential mining, and the assessment of polluted waters in oceans, coastal zones and inland waterways.
Prism spectrometers are well suited to Earth observation because they measure wide spectral ranges competently. Spectrometers can be set to cover a range from 400 nm to 2,500 nm, which interests scientists who are able to observe Earth by means of aircraft and satellite. The spectral resolution of the prism spectrometer is, however, not fine enough for most scientific applications; its use is therefore specific to recording the spectral content of areas with greater spatial variations.
Venus Express, orbiting Venus, had a number of imaging spectrometers covering NIR-vis-UV.
Geophysical imaging
One application is spectral geophysical imaging, which allows quantitative and qualitative characterization of the surface and of the atmosphere, using radiometric measurements. These measurements can then be used for unambiguous direct and indirect identification of surface materials and atmospheric trace gases, the measurement of their relative concentrations, subsequently the assignment of the proportional contribution of mixed pixel signals (e.g., the spectral unmixing problem), the derivation of their spatial distribution (mapping problem), and finally their study over time (multi-temporal analysis). The Moon Mineralogy Mapper on Chandrayaan-1 was a geophysical imaging spectrometer.
Disadvantages
The lenses of the prism spectrometer are used for both collimation and re-imaging; however, the imaging spectrometer is limited in its performance by the image quality provided by the collimators and re-imagers. The resolution of the slit image at each wavelength limits spatial resolution; likewise, the resolution of optics across the slit image at each wavelength limits spectral resolution. Moreover, distortion of the slit image at each wavelength can complicate the interpretation of the spectral data.
The refracting lenses used in the imaging spectrometer limit performance through the axial chromatic aberrations of the lens. These chromatic aberrations are undesirable because they create differences in focus that prevent good resolution; if the spectral range is restricted, however, good resolution is achievable. Furthermore, chromatic aberrations can be corrected by using two or more refracting materials over the full visible range; it is harder to correct them over wider spectral ranges without further optical complexity.
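The standard two-material correction is the thin achromatic doublet; a sketch of the textbook design relations follows (a Python sketch under stated assumptions: the Abbe numbers in the example are typical crown/flint values, not those of any instrument described here):

```python
def achromat_powers(phi_total, v1, v2):
    """Split a total optical power between two glasses so the doublet
    focuses two reference wavelengths to the same point.

    Solves phi1 + phi2 = phi_total and phi1/v1 + phi2/v2 = 0,
    where v1, v2 are the Abbe numbers of the two glasses.
    """
    phi1 = phi_total * v1 / (v1 - v2)
    phi2 = -phi_total * v2 / (v1 - v2)
    return phi1, phi2

# Example: a 10-dioptre doublet from crown (v=60) and flint (v=36) glass.
print(achromat_powers(10.0, 60.0, 36.0))  # (25.0, -15.0) dioptres
```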
Systems
Spectrometers intended for very wide spectral ranges are best if made with all-mirror systems. These particular systems have no chromatic aberrations, and that is why they are preferable. On the other hand, spectrometers with single point or linear array detection systems require simpler mirror systems. Spectrometers using area-array detectors need more complex mirror systems to provide good resolution. It is conceivable that a collimator could be made that would prevent all aberrations; however, this design is expensive because it requires the use of aspherical mirrors.
Smaller two-mirror systems can correct aberrations, but they are not suited for imaging spectrometers. Three-mirror systems are compact and correct aberrations as well, but they require at least two aspherical components. Systems with more than four mirrors tend to be large and much more complex. Catadioptric systems are also used in imaging spectrometers and are compact too; however, the collimator or imager will be made up of two curved mirrors and three refracting elements, making the system very complex.
Optical complexity is unfavorable, however, because every optical surface scatters light and adds stray reflections. Scattered radiation can enter the detector and cause errors in the recorded spectra; such scattered radiation is referred to as stray light. Limiting the total number of surfaces that can contribute to scatter therefore limits the stray light introduced.
Imaging spectrometers are meant to produce well resolved images. In order for this to occur, imaging spectrometers need to be made with few optical surfaces and have no aspherical optical surfaces.
Sensors
Planned:
EnMAP
Current and Past:
AVIRIS — airborne
MODIS — on board EOS Terra and Aqua platforms
MERIS — on board Envisat
Hyperion — on board Earth Observing-1
Several commercial manufacturers for laboratory, ground-based, aerial, or industrial imaging spectrographs
Examples
Ralph (New Horizons), Visible and ultraviolet imaging spectrometer on New Horizons
Jovian Infrared Auroral Mapper, infrared imaging spectrometer on Juno Jupiter orbiter
Mapping Imaging Spectrometer for Europa (planned for the developmental Europa Clipper spacecraft)
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), imaging spectrometer in Mars orbit aboard Mars Reconnaissance Orbiter
Special Sensor Ultraviolet Limb Imager, to observe the earth's ionosphere and thermosphere
See also
Landsat
Remote sensing
Hyperspectral imaging
Full Spectral Imaging
List of Earth observation satellites
Chemical Imaging
Infrared Microscopy
Phasor approach to fluorescence lifetime and spectral imaging
Video spectroscopy
References
Further reading
Goetz, A.F.H., Vane, G., Solomon, J.E., & Rock, B.N. (1985) Imaging spectrometry for earth remote sensing. Science, 228, 1147.
Schaepman, M. (2005) Spectrodirectional Imaging: From Pixels to Processes. Inaugural address, Wageningen University, Wageningen (NL).
Vane, G., Chrisp, M., Emmark, H., Macenka, S., & Solomon, J. (1984) Airborne Visible Infrared Imaging Spectrometer (AVIRIS): An Advanced Tool for Earth Remote Sensing. European Space Agency, (Special Publication) ESA SP, 2, 751.
External links
List of imaging spectrometer instruments
About imaging spectroscopy (USGS): http://speclab.cr.usgs.gov/aboutimsp.html
Link to resources (OKSI): http://www.techexpo.com/WWW/opto-knowledge/IS_resources.html
Special Interest Group Imaging Spectroscopy (EARSeL): https://web.archive.org/web/20051230225147/http://www.op.dlr.de/dais/SIG-IS/SIG-IS.html
Applications of Spectroscopic and Chemical Imaging in Research: http://www3.imperial.ac.uk/vibrationalspectroscopyandchemicalimaging/research
Analysis tool for spectral unmixing : http://www.spechron.com
Image sensors
Spectrometers | Imaging spectrometer | [
"Physics",
"Chemistry"
] | 3,098 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
5,698,231 | https://en.wikipedia.org/wiki/Expensive%20Tape%20Recorder | Expensive Tape Recorder is a digital audio program written by David Gross while a student at the Massachusetts Institute of Technology. Gross developed the idea with Alan Kotok, a fellow member of the Tech Model Railroad Club. The recorder and playback system ran in the late 1950s or early 1960s on MIT's TX-0 computer on loan from Lincoln Laboratory.
The name
Gross referred to this project by this name casually, in the context of Expensive Typewriter and other programs that took their names in the spirit of "Colossal Typewriter". It is unclear whether the programs were named for the 3 million USD development cost of the TX-0, or for the retail price of the DEC PDP-1, a descendant of the TX-0 installed next door at MIT in 1961. The PDP-1 was one of the least expensive computers money could buy, at about 120,000 in 1962 USD. The program has been referred to as a hack, perhaps in the historical sense or in the MIT hack sense; or the term may have been applied to it in the sense of Hackers: Heroes of the Computer Revolution, a book by Steven Levy.
The project
Gross recalled and very briefly described the project in a 1984 Computer Museum meeting. A person associated with the Tixo Web site spoke with Gross and Kotok, and posted the only other description known.
Influence
According to Kotok, the project was "digital recording more than 20 years ahead of its time." In 1984, when Jack Dennis asked if they could recognize Beethoven, Computer Museum meeting minutes record the authors as saying, "It wasn't bad, considering." Digital audio pioneer Thomas Stockham worked with Dennis and like Kotok helped develop a contemporary debugger. Whether he was first influenced by Expensive Tape Recorder or more by the work of Kenneth N. Stevens is unknown.
See also
PDP-1
Digital recording
Expensive Typewriter
Expensive Desk Calculator
Expensive Planetarium
Harmony Compiler
Notes
References
Digital audio
History of software | Expensive Tape Recorder | [
"Technology"
] | 404 | [
"History of software",
"History of computing"
] |
5,698,359 | https://en.wikipedia.org/wiki/Immunodermatology | Immunodermatology studies skin as an organ of immunity in health and disease. Several areas receive special attention, such as photo-immunology (effects of UV light on skin defense), inflammatory diseases such as hidradenitis suppurativa, allergic contact dermatitis and atopic eczema, presumably autoimmune skin diseases such as vitiligo and psoriasis, and finally the immunology of microbial skin diseases such as retrovirus infections and leprosy. New therapies in development for the immunomodulation of common immunological skin diseases include biologicals aimed at neutralizing TNF-alpha and chemokine receptor inhibitors.
Testing sites
Multiple universities currently perform immunodermatology testing:
University of Utah Health.
University of North Carolina.
See also
Dermatology
Immune response
References
Branches of immunology
Dermatology | Immunodermatology | [
"Biology"
] | 186 | [
"Branches of immunology"
] |