id: int64 (580 to 79M)
url: string (length 31 to 175)
text: string (length 9 to 245k)
source: string (length 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
399,590
https://en.wikipedia.org/wiki/Yamaha%20DSP-1
The Yamaha DSP-1 is an early home theater surround sound processor, introduced in 1986. The DSP-1 (referred to by Yamaha as a Digital Soundfield Processor) allowed owners to synthesize up to six channels of surround sound from two-channel stereo via a complex digital signal processor (DSP). Much like today's home theater receivers, the DSP-1 offered sixteen "sound fields" created through the DSP, including a jazz club, a cathedral, a concert hall, and a stadium. However, unlike today's integrated amplifiers and receivers, these soundfield modes were highly editable, allowing owners to customize the effect to their personal taste. The DSP-1 also included an analog Dolby Surround decoder as well as other effects such as real-time echo and pitch change. Most of the DSP-1's controls are on the unit's remote control; according to the manual, this was because adjustments were intended to be made from the listening position. This can make it difficult for collectors to find a complete functioning unit, although at least one provider offers aftermarket remote controls with duplicate programming for the DSP-1. In Dolby Surround mode, only four channels are active, with just the front main channels and rear surround channels operating and the forward surround channels muted. Yamaha has kept the DSP prefix for many of its home DSP and audio amplifier/receiver products. See also AMD TrueAudio E-mu 20K References External links Yamaha Yamaha DSP Demo from late 1980s Yamaha DSP Demo from early 1990s Alternate Yamaha DSP Demo from early 1990s DSP-1 Audio electronics
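The article mentions real-time echo among the DSP-1's effects. The sketch below is a generic feedback-delay echo in Python, purely to illustrate the kind of processing a digital soundfield processor applies; it is not the DSP-1's actual algorithm, and all parameter values are arbitrary assumptions.

```python
# Minimal sketch of a feedback-delay "echo" effect of the kind a digital
# soundfield processor applies; a generic illustration, not the DSP-1's
# actual algorithm.  Parameter values are arbitrary assumptions.
import numpy as np

def echo(x, sample_rate=48000, delay_s=0.25, feedback=0.4, mix=0.5):
    """Add a decaying echo: wet[n] = x[n-D] + feedback*wet[n-D], output = x + mix*wet."""
    d = int(delay_s * sample_rate)           # delay in samples
    wet = np.zeros_like(x, dtype=float)
    for n in range(d, len(x)):
        wet[n] = x[n - d] + feedback * wet[n - d]
    return x + mix * wet

# Example: a single click echoes every 0.25 s with decaying amplitude.
signal = np.zeros(48000)
signal[0] = 1.0
out = echo(signal)
print(np.nonzero(out)[0][:4])   # impulses at samples 0, 12000, 24000, 36000
```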
Yamaha DSP-1
Engineering
353
47,551,381
https://en.wikipedia.org/wiki/Piricaudiopsis%20punicae
Piricaudiopsis punicae is a fungus occurring on dead branches of Punica granatum, hence its name. It was first found in a tropical forest in southern China. It differs from other Piricaudiopsis species in conidial morphology and in the proliferation of its conidiogenous cell. The presence or proliferation of the conidiogenous cells and the conidial appendages, as well as the height of its conidia, are considered putative phylogenetic characters of this genus. References Further reading Zhang, Kai, et al. "Xiuguozhangia, a new genus of microfungi to accommodate five Piricaudiopsis species." Mycotaxon 128.1 (2014): 131–135. Zhang, Kai. Taxonomic study of dematiaceous hyphomycetes on fallen dead branches in Hainan and Yunnan provinces, China. MS thesis. Shandong Agricultural University, 2009. External links MycoBank Fungal plant pathogens and diseases Enigmatic Ascomycota taxa Fungus species
Piricaudiopsis punicae
Biology
215
42,670,365
https://en.wikipedia.org/wiki/Journal%20of%20Geometry%20and%20Physics
The Journal of Geometry and Physics is a scientific journal in mathematical physics. Its scope is to stimulate the interaction between geometry and physics by publishing primary research and review articles which are of common interest to practitioners in both fields. The journal has been published by Elsevier since 1984. The journal covers the following areas of research: Methods of: Algebraic and Differential Topology Algebraic Geometry Real and Complex Differential Geometry Riemannian and Finsler Manifolds Symplectic Geometry Global Analysis, Analysis on Manifolds Geometric Theory of Differential Equations Geometric Control Theory Lie Groups and Lie Algebras Supermanifolds and Supergroups Discrete Geometry Spinors and Twistors Applications to: Strings and Superstrings Noncommutative Topology and Geometry Quantum Groups Geometric Methods in Statistics and Probability Geometry Approaches to Thermodynamics Classical and Quantum Dynamical Systems Classical and Quantum Integrable Systems Classical and Quantum Mechanics Classical and Quantum Field Theory General Relativity Quantum Information Quantum Gravity Editors The editor-in-chief is G. Landi (Università di Trieste). The Advisory Editor is U. Bruzzo. The Editors are L. Jeffrey, V. Mathai and V. Rubtsov. Impact factor According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.249. Abstracting and indexing This journal is indexed by the following services: Current Contents / Physics, Chemical, & Earth Sciences Web of Science Mathematical Reviews INSPEC Zentralblatt MATH Scopus External links Geometry journals Physics journals Academic journals established in 1984 English-language journals Elsevier academic journals Mathematical physics journals Algebraic geometry journals
Journal of Geometry and Physics
Mathematics
320
1,149,065
https://en.wikipedia.org/wiki/Fluidization
Fluidization (or fluidisation) is a process similar to liquefaction whereby a granular material is converted from a static solid-like state to a dynamic fluid-like state. This process occurs when a fluid (liquid or gas) is passed up through the granular material. When a gas flow is introduced through the bottom of a bed of solid particles, it will move upwards through the bed via the empty spaces between the particles. At low gas velocities, aerodynamic drag on each particle is also low, and thus the bed remains in a fixed state. Increasing the velocity, the aerodynamic drag forces will begin to counteract the gravitational forces, causing the bed to expand in volume as the particles move away from each other. Further increasing the velocity, it will reach a critical value at which the upward drag forces will exactly equal the downward gravitational forces, causing the particles to become suspended within the fluid. At this critical value, the bed is said to be fluidized and will exhibit fluidic behavior. By further increasing gas velocity, the bulk density of the bed will continue to decrease, and its fluidization becomes more intense until the particles no longer form a bed and are "conveyed" upwards by the gas flow. When fluidized, a bed of solid particles will behave as a fluid, like a liquid or gas. Like water in a bucket: the bed will conform to the volume of the chamber, its surface remaining perpendicular to gravity; objects with a lower density than the bed density will float on its surface, bobbing up and down if pushed downwards, while objects with a higher density sink to the bottom of the bed. The fluidic behavior allows the particles to be transported like a fluid, channeled through pipes, not requiring mechanical transport (e.g. conveyor belt). A simplified every-day-life example of a gas-solid fluidized bed would be a hot-air popcorn popper. The popcorn kernels, all being fairly uniform in size and shape, are suspended in the hot air rising from the bottom chamber. Because of the intense mixing of the particles, akin to that of a boiling liquid, this allows for a uniform temperature of the kernels throughout the chamber, minimizing the amount of burnt popcorn. After popping, the now larger popcorn particles encounter increased aerodynamic drag which pushes them out of the chamber and into a bowl. The process is also key in the formation of a sand volcano and fluid escape structures in sediments and sedimentary rocks. Applications Most of the fluidization applications use one or more of three important characteristics of fluidized beds: Fluidized solids can be easily transferred between reactors. The intense mixing within a fluidized bed means that its temperature is uniform. There is excellent heat transfer between a fluidized bed and heat exchangers immersed in the bed. In the 1920s, the Winkler process was developed to gasify coal in a fluidized bed, using oxygen. It was not commercially successful. The first large scale commercial implementation, in the early 1940s, was the fluid catalytic cracking (FCC) process, which converted heavier petroleum cuts into gasoline. Carbon-rich "coke" deposits on the catalyst particles and deactivates the catalyst in less than 1 second. The fluidized catalyst particles are shuttled between the fluidized bed reactor and a fluidized bed burner where the coke deposits are burned off, generating heat for the endothermic cracking reaction. 
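The onset of fluidization described above, the point where upward drag balances the weight of the bed, can be estimated numerically. A minimal sketch follows, using the Ergun packed-bed pressure-drop correlation; the correlation is a standard approach that is not given in the text, and all particle and fluid property values are illustrative assumptions.

```python
# Sketch: estimate the minimum fluidization velocity u_mf by equating the
# Ergun packed-bed pressure drop per unit height to the buoyant weight of
# the bed per unit volume.  The Ergun correlation and the property values
# below are textbook assumptions, not data from the article.
import math

def u_mf(dp, rho_p, rho_f, mu, eps=0.45, g=9.81):
    """Superficial gas velocity (m/s) at incipient fluidization."""
    a = 1.75 * (1 - eps) * rho_f / (eps**3 * dp)          # inertial term coefficient
    b = 150 * mu * (1 - eps)**2 / (eps**3 * dp**2)        # viscous term coefficient
    c = (1 - eps) * (rho_p - rho_f) * g                   # bed weight per unit volume
    return (-b + math.sqrt(b**2 + 4 * a * c)) / (2 * a)   # positive root of a*u^2 + b*u - c = 0

# Example: 300-micron sand (2600 kg/m^3) fluidized by ambient air.
print(f"u_mf = {u_mf(dp=300e-6, rho_p=2600, rho_f=1.2, mu=1.8e-5):.3f} m/s")
```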
By the 1950s, fluidized bed technology was being applied to mineral and metallurgical processes such as drying, calcining, and sulfide roasting. In the 1960s, several fluidized bed processes dramatically reduced the cost of some important monomers. Examples are the Sohio process for acrylonitrile and the oxychlorination process for vinyl chloride. These chemical reactions are highly exothermic and fluidization ensures a uniform temperature, minimizing unwanted side reactions, and efficient heat transfer to cooling tubes, ensuring high productivity. In the late 1970s, a fluidized bed process for the synthesis of polyethylene dramatically reduced the cost of this important polymer, making its use economical in many new applications. The polymerization reaction generates heat and the intense mixing associated with fluidization prevents hot spots where the polyethylene particles would melt. A similar process is used for the synthesis of polypropylene. Currently, most of the processes that are being developed for the industrial production of carbon nanotubes use a fluidized bed. Arkema uses a fluidized bed to produce 400 tonnes/year of multiwall carbon nanotubes. A new potential application of fluidization technology is chemical looping combustion, which has not yet been commercialized. One solution to reducing the potential effect of carbon dioxide generated by fuel combustion (e.g. in power stations) on global warming is carbon dioxide sequestration. Regular combustion with air produces a gas that is mostly nitrogen (as it is air's main component at about 80% by volume), which prevents economical sequestration. Chemical looping uses a metal oxide as a solid oxygen carrier. These metal oxide particles replace air (specifically oxygen in the air) in a combustion reaction with a solid, liquid, or gaseous fuel in a fluidized bed, producing solid metal particles from the reduction of the metal oxides and a mixture of carbon dioxide and water vapor, the major products of any combustion reaction. The water vapor is condensed, leaving pure carbon dioxide which can be sequestered. The solid metal particles are circulated to another fluidized bed where they react with air (and again, specifically oxygen in the air), producing heat and oxidizing the metal particles to metal oxide particles that are recirculated to the fluidized bed combustor. A similar process is used to produce maleic anhydride through the partial oxidation of n-butane, with the circulating particles acting as both catalyst and oxygen carrier; pure oxygen is also introduced directly into the bed. Nearly 50% of the silicon in solar cells is produced in fluidized beds. For example, metallurgical-grade silicon is first reacted to silane gas. The silane gas is thermally cracked in a fluidized bed of seed silicon particles, and the silicon deposits on the seed particles. The cracking reaction is endothermic, and heat is provided through the bed wall, typically made of graphite (to avoid metal contamination of the product silicon). The bed particle size can be controlled using attrition jets. Silane is often premixed with hydrogen to reduce the explosion risk of leaked silane in the air (see silane). Liquid-solid fluidization has a number of applications in engineering The best-known application of liquid-solid fluidization is the backwash of granular filters using water. Fluidization has many applications with the use of ion exchange particles for the purification and processing of many industrial liquid streams. 
Industries such as food and beverage, hydrometallurgy, water softening, catalysis, and bio-based chemicals use ion exchange as a critical processing step. Conventionally, ion exchange has been carried out in a packed bed, where a pre-clarified liquid passes downward through a column. Much work has been done at the University of Western Ontario in London, Ontario, Canada on a continuous fluidized ion exchange system, named the "liquid-solid circulating fluidized bed" (LSCFB) and more recently called "circulating fluidized ion exchange" (CFIX). This system extends the use of traditional ion exchange systems because, thanks to fluidization, it can handle feed streams with large amounts of suspended solids. References External links UBC Fluidization Research Centre ICFAR Chemical processes
Fluidization
Chemistry
1,564
12,218,154
https://en.wikipedia.org/wiki/Cylindrical%20algebraic%20decomposition
In mathematics, cylindrical algebraic decomposition (CAD) is a notion, along with an algorithm to compute it, that is fundamental for computer algebra and real algebraic geometry. Given a set S of polynomials in Rn, a cylindrical algebraic decomposition is a decomposition of Rn into connected semialgebraic sets called cells, on which each polynomial has constant sign, either +, − or 0. To be cylindrical, this decomposition must satisfy the following condition: If 1 ≤ k < n and π is the projection from Rn onto Rn−k consisting in removing the last k coordinates, then for every pair of cells c and d, one has either π(c) = π(d) or π(c) ∩ π(d) = ∅. This implies that the images by π of the cells define a cylindrical decomposition of Rn−k. The notion was introduced by George E. Collins in 1975, together with an algorithm for computing it. Collins' algorithm has a computational complexity that is double exponential in n. This is an upper bound, which is reached on most entries. There are also examples for which the minimal number of cells is doubly exponential, showing that every general algorithm for cylindrical algebraic decomposition has a double exponential complexity. CAD provides an effective version of quantifier elimination over the reals that has a much better computational complexity than that resulting from the original proof of Tarski–Seidenberg theorem. It is efficient enough to be implemented on a computer. It is one of the most important algorithms of computational real algebraic geometry. Searching to improve Collins' algorithm, or to provide algorithms that have a better complexity for subproblems of general interest, is an active field of research. Implementations Mathematica: CylindricalDecomposition QEPCAD -- Quantifier Elimination by Partial Cylindrical Algebraic Decomposition redlog Maple: The RegularChains Library and ProjectionCAD References Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise Algorithms in real algebraic geometry. Second edition. Algorithms and Computation in Mathematics, 10. Springer-Verlag, Berlin, 2006. x+662 pp. ; 3-540-33098-4 Strzebonski, Adam. Cylindrical Algebraic Decomposition from MathWorld. Cylindrical Algebraic Decomposition in Chapter 6 ("Combinatorial Motion Planning") of Planning algorithms by Steven M. LaValle. Accessed 8 February 2023 Caviness, Bob; Johnson, Jeremy; Quantifier Elimination and Cylindrical Algebraic Decomposition. Texts and Monographs in Symbolic Computation. Springer-Verlag, Berlin, 1998. Collins, George E.: Quantifier elimination for the elementary theory of real closed fields by cylindrical algebraic decomposition, Second GI Conf. Automata Theory and Formal Languages, Springer LNCS 33, 1975. Davenport, James H.; Heintz, Joos: Real quantifier elimination is doubly exponential, Journal of Symbolic Computation, 1988. Volume 5, Issues 1–2, ISSN 0747-7171, Computer algebra Real algebraic geometry Polynomials Quantifier (logic)
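As a concrete illustration of sign-invariant cells, the sketch below computes the one-dimensional base case of a cylindrical algebraic decomposition: for a single univariate polynomial it splits the real line into points and open intervals on which the sign is constant. It uses SymPy (my choice of tool, not mentioned in the text) and is only a toy illustration of the cell concept, not Collins' full projection-and-lifting algorithm.

```python
# Toy 1-D illustration of CAD cells: decompose the real line into points and
# open intervals on which a univariate polynomial has constant sign.
# This is only the base case of CAD, not Collins' full algorithm.
import sympy as sp

x = sp.symbols('x')
p = x**3 - x                      # example polynomial with roots -1, 0, 1

roots = sorted(sp.real_roots(p))  # exact real roots, in increasing order
points = [-sp.oo] + roots + [sp.oo]

cells = []
for a, b in zip(points[:-1], points[1:]):
    # open interval between consecutive roots: pick a sample point inside it
    sample = (a + b) / 2 if a.is_finite and b.is_finite else (b - 1 if a == -sp.oo else a + 1)
    cells.append((sp.Interval.open(a, b), sp.sign(p.subs(x, sample))))
    if b.is_finite:
        cells.append((sp.FiniteSet(b), 0))  # each root is a cell on which p = 0

for cell, sign in cells:
    print(cell, "sign of p:", sign)
```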
Cylindrical algebraic decomposition
Mathematics,Technology
626
30,295,708
https://en.wikipedia.org/wiki/RF%20planning
In the context of mobile radio communication systems, RF planning is the process of assigning frequencies, transmitter locations and parameters to a wireless communications system to evaluate coverage and capacity. Coverage is the distance at which the RF signal has sufficient strength to sustain a call/data session. Capacity relates to the system data rate. The RF Planning process consists of four major stages. Phase 1: initial radio link budgeting A statistical propagation model (e.g. Hata, COST-231 Hata or Erceg-Greenstein) is used to approximate the coverage area of the planned sites and to eventually determine how many sites are required. The statistical propagation of the model does not include terrain effects and has a model for each type of environment (rural, urban, suburban, etc.). Two essential inputs at this level are simple radio transceiver characteristics and 'flat' map of the area. This fairly simplistic approach allows for a quick analysis of the number of sites that may be required to cover a certain area. Phase 2: detailed RF propagation modelling The second level of the RF Planning process relies on a more detailed propagation model. Automatic planning tools are often employed in this phase to perform detailed predictions. The propagation model takes into account the characteristics of the selected antenna, the terrain, and the land use and land clutter surrounding each site. This requires precise and accurate characterization of every transceiver and detailed, three-dimensional model of the terrain. Since these factors are considered, this propagation model provides a better estimate of the coverage of the sites than the initial statistical propagation model. Thus, its use, in conjunction with the RF link budget, produces a more accurate determination of the number of sites required. Following is a typical list of outputs produced at this stage: Number of sites and site locations (and height) Antenna directions and downtilts Neighbour cell lists for each site Mobility (handover and cell re-selection) parameters for each site. Frequency plan Detailed coverage predictions (e.g. signal strength (RSRP), signal quality (RSRQ) best CINR, best server areas, uplink and downlink throughput) Phase 3: fine tuning and optimisation The third phase of the RF planning process incorporates further detail into the RF plan. This stage includes items such as collecting drive data to be used to tune or calibrate the propagation prediction model, predicting the available data throughout each site, fine-tuning of parameter settings (e.g. antenna orientation, downtilting, frequency plan). Phase 4: continuous optimisation The final phase of the RF planning process involves continuous optimisation of the RF plan to accommodate for changes in the environment or additional service requirements (e.g. additional coverage or capacity). This phase starts from initial network deployment and involves collecting measurement data on a regular basis which could be via drive testing or centralised collection. The data is then used to plan new sites or to optimize the parameter settings (e.g. antenna orientation, downtilting, frequency plan) of existing sites. See also Friis transmission equation Decibel Radiation pattern Multipath propagation Free space loss Network Simulator References Mobile telecommunications
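Phase 1 above combines a statistical propagation model with a radio link budget. The sketch below illustrates the idea with the COST-231 Hata urban model, inverting the link budget to get a maximum cell radius; the formula is the standard published expression, but every numeric input (powers, gains, sensitivity, margins) is purely an illustrative assumption.

```python
# Sketch of phase-1 RF planning: COST-231 Hata urban path-loss model plus a
# simple link budget, solved for the maximum cell radius.  The model formula
# is the published COST-231 Hata expression; every numeric input below
# (powers, gains, sensitivity, margins) is an illustrative assumption.
import math

def cost231_hata(f_mhz, d_km, h_base=30.0, h_mobile=1.5, metro_corr=0.0):
    """Median path loss (dB) for 1500-2000 MHz, 1-20 km, urban environment."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile - (1.56 * math.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base)
            - a_hm + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + metro_corr)

def max_cell_radius(f_mhz, tx_power_dbm, ant_gains_db, rx_sens_dbm, margins_db, h_base=30.0):
    """Invert the model: largest d (km) at which the received level meets sensitivity."""
    max_path_loss = tx_power_dbm + ant_gains_db - margins_db - rx_sens_dbm
    fixed = cost231_hata(f_mhz, 1.0, h_base)          # loss at d = 1 km (log10(1) = 0)
    slope = 44.9 - 6.55 * math.log10(h_base)
    return 10 ** ((max_path_loss - fixed) / slope)

# Example: 1800 MHz site, 43 dBm transmit power, 17 dB combined antenna gains,
# -104 dBm receiver sensitivity, 12 dB of fade/interference margin.
r = max_cell_radius(1800, 43, 17, -104, 12)
print(f"Path loss at 1 km: {cost231_hata(1800, 1.0):.1f} dB, max radius ~ {r:.1f} km")
```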
RF planning
Technology
646
7,876,543
https://en.wikipedia.org/wiki/OpenESB
OpenESB is a Java-based open-source enterprise service bus. It can be used as a platform for both enterprise application integration and service-oriented architecture. OpenESB allows developers to integrate legacy systems, external and internal partners, and new developments into business processes. It supports a multitude of integration technologies including standard JBI (Java Business Integration), XML with support for XML Schemas, WSDL, and BPEL, with the aim of simplicity, efficiency, long-term durability, and low TCO (Total Cost of Ownership). It used to be owned by Sun Microsystems, but after Oracle and Sun Microsystems merged (see: Sun acquisition by Oracle), the OpenESB Community was created to maintain, improve, promote and support OpenESB. Architecture OpenESB consists of five parts: the framework, the container, the components, the Integrated Development Environment and the development plugins. Framework The framework consists of a lightweight JBI implementation in Java. This implementation is container-agnostic and can work on any platform and any container. Although development and support are mainly focused on the GlassFish V2 and V3 platforms, beta projects on JBoss and on a standalone JVM work well and are in progress (as of Q2 2012). In addition to being lightweight, the OpenESB framework is also reliable and highly scalable. It is embedded in a Java virtual machine and communicates with other framework instances through binding components. This architecture fits well with cloud architectures and allows easy deployment and management on very complex infrastructures. The framework is fully manageable with any JMX-based tool such as JConsole or more sophisticated tools like Opsview or Nagios. The framework implements a virtual bus known as the Normalised Message Router (NMR), a powerful, asynchronous, intelligent communication channel between components. Components The JBI specification defines two component types: the service engine (SE) and the binding component (BC). The SE and BC implement the same interface contract; however, they behave differently: - Binding components act as the interface between the outside world and the bus, being able to generate bus messages upon receipt of stimuli from an external source, or generate an external action/interaction in response to a message received from the bus. - Service engines receive messages from the bus and send messages to the bus. SEs have no direct contact with the outside world. They rely on the bus for interaction with other components, whether binding components or other service engines. OpenESB includes many components 'out of the box'. OpenESB Binding Components OpenESB Service Engines Integrated Development Environment & Plugins OpenESB offers a set of graphical tools to ease complex SOA and integration developments. Graphical editors for XML, XML Schema, WSDL, BPEL, data mapping, and composite applications are provided with OpenESB. Similarly, build, deploy, un-deploy, run, test and debug tasks are managed by graphical tools, giving OpenESB good ergonomics for ESB and SOA development. Container OpenESB V3.1.2 does not require a container, only a JVM. Its memory footprint is therefore very low (less than 300 MB), allowing OpenESB to run on a Raspberry Pi or as many instances in a cloud. Next versions are planned for 2019. 
OpenESB community The table below lists the web sites and forums managed by the OpenESB community. See also Service-oriented architecture (SOA) Service Component Architecture (SCA) Apache Camel Apache CXF System integration Enterprise Service Bus Enterprise Integration Patterns Event-driven SOA Java CAPS Eclipse Sirius - a free (GPL) Eclipse tool for quickly building arbitrarily complex custom modeling tools Eclipse SCA Tools - a free (GPL) composite editor built with Eclipse Sirius (Obeo Designer) References Java Business Integration JBI specification External links OpenESB project Pymma OpenESB Enterprise Edition, Consulting, training, architecture design, development and Global 24x7 Support LogiCoy OpenESB Development, Consulting and Global 24x7 Support Youtube - NetBeans Open ESB SOA Tools, Composite Application, CASA Quick Start Guide to the NetBeans Open ESB CASA Editor https://soa.netbeans.org/ Integration platform Middleware Java enterprise platform
OpenESB
Technology,Engineering
898
66,000,776
https://en.wikipedia.org/wiki/Content%20house
A content house, or also known as a collab house, creator house, content collective or influencer group, is a residential property which is most commonly used by internet celebrities, social media influencers or content creators in order to provide a focus on creating content for social media platforms, such as YouTube, TikTok, and Instagram. Content houses are intended to provide a fertile ground for influencers to help provide content for their viewers, in addition to helping grow their profile and brand through collaborations with other members of the house. They are most associated with the users of TikTok, a video-sharing social networking service; and have been referred to as "TikTok houses". History An early example of a content house was first seen in the 1999 reality television show Big Brother, and the franchise that the show inspired. Contestants lived together in a home specifically designed to be isolated from the outside world, and the drama of the series derived from the interactions between its "housemates". The first social media content houses were created in 2012, with one of the earliest formed by YouTuber Connor Franta for the YouTube channel Our Second Life. Notable content houses include the former Team 10 house inhabited by Jake Paul, the FaZe House, the Hype House, the Sway House and The Creature House. The origins of collab houses date back to 2014 when the members of Our Second Life lived and created content in their 02L Mansion. In 2015 popular users of Vine occupied an apartment at 1600 Vine Street in Los Angeles. The proximity of fellow content creators and the availability of emotional support from their peers have contributed to the popularity of collab houses. It is essential that a collab house has lots of natural light and privacy from fans and neighbors. Harper's Magazine, described collab houses as "grotesquely lavish abodes where teens and early twenty somethings live and work together, trying to achieve viral fame on a variety of media platforms" and attributed their rise in popularity to the COVID-19 pandemic when they "began to proliferate in impressive if not mind-boggling numbers, to the point where it became difficult for a casual observer even to keep track of them". The reporter stayed at the Clubhouse For the Boys in Los Angeles and felt that the management of the clubhouse "actually care[d] very little about the long-term fates of these kids. After all, there's a fungible supply of well-complected youngsters constantly streaming into Los Angeles. Only a very small percentage of these kids will actually make it in the industry; the rest of them, Amir [Ben-Yohanan] tells me, will eventually just "cycle through". The Clubhouse For the Boys in Los Angeles was based in a 7,000sq ft house valued at $8 million. The occupants of the house were expected to post three to five videos a week to social media accounts linked to the Clubhouse in exchange for free room and board. The house was owned by external investors who took up to 20% of the earnings of the occupants. The house had House Rules listed on a whiteboard, which included exhortations to refrain from drinking alcohol between Sunday and Thursday and to "finish brand deliverables before inviting guests". The popularity of collab houses arose at the same time as the burgeoning COVID-19 pandemic in the United States. 
The reporter felt that several articles in The New York Times about the collab houses had characterized their residents as "incorrigible Dionysians" as a result of the disparity between their lifestyle and the demands of the public health emergency. A January 2020 article in The New York Times described Los Angeles as "home to a land rush" of collab houses. Hype House, a collective of content creators, was set in a 'Spanish-style mansion perched at the top of a hill on a gated street' with 'a palatial backyard, a pool and enormous kitchen, dining and living quarters' and was home to four members of the group. Hype House was formed in December 2019; TikTok videos tagged #hypehouse had accrued 100 million views by January 2020. On April 22, 2021, Netflix announced that it was producing a reality television series entitled The Hype House, which is set at the content house of the same name. The Hype House is set to star various content creators such as Nikita Dragun, Lil Huddy (also known as Chase Hudson), and Thomas Petrou. Reception to the announcement on social media was mostly negative, with some Netflix subscribers threatening to cancel their subscriptions if the series was aired. Partial list of content houses Byte House Clubhouse BH Clubhouse Beverly Hills Clubhouse FTB Clubhouse For the Boys Drip House Myth Crib FaZe House Fenty Beauty House Girls in the Valley Hype House Not a Content House Sway House The House of Collab 'YouTuber' mansions The Vlog Squad house in Studio City Jake Paul's Team 10 in West Hollywood and Calabasas The Clout House in the Hollywood Hills References Social media TikTok Instagram YouTube Online media collectives Art venues History of the Internet Artist groups and collectives
Content house
Technology
1,058
2,296,159
https://en.wikipedia.org/wiki/Powder%20diffraction
Powder diffraction is a scientific technique using X-ray, neutron, or electron diffraction on powder or microcrystalline samples for structural characterization of materials. An instrument dedicated to performing such powder measurements is called a powder diffractometer. Powder diffraction stands in contrast to single crystal diffraction techniques, which work best with a single, well-ordered crystal. Explanation The most common type of powder diffraction is with X-rays, the focus of this article although some aspects of neutron powder diffraction are mentioned. (Powder electron diffraction is more complex due to dynamical diffraction and is not discussed further herein.) Typical diffractometers use electromagnetic radiation (waves) with known wavelength and frequency, which is determined by their source. The source is often X-rays, and neutrons are also common sources, with their frequency determined by their de Broglie wavelength. When these waves reach the sample, the incoming beam is either reflected off the surface, or can enter the lattice and be diffracted by the atoms present in the sample. If the atoms are arranged symmetrically with a separation distance d, these waves will interfere constructively only where the path-length difference 2d sin θ is equal to an integer multiple of the wavelength, producing a diffraction maximum in accordance with Bragg's law. These waves interfere destructively at points between the intersections where the waves are out of phase, and do not lead to bright spots in the diffraction pattern. Because the sample itself is acting as the diffraction grating, this spacing is the atomic spacing. The distinction between powder and single crystal diffraction is the degree of texturing in the sample. Single crystals have maximal texturing, and are said to be anisotropic. In contrast, in powder diffraction, every possible crystalline orientation is represented equally in a powdered sample, the isotropic case. Powder X-ray diffraction (PXRD) operates under the assumption that the sample is randomly arranged. Therefore, a statistically significant number of each plane of the crystal structure will be in the proper orientation to diffract the X-rays. Therefore, each plane will be represented in the signal. In practice, it is sometimes necessary to rotate the sample orientation to eliminate the effects of texturing and achieve true randomness. Mathematically, crystals can be described by a Bravais lattice with some regularity in the spacing between atoms. Because of this regularity, we can describe this structure in a different way using the reciprocal lattice, which is related to the original structure by a Fourier transform. This three-dimensional space can be described with reciprocal axes x*, y*, and z* or alternatively in spherical coordinates q, φ*, and χ*. In powder diffraction, intensity is homogeneous over φ* and χ*, and only q remains as an important measurable quantity. This is because orientational averaging causes the three-dimensional reciprocal space that is studied in single crystal diffraction to be projected onto a single dimension. When the scattered radiation is collected on a flat plate detector, the rotational averaging leads to smooth diffraction rings around the beam axis, rather than the discrete Laue spots observed in single crystal diffraction. 
The angle between the beam axis and the ring is called the scattering angle and in X-ray crystallography always denoted as 2θ (in scattering of visible light the convention is usually to call it θ). In accordance with Bragg's law, each ring corresponds to a particular reciprocal lattice vector G in the sample crystal. This leads to the definition of the scattering vector as: q = |G| = 4π sin(θ)/λ. In this equation, G is the reciprocal lattice vector, q is the length of the reciprocal lattice vector (the magnitude of the momentum transfer), θ is half of the scattering angle, and λ is the wavelength of the source. Powder diffraction data are usually presented as a diffractogram in which the diffracted intensity, I, is shown as a function either of the scattering angle 2θ or as a function of the scattering vector length q. The latter variable has the advantage that the diffractogram no longer depends on the value of the wavelength λ. The advent of synchrotron sources has widened the choice of wavelength considerably. To facilitate comparability of data obtained with different wavelengths, the use of q is therefore recommended and is gaining acceptance. Uses Relative to other methods of analysis, powder diffraction allows for rapid, non-destructive analysis of multi-component mixtures without the need for extensive sample preparation. This gives laboratories the ability to quickly analyze unknown materials and perform materials characterization in such fields as metallurgy, mineralogy, chemistry, forensic science, archeology, condensed matter physics, and the biological and pharmaceutical sciences. Identification is performed by comparison of the diffraction pattern to a known standard or to a database such as the International Centre for Diffraction Data's Powder Diffraction File (PDF) or the Cambridge Structural Database (CSD). Advances in hardware and software, particularly improved optics and fast detectors, have dramatically improved the analytical capability of the technique, especially relative to the speed of the analysis. The fundamental physics upon which the technique is based provides high precision and accuracy in the measurement of interplanar spacings, sometimes to fractions of an Ångström, resulting in authoritative identification frequently used in patents, criminal cases and other areas of law enforcement. The ability to analyze multiphase materials also allows analysis of how materials interact in a particular matrix such as a pharmaceutical tablet, a circuit board, a mechanical weld, a geologic core sampling, cement and concrete, or a pigment found in an historic painting. The method has been historically used for the identification and classification of minerals, but it can be used for nearly any material, even amorphous ones, so long as a suitable reference pattern is known or can be constructed. 
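A short numerical illustration of the relations above converts a measured scattering angle 2θ into the d-spacing and the scattering-vector length q. The Cu Kα wavelength used is a standard laboratory value and the peak position is an arbitrary example, not data from the text.

```python
# Convert a measured powder-diffraction angle 2-theta into the lattice
# d-spacing (Bragg's law) and the scattering-vector length q = 4*pi*sin(theta)/lambda.
# Wavelength is Cu K-alpha (~1.5406 Angstrom); the angle is an arbitrary example.
import math

wavelength = 1.5406          # Angstrom, Cu K-alpha_1
two_theta_deg = 31.7         # example peak position in degrees

theta = math.radians(two_theta_deg / 2)
d = wavelength / (2 * math.sin(theta))          # Bragg's law: lambda = 2 d sin(theta)
q = 4 * math.pi * math.sin(theta) / wavelength  # equivalently q = 2*pi/d

print(f"2theta = {two_theta_deg} deg  ->  d = {d:.3f} A,  q = {q:.3f} 1/A")
```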
Hanawalt, an analytical chemist who worked for Dow Chemical in the 1930s, was the first to realize the analytical potential of creating a database. Today it is represented by the Powder Diffraction File (PDF) of the International Centre for Diffraction Data (formerly Joint Committee for Powder Diffraction Studies). This has been made searchable by computer through the work of global software developers and equipment manufacturers. There are now over 1,047,661 reference materials in the 2021 Powder Diffraction File Databases, and these databases are interfaced to a wide variety of diffraction analysis software and distributed globally. The Powder Diffraction File contains many subfiles, such as minerals, metals and alloys, pharmaceuticals, forensics, excipients, superconductors, semiconductors, etc., with large collections of organic, organometallic and inorganic reference materials. Crystallinity In contrast to a crystalline pattern consisting of a series of sharp peaks, amorphous materials (liquids, glasses etc.) produce a broad background signal. Many polymers show semicrystalline behavior, i.e. part of the material forms an ordered crystallite by folding of the molecule. A single polymer molecule may well be folded into two different, adjacent crystallites and thus form a tie between the two. The tie part is prevented from crystallizing. The result is that the crystallinity will never reach 100%. Powder XRD can be used to determine the crystallinity by comparing the integrated intensity of the background pattern to that of the sharp peaks. Values obtained from powder XRD are typically comparable but not quite identical to those obtained from other methods such as DSC. Lattice parameters The position of a diffraction peak is independent of the atomic positions within the cell and entirely determined by the size and shape of the unit cell of the crystalline phase. Each peak represents a certain lattice plane and can therefore be characterized by a Miller index. If the symmetry is high, e.g.: cubic or hexagonal it is usually not too hard to identify the index of each peak, even for an unknown phase. This is particularly important in solid-state chemistry, where one is interested in finding and identifying new materials. Once a pattern has been indexed, this characterizes the reaction product and identifies it as a new solid phase. Indexing programs exist to deal with the harder cases, but if the unit cell is very large and the symmetry low (triclinic) success is not always guaranteed. Expansion tensors, bulk modulus Cell parameters are somewhat temperature and pressure dependent. Powder diffraction can be combined with in situ temperature and pressure control. As these thermodynamic variables are changed, the observed diffraction peaks will migrate continuously to indicate higher or lower lattice spacings as the unit cell distorts. This allows for measurement of such quantities as the thermal expansion tensor and the isothermal bulk modulus, as well determination of the full equation of state of the material. Phase transitions At some critical set of conditions, for example 0 °C for water at 1 atm, a new arrangement of atoms or molecules may become stable, leading to a phase transition. At this point new diffraction peaks will appear or old ones disappear according to the symmetry of the new phase. If the material melts to an isotropic liquid, all sharp lines will disappear and be replaced by a broad amorphous pattern. 
If the transition produces another crystalline phase, one set of lines will suddenly be replaced by another set. In some cases, however, lines will split or coalesce, e.g. if the material undergoes a continuous, second order phase transition. In such cases the symmetry may change because the existing structure is distorted rather than replaced by a completely different one. For example, the diffraction peaks for the lattice planes (100) and (001) can be found at two different values of q for a tetragonal phase, but if the symmetry becomes cubic the two peaks will come to coincide. Crystal structure refinement and determination Crystal structure determination from powder diffraction data is extremely challenging due to the overlap of reflections in a powder experiment. A number of different methods exist for structural determination, such as simulated annealing and charge flipping. The crystal structures of known materials can be refined, e.g. as a function of temperature or pressure, using the Rietveld method. The Rietveld method is a so-called full pattern analysis technique. A crystal structure, together with instrumental and microstructural information, is used to generate a theoretical diffraction pattern that can be compared to the observed data. A least squares procedure is then used to minimize the difference between the calculated pattern and each point of the observed pattern by adjusting model parameters. Techniques to determine unknown structures from powder data do exist, but are somewhat specialized. A number of programs that can be used in structure determination are TOPAS, Fox, DASH, GSAS-II, EXPO2004, and a few others. Size and strain broadening There are many factors that determine the width B of a diffraction peak. These include: instrumental factors the presence of defects in the perfect lattice differences in strain in different grains the size of the crystallites It is often possible to separate the effects of size and strain. Whereas size broadening is independent of q (K = 1/d), strain broadening increases with increasing q-values. In most cases there will be both size and strain broadening. It is possible to separate these by combining the two equations in what is known as the Hall–Williamson method: B cos θ = kλ/D + 4η sin θ. Thus, when we plot B cos θ vs. 4 sin θ we get a straight line with slope η and intercept kλ/D. The expression is a combination of the Scherrer equation for size broadening and the Stokes and Wilson expression for strain broadening. The value of η is the strain in the crystallites, the value of D represents the size of the crystallites. The constant k is typically close to unity and ranges from 0.8 to 1.39. Comparison of X-ray and neutron scattering X-ray photons scatter by interaction with the electron cloud of the material, whereas neutrons are scattered by the nuclei. This means that, in the presence of heavy atoms with many electrons, it may be difficult to detect light atoms by X-ray diffraction. In contrast, the neutron scattering lengths of most atoms are approximately equal in magnitude. Neutron diffraction techniques may therefore be used to detect light elements such as oxygen or hydrogen in combination with heavy atoms. The neutron diffraction technique therefore has obvious applications to problems such as determining oxygen displacements in materials like high temperature superconductors and ferroelectrics, or to hydrogen bonding in biological systems. 
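The Hall–Williamson separation just described amounts to a straight-line fit. The sketch below fits B cos θ against 4 sin θ with NumPy to recover an apparent crystallite size and microstrain; the peak positions and widths are invented illustrative numbers, not data from any real pattern.

```python
# Williamson-Hall sketch: fit B*cos(theta) = k*lambda/D + 4*eta*sin(theta)
# to recover crystallite size D and microstrain eta.  Peak positions and
# widths below are invented for illustration; B must be in radians and
# already corrected for instrumental broadening.
import numpy as np

wavelength = 1.5406          # Angstrom (Cu K-alpha)
k = 0.9                      # Scherrer constant, typically close to unity

two_theta_deg = np.array([28.4, 47.3, 56.1, 69.1])     # example peak positions (deg)
B_deg        = np.array([0.20, 0.25, 0.28, 0.33])      # example peak widths (deg)

theta = np.radians(two_theta_deg) / 2
B = np.radians(B_deg)

x = 4 * np.sin(theta)
y = B * np.cos(theta)

slope, intercept = np.polyfit(x, y, 1)   # straight-line fit
eta = slope                              # microstrain
D = k * wavelength / intercept           # crystallite size in Angstrom

print(f"strain eta ~ {eta:.2e}, size D ~ {D:.0f} Angstrom")
```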
A further complication in the case of neutron scattering from hydrogenous materials is the strong incoherent scattering of hydrogen (80.27(6) barn). This leads to a very high background in neutron diffraction experiments, and may make structural investigations impossible. A common solution is deuteration, i.e., replacing the 1-H atoms in the sample with deuterium (2-H). The incoherent scattering cross section of deuterium is much smaller (2.05(3) barn), making structural investigations significantly easier. However, in some systems, replacing hydrogen with deuterium may alter the structural and dynamic properties of interest. As neutrons also have a magnetic moment, they are additionally scattered by any magnetic moments in a sample. In the case of long range magnetic order, this leads to the appearance of new Bragg reflections. In most simple cases, powder diffraction may be used to determine the size of the moments and their spatial orientation. Aperiodically arranged clusters Predicting the scattered intensity in powder diffraction patterns from gases, liquids, and randomly distributed nano-clusters in the solid state is (to first order) done rather elegantly with the Debye scattering equation: I(q) = Σ(i=1..N) Σ(j=1..N) fi(q) fj(q) sin(q rij)/(q rij), where the magnitude of the scattering vector q is in reciprocal lattice distance units, N is the number of atoms, fi(q) is the atomic scattering factor for atom i and scattering vector q, while rij is the distance between atom i and atom j. One can also use this to predict the effect of nano-crystallite shape on detected diffraction peaks, even if in some directions the cluster is only one atom thick. Semi-quantitative analysis Semi-quantitative analysis of polycrystalline mixtures can be performed by using traditional single-peaks methods such as the Relative Intensity Ratio (RIR) or whole-pattern methods using Rietveld Refinement or the PONCKS (Partial Or No Known Crystal Structures) method. The use of each method depends on the knowledge of the analyzed system, given that, for instance, Rietveld refinement needs the solved crystal structure of each component of the mixture to be performed. In recent decades, multivariate analysis has begun to spread as an alternative method for phase quantification. Devices Cameras The simplest cameras for X-ray powder diffraction consist of a small capillary and either a flat plate detector (originally a piece of X-ray film, now more and more a flat-plate detector or a CCD-camera) or a cylindrical one (originally a piece of film in a cookie-jar, but increasingly bent position sensitive detectors are used). The two types of cameras are known as the Laue and the Debye–Scherrer camera. In order to ensure complete powder averaging, the capillary is usually spun around its axis. For neutron diffraction, vanadium cylinders are used as sample holders. Vanadium has a negligible absorption and coherent scattering cross section for neutrons and is hence nearly invisible in a powder diffraction experiment. Vanadium does however have a considerable incoherent scattering cross section which may cause problems for more sensitive techniques such as neutron inelastic scattering. A later development in X-ray cameras is the Guinier camera. It is built around a focusing bent crystal monochromator. The sample is usually placed in the focusing beam, e.g. as a dusting on a piece of sticky tape. A cylindrical piece of film (or electronic multichannel detector) is put on the focusing circle, but the incident beam is prevented from reaching the detector to prevent damage from its high intensity. 
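The Debye scattering equation above is straightforward to evaluate directly for a small cluster. The sketch below computes I(q) for a hypothetical four-atom cluster with a constant (q-independent) scattering factor, purely to illustrate the double sum; real atomic scattering factors depend on q and on the element.

```python
# Direct evaluation of the Debye scattering equation
#   I(q) = sum_i sum_j f_i(q) f_j(q) sin(q r_ij) / (q r_ij)
# for a small, aperiodic cluster of atoms.  The coordinates are an arbitrary
# four-atom example and the scattering factor is taken as a constant, which
# is a simplification (real f_i depend on q and element).
import numpy as np

positions = np.array([[0.0, 0.0, 0.0],     # atom coordinates in Angstrom
                      [1.5, 0.0, 0.0],
                      [0.0, 1.5, 0.0],
                      [0.0, 0.0, 1.5]])
f = 1.0                                     # constant scattering factor (simplification)

def debye_intensity(q, pos, f):
    n = len(pos)
    intensity = 0.0
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(pos[i] - pos[j])
            # sin(q r)/(q r) -> 1 in the limit r -> 0 (the i == j terms)
            sinc = 1.0 if r == 0 else np.sin(q * r) / (q * r)
            intensity += f * f * sinc
    return intensity

q_values = np.linspace(0.5, 8.0, 4)         # scattering vector lengths in 1/Angstrom
for q in q_values:
    print(f"q = {q:.2f} 1/A  ->  I(q) = {debye_intensity(q, positions, f):.3f}")
```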
Cameras based on hybrid photon counting technology, such as the PILATUS detector, are widely used in applications where high data acquisition speeds and increased data quality are required. Diffractometers Diffractometers can be operated both in transmission and reflection, but reflection is more common. The powder sample is loaded in a small disc-like container and its surface carefully flattened. The disc is put on one axis of the diffractometer and tilted by an angle θ while a detector (scintillation counter) rotates around it on an arm at twice this angle. This configuration is known under the name Bragg–Brentano θ-2θ. Another configuration is the Bragg–Brentano θ-θ configuration in which the sample is stationary while the X-ray tube and the detector are rotated around it. The angle formed between the X-ray source and the detector is 2θ. This configuration is most convenient for loose powders. Diffractometer settings for different experiments can schematically be illustrated by a hemisphere, in which the powder sample resides in the origin. The case of recording a pattern in the Bragg-Brentano θ-θ mode is shown in the figure, where K0 and K stand for the wave vectors of the incoming and diffracted beam that both make up the scattering plane. Various other settings for texture or stress/strain measurements can also be visualized with this graphical approach. Position-sensitive detectors (PSD) and area detectors, which allow collection from multiple angles at once, are becoming more popular on currently supplied instrumentation. Neutron diffraction Sources that produce a neutron beam of suitable intensity and speed for diffraction are only available at a small number of research reactors and spallation sources in the world. Angle dispersive (fixed wavelength) instruments typically have a battery of individual detectors arranged in a cylindrical fashion around the sample holder, and can therefore collect scattered intensity simultaneously on a large 2θ range. Time of flight instruments normally have a small range of banks at different scattering angles which collect data at varying resolutions. X-ray tubes Laboratory X-ray diffraction equipment relies on the use of an X-ray tube, which is used to produce the X-rays. The most commonly used laboratory X-ray tube uses a copper anode, but cobalt and molybdenum are also popular. The wavelength in nm varies for each source. The table below shows these wavelengths, determined by Bearden (all values in nm): According to the last re-examination of Hölzer et al. (1997), and quoted in the International Tables for Crystallography these values are respectively: Other sources In-house applications of X-ray diffraction has always been limited to the relatively few wavelengths shown in the table above. The available choice was much needed because the combination of certain wavelengths and certain elements present in a sample can lead to strong fluorescence which increases the background in the diffraction pattern. A notorious example is the presence of iron in a sample when using copper radiation. In general elements just below the anode element in the period system need to be avoided. Another limitation is that the intensity of traditional generators is relatively low, requiring lengthy exposure times and precluding any time dependent measurement. The advent of synchrotron sources has drastically changed this picture and caused powder diffraction methods to enter a whole new phase of development. 
Not only is there a much wider choice of wavelengths available, the high brilliance of the synchrotron radiation makes it possible to observe changes in the pattern during chemical reactions, temperature ramps, changes in pressure and the like. The tunability of the wavelength also makes it possible to observe anomalous scattering effects when the wavelength is chosen close to the absorption edge of one of the elements of the sample. Neutron diffraction has never been an in house technique because it requires the availability of an intense neutron beam only available at a nuclear reactor or spallation source. Typically the available neutron flux, and the weak interaction between neutrons and matter, require relative large samples. Advantages and disadvantages Although it is possible to solve crystal structures from powder X-ray data alone, its single crystal analogue is a far more powerful technique for structure determination. This is directly related to the fact that information is lost by the collapse of the 3D space onto a 1D axis. Nevertheless, powder X-ray diffraction is a powerful and useful technique in its own right. It is mostly used to characterize and identify phases, and to refine details of an already known structure, rather than solving unknown structures. Advantages of the technique are: simplicity of sample preparation rapidity of measurement the ability to analyze mixed phases, e.g. soil samples "in situ" structure determination By contrast growth and mounting of large single crystals is notoriously difficult. In fact there are many materials for which, despite many attempts, it has not proven possible to obtain single crystals. Many materials are readily available with sufficient microcrystallinity for powder diffraction, or samples may be easily ground from larger crystals. In the field of solid-state chemistry that often aims at synthesizing new materials, single crystals thereof are typically not immediately available. Powder diffraction is therefore one of the most powerful methods to identify and characterize new materials in this field. Particularly for neutron diffraction, which requires larger samples than X-ray diffraction due to a relatively weak scattering cross section, the ability to use large samples can be critical, although newer and more brilliant neutron sources are being built that may change this picture. Since all possible crystal orientations are measured simultaneously, collection times can be quite short even for small and weakly scattering samples. This is not merely convenient, but can be essential for samples which are unstable either inherently or under X-ray or neutron bombardment, or for time-resolved studies. For the latter it is desirable to have a strong radiation source. The advent of synchrotron radiation and modern neutron sources has therefore done much to revitalize the powder diffraction field because it is now possible to study temperature dependent changes, reaction kinetics and so forth by means of time-resolved powder diffraction. 
See also Bragg diffraction Condensed matter physics Crystallographic database Crystallography Diffractometer Electron crystallography Electron diffraction Materials science Metallurgy Neutron diffraction Pair distribution function Solid state chemistry Texture (crystalline) Ultrafast x-ray X-ray crystallography X-ray scattering techniques References Further reading External links International Centre for Diffraction Data Powder Diffraction on the Web Diffraction Neutron-related techniques Synchrotron-related techniques Diffraction
Powder diffraction
Physics,Chemistry,Materials_science
4,824
29,427,482
https://en.wikipedia.org/wiki/Gunter%20Faure
Gunter Faure is a geochemist who currently holds the position of professor emeritus in the school of earth science of Ohio State University. He obtained his PhD from the Massachusetts Institute of Technology in 1961. Books Introduction to Planetary Science: The Geological Perspective, Gunter Faure and Teresa M. Mensing, Springer, 2007, 526 pp. Isotopes: Principles and Applications, Gunter Faure and Teresa M. Mensing, Wiley; 3rd edition, 2005. Origin of Igneous Rocks: The Isotopic Evidence, Gunter Faure, Springer, 2000, 496 pp. (2010 reprint ) Principles and Applications of Geochemistry, Gunter Faure, Prentice Hall, 1998, 2nd Ed., 625 pp. Principles and Applications of Inorganic Geochemistry, Gunter Faure, Macmillan, 1991, 500pp. The Transantarctic Mountains: Rocks, Ice, Meteorites and Water, Gunter Faure and Teresa M. Mensing, Springer, 2010, 804 pp. References American geochemists Living people Year of birth missing (living people)
Gunter Faure
Chemistry
214
43,523,948
https://en.wikipedia.org/wiki/NGC%20518
NGC 518 is a spiral galaxy located in the Pisces constellation. It was discovered by Albert Marth on 17 December 1864. See also Spiral galaxy List of NGC objects (1–1000) Pisces (constellation) References External links Deep Sky Browser - NGC518 Aladin previewer - image 0518 Spiral galaxies Pisces (constellation) 18641217 Discoveries by Albert Marth 00952 005161 +01-04-049
NGC 518
Astronomy
96
4,117,029
https://en.wikipedia.org/wiki/NGC%20246
NGC 246 (also known as the Skull Nebula or Caldwell 56) is a planetary nebula in the constellation Cetus. It was discovered in 1785 by William Herschel. The nebula and the stars associated with it are listed in several catalogs, as summarized by the SIMBAD database. The nebula is roughly away. NGC 246's central star is the 12th magnitude white dwarf HIP 3678 A, which has a comoving companion star, HIP 3678 B. In 2014, astronomers discovered a second companion, a red dwarf known as HIP 3678 C, using the European Southern Observatory's Very Large Telescope. This makes NGC 246 the first planetary nebula known to have a hierarchical triple star system at its center. NGC 246 is not to be confused with the Rosette Nebula (NGC 2237), which is also referred to as the "Skull." Among some amateur astronomers, NGC 246 is known as the "Pac-Man Nebula" because of the arrangement of its central stars and the surrounding star field. Image gallery References External links Planetary nebulae Cetus 0246 056b 17841127 Discoveries by William Herschel 3678 00445-1207
NGC 246
Astronomy
252
47,090,707
https://en.wikipedia.org/wiki/Sylvester%20graph
The Sylvester graph is the unique distance-regular graph with intersection array . It is a subgraph of the Hoffman–Singleton graph. References External links A.E. Brouwer's website: the Sylvester graph Individual graphs Regular graphs
Sylvester graph
Mathematics
49
24,279,120
https://en.wikipedia.org/wiki/Fresnel%20imager
A Fresnel imager is a proposed ultra-lightweight design for a space telescope that uses a Fresnel array as primary optics instead of a typical lens. It focuses light with a thin opaque foil sheet punched with specially shaped holes, thus focusing light on a certain point by using the phenomenon of diffraction. Such patterned sheets, called Fresnel zone plates, have long been used for focusing laser beams, but have so far not been used for astronomy. No optical material is involved in the focusing process as in traditional telescopes. Rather, the light collected by the Fresnel array is concentrated on smaller classical optics (e.g. 1/20 of the array size), to form a final image. The long focal lengths of the Fresnel imager (a few kilometers) require operation by two-vessel formation flying in space at the L2 Sun-Earth Lagrangian point. In this two-spacecraft formation-flying instrument, one spacecraft holds the focussing element: the Fresnel interferometric array; the other spacecraft holds the field optics, focal instrumentation, and detectors. Advantages A Fresnel imager with a sheet of a given size has vision just as sharp as a traditional telescope with a mirror of the same size, though it collects about 10% of the light. The use of vacuum for the individual subapertures eliminates phase defects and spectral limitations, which would result from the use of a transparent or reflective material. It can observe in the ultraviolet and infrared, in addition to visible light. It achieves images of high contrast, enabling observation of a very faint object in the close vicinity of a bright one. Since it is constructed using foil instead of mirrors, it is expected to be more lightweight, and therefore less expensive to launch, than a traditional telescope. A 30-metre Fresnel imager would be powerful enough to see Earth-sized planets within 30 light years of Earth, and measure the planets' light spectrum to look for signs of life, such as atmospheric oxygen. The Fresnel imager could also measure the properties of very young galaxies in the distant universe and take detailed images of objects in the Solar System. Development The concept has been successfully tested in the visible, and awaits testing in the UV. An international interest group is being formed, with specialists of the different science cases. A proposal for a 2025-2030 mission has been submitted to ESA Cosmic Vision call. In 2008 Laurent Koechlin of the Observatoire Midi-Pyrénées in Toulouse, France, and his team planned to construct a small ground-based Fresnel imager telescope by attaching a 20-centimetre patterned sheet to a telescope mount. Koechlin and his team completed the ground-based prototype in 2012. It uses a piece of copper foil 20 cm square with 696 concentric rings as the zone plate. Its focal length is 18 metres. They were able to resolve the moons of Mars from the parent planet with it. 
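The resolution claim above (an aperture of a given size giving the same sharpness as a mirror of that size) can be checked with the usual diffraction-limit estimate. The sketch below compares the diffraction-limited angular resolution of a 30-metre aperture with the star-planet separation of an Earth analogue 30 light years away; the visible wavelength and the 1.22 λ/D Rayleigh criterion are standard assumptions, not figures from the text.

```python
# Rough check of the resolution argument: diffraction-limited angular
# resolution of a 30 m aperture (Rayleigh criterion, theta ~ 1.22 * lambda / D)
# versus the angular separation of an Earth-Sun analogue at 30 light years.
# The visible wavelength and the criterion itself are standard assumptions.
import math

wavelength = 550e-9                 # m, visible light
aperture = 30.0                     # m, proposed Fresnel array size
theta = 1.22 * wavelength / aperture             # radians

ly = 9.4607e15                      # metres per light year
au = 1.496e11                       # metres per astronomical unit
separation = au / (30 * ly)                      # star-planet angle in radians

rad_to_mas = math.degrees(1) * 3600e3            # radians -> milliarcseconds
print(f"diffraction limit  ~ {theta * rad_to_mas:.1f} mas")
print(f"Earth-Sun at 30 ly ~ {separation * rad_to_mas:.1f} mas")
```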
See also Augustin-Jean Fresnel Diffraction Fresnel diffraction Fresnel lens Fresnel number Zone plate Photon sieve References Further reading http://www.ast.obs-mip.fr/users/lkoechli/w3/publisenligne/PropalFresnel-CosmicVision_20070706.pdf The Fresnel Interferometric Imager, Proposal to ESA Cosmic Vision 2007 http://www.ast.obs-mip.fr/users/lkoechli/w3/FresnelArraysPosterA4V3.pdf Fresnel interferometric Arrays as Imaging interferometers, L.Koechlin, D.Serre, P.Deba, D.Massonnet http://www.ast.obs-mip.fr/users/lkoechli/w3/publisenligne/aa2880-05.pdf High resolution imaging with Fresnel interferometric arrays: suitability for exoplanet detection, L. Koechlin, D. Serre, and P. Duchon http://www.ast.obs-mip.fr/users/lkoechli/w3/publisenligne/papierFresnelV1.pdf Imageur de Fresnel pour observations à haute Résolution Angulaire et haute dynamique, L.Koechlin, D.Serre, P.Deba European Space Agency Space telescopes Ultraviolet telescopes Interferometric telescopes
Fresnel imager
Astronomy
949
19,760,932
https://en.wikipedia.org/wiki/Sprengel%20pump
The Sprengel pump is a vacuum pump that uses drops of mercury falling through a small-bore capillary tube to trap air from the system to be evacuated. It was invented by Hanover-born chemist Hermann Sprengel in 1865 while he was working in London. The pump created the highest vacuum achievable at that time, less than 1 μPa (approximately 1×10−11 atm). Operation The supply of mercury is contained in the reservoir on the left. It flows over into the bulb B, where it forms drops which fall into the long tube on the right. These drops entrap between them the air in B. The mercury which runs out is collected and poured back into reservoir on the left. In this manner practically all the air can be removed from the bulb B, and hence from any vessel R, which may be connected with B. At M is a manometer which indicates the pressure in the vessel R, which is being exhausted. Falling mercury drops compress the air to atmospheric pressure which is released when the stream reaches a container at the bottom of the tube. As the pressure drops, the cushioning effect of trapped air between the droplets diminishes, so a hammering or knocking sound can be heard, accompanied by flashes of light within the evacuated vessel due to electrostatic effects on the mercury. The speed, simplicity and efficiency of the Sprengel pump made it a popular device with experimenters. Sprengel's earliest model could evacuate a half litre vessel in 20 minutes. Applications William Crookes used the pumps in series in his studies of electric discharges. William Ramsay used them to isolate the noble gases, and Joseph Swan and Thomas Edison used them to evacuate their new carbon filament lamps. The Sprengel pump was the key tool which made it possible in 1879 to sufficiently exhaust the air from a light bulb so a carbon filament incandescent electric light bulb lasted long enough to be commercially practical. Sprengel himself moved on to investigating explosives and was eventually elected as a Fellow of the Royal Society. Notes References Further reading Thompson, Silvanus Phillips, The Development of the Mercurial Air-pump (London, England: E. & F. N. Spon, 1888) pages 14–15. Vacuum pumps
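The pump-down behaviour described above can be illustrated with a simple toy model: if each falling drop seals off and carries away a small tube volume v of gas that was at the pressure currently in bulb B, the pressure falls geometrically with the number of drops. The volumes and drop rate below are invented purely for illustration and are not taken from Sprengel's own figures.

```python
def sprengel_pumpdown(p0_pa, bulb_volume_l, trapped_volume_l, drops):
    """Geometric pump-down: each drop removes the gas filling `trapped_volume_l`
    at the current pressure, so p -> p * V / (V + v) per drop."""
    factor = bulb_volume_l / (bulb_volume_l + trapped_volume_l)
    return p0_pa * factor ** drops

# Assumed: a 0.5 L vessel, 2 mL of fall tube sealed off per drop, one drop
# per second for 20 minutes.
p = sprengel_pumpdown(p0_pa=101_325, bulb_volume_l=0.5,
                      trapped_volume_l=0.002, drops=20 * 60)
print(f"pressure after 20 minutes ≈ {p:.0f} Pa")   # about 840 Pa, roughly a hundredfold drop
```

Under this model the pressure falls by a constant factor per drop, which is consistent with the description above that practically all the air in bulb B, and hence in the attached vessel, can eventually be removed.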
Sprengel pump
Physics,Engineering
465
51,346,297
https://en.wikipedia.org/wiki/Butterfleye
Butterfleye is an American maker of security alarm systems. The company is known for its wireless learning camera technology that prevents false alarms. Butterfleye was founded in 2013 by an engineer named Ben Nader, and is headquartered in San Francisco, California. On December 20, 2017, Butterfleye was acquired by Ooma. On September 22, 2021, Ooma sent the following message to Butterfleye users: "We're sorry to inform you that the operation, maintenance and support of Ooma Butterfleye and Smart Cam security cameras will be ending. As such, the last day of operation of your camera(s) will be October 22, 2021. After this day, your camera(s) will stop working and will no longer record videos. Any videos you currently access through the Smart Cam App will become unavailable." History After losing several bicycles in a series of break-ins at his apartment, Nader—then an engineer at Texas Instruments—tried to install his own security alarm system. Dissatisfied with existing products that took him three days to install, Nader set out to create an alternative. Nader was initially funded by angel investing, raising $1.6 million. In August 2015, he sought backing through the crowdfunding site Indiegogo, setting a goal of $100,000. When the campaign finished in October, the company had raised more than six times its initial goal. Cameras Nader described Butterfleye as "Dropcam meets Nest," a combination of streaming cameras and home automation technology. Butterfleye's cameras are wireless with an estimated two-week battery life, allowing them to continue functioning in case of power interruptions. Butterfleye's cameras are distinguished by their facial recognition technology, which prevents false alarms, and their ability to learn what not to record (for example, footage of the home's residents themselves). The camera also includes iBeacon technology, allowing it to recognize certain smartphones, also avoiding false alarms; a thermal imaging sensor; and audio recognition technology, through which the camera can learn to recognize sounds such as children crying or glass breaking. The system sorts and labels archived footage on the basis of these audio and visual cues. Each camera uploads using AES 128-bit encryption, and has 12 hours of internal storage in the event of wireless network or Internet connection failure. Owners can watch the camera feed live or archived through an app on Android (in development) or iOS, and can be notified through the app when a person or pet is detected by a given camera. Owners can also activate a camera remotely, in which case a light on the camera signals to those nearby that it is recording. The cameras also feature two-way audio that allows the owner to speak through the camera to communicate with family members or pets. References External links Butterfleye Webpage Alarms Security companies of the United States
Butterfleye
Technology
584
1,977,279
https://en.wikipedia.org/wiki/Optical%20resolution
Optical resolution describes the ability of an imaging system to resolve detail in the object that is being imaged. An imaging system may have many individual components, including one or more lenses, and/or recording and display components. Each of these contributes (given suitable design, and adequate alignment) to the optical resolution of the system; the environment in which the imaging is done often is a further important factor. Lateral resolution Resolution depends on the distance between two distinguishable radiating points. The sections below describe the theoretical estimates of resolution, but the real values may differ. The results below are based on mathematical models of Airy discs, which assume an adequate level of contrast. In low-contrast systems, the resolution may be much lower than predicted by the theory outlined below. Real optical systems are complex, and practical difficulties often increase the distance between distinguishable point sources. The resolution of a system is based on the minimum distance at which the points can be distinguished as individuals. Several standards are used to determine, quantitatively, whether or not the points can be distinguished. One of the methods specifies that, on the line between the center of one point and the next, the contrast between the maximum and minimum intensity be at least 26% lower than the maximum. This corresponds to the overlap of one Airy disk on the first dark ring in the other. This standard for separation is also known as the Rayleigh criterion. In symbols, the distance is defined as follows: d = 0.61 λ / (n sin θ) = 0.61 λ / NA, where d is the minimum distance between resolvable points, in the same units in which λ is specified; λ is the wavelength of light (the emission wavelength, in the case of fluorescence); n is the index of refraction of the media surrounding the radiating points; θ is the half angle of the pencil of light that enters the objective; and NA is the numerical aperture. This formula is suitable for confocal microscopy, but is also used in traditional microscopy. In confocal laser-scanned microscopes, the full-width half-maximum (FWHM) of the point spread function is often used to avoid the difficulty of measuring the Airy disc. This, combined with the rastered illumination pattern, results in better resolution, but it is still proportional to the Rayleigh-based formula given above. Also common in the microscopy literature is a formula for resolution that treats the above-mentioned concerns about contrast differently. The resolution predicted by this formula is proportional to the Rayleigh-based formula, differing by about 20%. For estimating theoretical resolution, it may be adequate. When a condenser is used to illuminate the sample, the shape of the pencil of light emanating from the condenser must also be included. In a properly configured microscope, the numerical aperture of the condenser is matched to that of the objective. The above estimates of resolution are specific to the case in which two identical, very small samples radiate incoherently in all directions. Other considerations must be taken into account if the sources radiate at different levels of intensity, are coherent, large, or radiate in non-uniform patterns. Lens resolution The ability of a lens to resolve detail is usually determined by the quality of the lens, but is ultimately limited by diffraction.
Light coming from a point source in the object diffracts through the lens aperture such that it forms a diffraction pattern in the image, which has a central spot and surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The angular radius of the Airy disk (measured from the center to the first null) is given by: θ ≈ 1.22 λ / D, where θ is the angular resolution in radians, λ is the wavelength of light in meters, and D is the diameter of the lens aperture in meters. Two adjacent points in the object give rise to two diffraction patterns. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius to first null can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the greater the resolution. Astronomical telescopes have increasingly large lenses so they can 'see' ever finer detail in the stars. Only the very highest quality lenses have diffraction-limited resolution, however, and normally the quality of the lens limits its ability to resolve detail. This ability is expressed by the Optical Transfer Function which describes the spatial (angular) variation of the light signal as a function of spatial (angular) frequency. When the image is projected onto a flat plane, such as photographic film or a solid state detector, spatial frequency is the preferred domain, but when the image is referred to the lens alone, angular frequency is preferred. OTF may be broken down into the magnitude and phase components as follows: OTF(ξ, η) = MTF(ξ, η) · PTF(ξ, η), where ξ and η are spatial frequency in the x- and y-plane, respectively. The OTF accounts for aberration, which the limiting frequency expression above does not. The magnitude is known as the Modulation Transfer Function (MTF) and the phase portion is known as the Phase Transfer Function (PTF). In imaging systems, the phase component is typically not captured by the sensor. Thus, the important measure with respect to imaging systems is the MTF. Phase is critically important to adaptive optics and holographic systems. Sensor resolution (spatial) Some optical sensors are designed to detect spatial differences in electromagnetic energy. These include photographic film, solid-state devices (CCD, CMOS sensors, and infrared detectors like PtSi and InSb), tube detectors (vidicon, plumbicon, and photomultiplier tubes used in night-vision devices), scanning detectors (mainly used for IR), pyroelectric detectors, and microbolometer detectors. The ability of such a detector to resolve those differences depends mostly on the size of the detecting elements. Spatial resolution is typically expressed in line pairs per millimeter (lppmm), lines (of resolution, mostly for analog video), contrast vs. cycles/mm, or MTF (the modulus of OTF). The MTF may be found by taking the two-dimensional Fourier transform of the spatial sampling function. Smaller pixels result in wider MTF curves and thus better detection of higher frequency energy.
This is analogous to taking the Fourier transform of a signal sampling function; as in that case, the dominant factor is the sampling period, which is analogous to the size of the picture element (pixel). Other factors include pixel noise, pixel cross-talk, substrate penetration, and fill factor. A common problem among non-technicians is the use of the number of pixels on the detector to describe the resolution. If all sensors were the same size, this would be acceptable. Since they are not, the use of the number of pixels can be misleading. For example, a 2-megapixel camera of 20-micrometre-square pixels will have worse resolution than a 1-megapixel camera with 8-micrometre pixels, all else being equal. For resolution measurement, film manufacturers typically publish a plot of Response (%) vs. Spatial Frequency (cycles per millimeter). The plot is derived experimentally. Solid state sensor and camera manufacturers normally publish specifications from which the user may derive a theoretical MTF according to the procedure outlined below. A few may also publish MTF curves, while others (especially intensifier manufacturers) will publish the response (%) at the Nyquist frequency, or, alternatively, publish the frequency at which the response is 50%. To find a theoretical MTF curve for a sensor, it is necessary to know three characteristics of the sensor: the active sensing area, the area comprising the sensing area and the interconnection and support structures ("real estate"), and the total number of those areas (the pixel count). The total pixel count is almost always given. Sometimes the overall sensor dimensions are given, from which the real estate area can be calculated. Whether the real estate area is given or derived, if the active pixel area is not given, it may be derived from the real estate area and the fill factor, where fill factor is the ratio of the active area to the dedicated real estate area: FF = (a × b) / (c × d), where the active area of the pixel has dimensions a×b and the pixel real estate has dimensions c×d. In Gaskill's notation, the sensing area is a 2D comb(x, y) function of the distance between pixels (the pitch), convolved with a 2D rect(x, y) function of the active area of the pixel, bounded by a 2D rect(x, y) function of the overall sensor dimension. The Fourier transform of this is a function governed by the distance between pixels, convolved with a function governed by the number of pixels, and multiplied by the function corresponding to the active area. That last function serves as an overall envelope to the MTF function; so long as the number of pixels is much greater than one, then the active area size dominates the MTF. Sampling function: S(x, y) = rect(x/(M·c), y/(N·d)) · [comb(x/c, y/d) ∗ rect(x/a, y/b)], where the sensor has M×N pixels. Sensor resolution (temporal) An imaging system running at 24 frames per second is essentially a discrete sampling system that samples a 2D area. The same limitations described by Nyquist apply to this system as to any signal sampling system. All sensors have a characteristic time response. Film is limited at both the short-exposure and the long-exposure extremes by reciprocity breakdown. These are typically held to be anything longer than 1 second and shorter than 1/10,000 second. Furthermore, film requires a mechanical system to advance it through the exposure mechanism, or a moving optical system to expose it. These limit the speed at which successive frames may be exposed. CCD and CMOS are the modern preferences for video sensors.
CCD is speed-limited by the rate at which the charge can be moved from one site to another. CMOS has the advantage of having individually addressable cells, and this has led to its advantage in the high speed photography industry. Vidicons, Plumbicons, and image intensifiers have specific applications. The speed at which they can be sampled depends upon the decay rate of the phosphor used. For example, the P46 phosphor has a decay time of less than 2 microseconds, while the P43 decay time is on the order of 2-3 milliseconds. The P43 is therefore unusable at frame rates above 1000 frames per second (frame/s). See for links to phosphor information. Pyroelectric detectors respond to changes in temperature. Therefore, a static scene will not be detected, so they require choppers. They also have a decay time, so the pyroelectric system temporal response will be a bandpass, while the other detectors discussed will be a lowpass. If objects within the scene are in motion relative to the imaging system, the resulting motion blur will result in lower spatial resolution. Short integration times will minimize the blur, but integration times are limited by sensor sensitivity. Furthermore, motion between frames in motion pictures will impact digital movie compression schemes (e.g. MPEG-1, MPEG-2). Finally, there are sampling schemes that require real or apparent motion inside the camera (scanning mirrors, rolling shutters) that may result in incorrect rendering of image motion. Therefore, sensor sensitivity and other time-related factors will have a direct impact on spatial resolution. Analog bandwidth effect on resolution The spatial resolution of digital systems (e.g. HDTV and VGA) is fixed independently of the analog bandwidth because each pixel is digitized, transmitted, and stored as a discrete value. Digital cameras, recorders, and displays must be selected so that the resolution is identical from camera to display. However, in analog systems, the resolution of the camera, recorder, cabling, amplifiers, transmitters, receivers, and display may all be independent and the overall system resolution is governed by the bandwidth of the lowest performing component. In analog systems, each horizontal line is transmitted as a high-frequency analog signal. Each picture element (pixel) is therefore converted to an analog electrical value (voltage), and changes in values between pixels therefore become changes in voltage. The transmission standards require that the sampling be done in a fixed time (outlined below), so more pixels per line becomes a requirement for more voltage changes per unit time, i.e. higher frequency. Since such signals are typically band-limited by cables, amplifiers, recorders, transmitters, and receivers, the band-limitation on the analog signal acts as an effective low-pass filter on the spatial resolution. The difference in resolutions between VHS (240 discernible lines per scanline), Betamax (280 lines), and the newer ED Beta format (500 lines) is explained primarily by the difference in the recording bandwidth. In the NTSC transmission standard, each field contains 262.5 lines, and 59.94 fields are transmitted every second. Each line must therefore take 63 microseconds, 10.7 of which are for reset to the next line. Thus, the retrace rate is 15.734 kHz. For the picture to appear to have approximately the same horizontal and vertical resolution (see Kell factor), it should be able to display 228 cycles per line, requiring a bandwidth of 4.28 MHz.
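The NTSC line-rate and bandwidth figures just quoted follow from a few lines of arithmetic, reproduced below as a plain Python sketch. The 228 cycles-per-line value is taken from the text rather than re-derived from the Kell factor, and small differences in how the blanking interval is rounded explain why the result lands near, but not exactly on, the 4.28 MHz quoted above.

```python
fields_per_second = 59.94
lines_per_field = 262.5

line_rate_hz = fields_per_second * lines_per_field       # ≈ 15,734 lines per second
line_time_us = 1e6 / line_rate_hz                         # ≈ 63.6 µs per line
retrace_us = 10.7                                         # horizontal blanking/reset
active_us = line_time_us - retrace_us                     # time available for picture content

cycles_per_line = 228                                     # from the Kell-factor argument above
bandwidth_mhz = cycles_per_line / active_us               # cycles per microsecond = MHz

print(f"line rate ≈ {line_rate_hz / 1e3:.3f} kHz, line time ≈ {line_time_us:.1f} µs")
print(f"required bandwidth ≈ {bandwidth_mhz:.2f} MHz")    # ≈ 4.3 MHz
```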
If the line (sensor) width is known, this may be converted directly into cycles per millimeter, the unit of spatial resolution. B/G/I/K television system signals (usually used with PAL colour encoding) transmit frames less often (50 Hz), but the frame contains more lines and is wider, so bandwidth requirements are similar. Note that a "discernible line" forms one half of a cycle (a cycle requires a dark and a light line), so "228 cycles" and "456 lines" are equivalent measures. System resolution There are two methods by which to determine "system resolution" (in the sense that omits the eye, or other final reception of the optical information). The first is to perform a series of two-dimensional convolutions, first with the image and the lens, and then, with that procedure's result and a sensor (and so on through all of the components of the system). Not only is this computationally expensive, but normally it also requires repetition of the process, for each additional object that is to be imaged. The other method is to transform each of the components of the system into the spatial frequency domain, and then to multiply the 2-D results. A system response may be determined without reference to an object. Although this method is considerably more difficult to comprehend conceptually, it becomes easier to use computationally, especially when different design iterations or imaged objects are to be tested. The transformation to be used is the Fourier transform. Ocular resolution The human eye is a limiting feature of many systems, when the goal of the system is to present data to humans for processing. For example, in a security or air traffic control function, the display and work station must be constructed so that average humans can detect problems and direct corrective measures. Other examples are when a human is using eyes to carry out a critical task such as flying (piloting by visual reference), driving a vehicle, and so forth. The best visual acuity of the human eye at its optical centre (the fovea) is less than 1 arc minute per line pair, reducing rapidly away from the fovea. The human brain requires more than just a line pair to understand what the eye is imaging. Johnson's criteria defines the number of line pairs of ocular resolution, or sensor resolution, needed to recognize or identify an item. Atmospheric resolution Systems looking through long atmospheric paths may be limited by turbulence. A key measure of the quality of atmospheric turbulence is the seeing diameter, also known as Fried's seeing diameter. A path which is temporally coherent is known as an isoplanatic patch. Large apertures may suffer from aperture averaging, the result of several paths being integrated into one image. Turbulence scales with wavelength at approximately a 6/5 power. Thus, seeing is better at infrared wavelengths than at visible wavelengths. Short exposures suffer from turbulence less than longer exposures due to the "inner" and "outer" scale turbulence; short is considered to be much less than 10 ms for visible imaging (typically, anything less than 2 ms). Inner scale turbulence arises due to the eddies in the turbulent flow, while outer scale turbulence arises from large air mass flow. These masses typically move slowly, and so are reduced by decreasing the integration period. A system limited only by the quality of the optics is said to be diffraction-limited. 
However, since atmospheric turbulence is normally the limiting factor for visible systems looking through long atmospheric paths, most systems are turbulence-limited. Corrections can be made by using adaptive optics or post-processing techniques. The turbulence MTF is commonly modelled (following Fried) as MTF(ν) = exp{−3.44 (λ f ν / r0)^(5/3) [1 − b (λ f ν / D)^(1/3)]}, where ν is the spatial frequency, λ is the wavelength, f is the focal length, D is the aperture diameter, b is a constant (1 for far-field propagation), and r0 is Fried's seeing diameter. Measuring optical resolution A variety of measurement systems are available, and use may depend upon the system being tested. Typical test charts for Contrast Transfer Function (CTF) consist of repeated bar patterns (see Discussion below). The limiting resolution is measured by determining the smallest group of bars, both vertically and horizontally, for which the correct number of bars can be seen. By calculating the contrast between the black and white areas at several different frequencies, however, points of the CTF can be determined with the contrast equation: Contrast = (Cmax − Cmin) / (Cmax + Cmin), where Cmax is the normalized value of the maximum (for example, the voltage or grey value of the white area) and Cmin is the normalized value of the minimum (for example, the voltage or grey value of the black area). When the system can no longer resolve the bars, the black and white areas have the same value, so Contrast = 0. At very low spatial frequencies, Cmax = 1 and Cmin = 0 so Modulation = 1. Some modulation may be seen above the limiting resolution; these may be aliased and phase-reversed. When using other methods, including the interferogram, sinusoid, and the edge in the ISO 12233 target, it is possible to compute the entire MTF curve. The response to the edge is similar to a step response, and the Fourier Transform of the first difference of the step response yields the MTF. Interferogram An interferogram created between two coherent light sources may be used for at least two resolution-related purposes. The first is to determine the quality of a lens system (see LUPI), and the second is to project a pattern onto a sensor (especially photographic film) to measure resolution. NBS 1010a/ ISO #2 target This 5 bar resolution test chart is often used for evaluation of microfilm systems and scanners. It is convenient for a 1:1 range (typically covering 1-18 cycles/mm) and is marked directly in cycles/mm. Details can be found in ISO-3334. USAF 1951 target The USAF 1951 resolution test target consists of a pattern of 3 bar targets. Often found covering a range of 0.25 to 228 cycles/mm. Each group consists of six elements. The group is designated by a group number (-2, -1, 0, 1, 2, etc.) which is the power to which 2 should be raised to obtain the spatial frequency of the first element (e.g., group -2 is 0.25 line pairs per millimeter). Each element is the 6th root of 2 smaller than the preceding element in the group (e.g. element 1 is 2^0, element 2 is 2^(-1/6), element 3 is 2^(-1/3), etc.). By reading off the group and element number of the first element which cannot be resolved, the limiting resolution may be determined by inspection. The complex numbering system and use of a look-up chart can be avoided by use of an improved but not standardized layout chart, which labels the bars and spaces directly in cycles/mm using OCR-A extended font. NBS 1952 target The NBS 1952 target is a 3 bar pattern (long bars). The spatial frequency is printed alongside each triple bar set, so the limiting resolution may be determined by inspection.
This frequency is correct as marked only after the chart has been reduced in size (typically 25 times). The original application called for placing the chart at a distance 26 times the focal length of the imaging lens used. The bars above and to the left are in sequence, separated by approximately the square root of two (12, 17, 24, etc.), while the bars below and to the left have the same separation but a different starting point (14, 20, 28, etc.). EIA 1956 video resolution target The EIA 1956 resolution chart was specifically designed to be used with television systems. The gradually expanding lines near the center are marked with periodic indications of the corresponding spatial frequency. The limiting resolution may be determined by inspection. The most important measure is the limiting horizontal resolution, since the vertical resolution is typically determined by the applicable video standard (I/B/G/K/NTSC/NTSC-J). IEEE Std 208-1995 target The IEEE 208-1995 resolution target is similar to the EIA target. Resolution is measured in horizontal and vertical TV lines. ISO 12233 target The ISO 12233 target was developed for digital camera applications, since modern digital camera spatial resolution may exceed the limitations of the older targets. It includes several knife-edge targets for the purpose of computing MTF by Fourier transform. They are offset from the vertical by 5 degrees so that the edges will be sampled in many different phases, which allows estimation of the spatial frequency response beyond the Nyquist frequency of the sampling. Random test patterns The idea is analogous to the use of a white noise pattern in acoustics to determine system frequency response. Monotonically increasing sinusoid patterns The interferogram used to measure film resolution can be synthesized on personal computers and used to generate a pattern for measuring optical resolution. See especially Kodak MTF curves. Multiburst A multiburst signal is an electronic waveform used to test analog transmission, recording, and display systems. The test pattern consists of several short periods of specific frequencies. The contrast of each may be measured by inspection and recorded, giving a plot of attenuation vs. frequency. The NTSC3.58 multiburst pattern consists of 500 kHz, 1 MHz, 2 MHz, 3 MHz, and 3.58 MHz blocks. 3.58 MHz is important because it is the chrominance frequency for NTSC video. Discussion Note that when using a bar target, the resulting measure is the contrast transfer function (CTF) and not the MTF. The difference arises from the subharmonics of the square waves and can be easily computed. See also Angular resolution Display resolution Image resolution, in computing Minimum resolvable contrast Siemens star, a pattern used for resolution testing Spatial resolution Superlens Superresolution References Gaskill, Jack D. (1978), Linear Systems, Fourier Transforms, and Optics, Wiley-Interscience. Goodman, Joseph W. (2004), Introduction to Fourier Optics (Third Edition), Roberts & Company Publishers. Fried, David L. (1966), "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures.", J. Opt. Soc. Amer. 56:1372-9 Robin, Michael, and Poulin, Michael (2000), Digital Television Fundamentals (2nd edition), McGraw-Hill Professional. Smith, Warren J. (2000), Modern Optical Engineering (Third Edition), McGraw-Hill Professional. Accetta, J. S. and Shumaker, D. L. (1993), The Infrared and Electro-optical Systems Handbook, SPIE/ERIM.
Roggemann, Michael and Welsh, Byron (1996), Imaging Through Turbulence, CRC Press. Tatarski, V. I. (1961), Wave Propagation in a Turbulent Medium, McGraw-Hill, NY External links Norman Koren's website - includes several downloadable test patterns UC Santa Cruz Prof. Claire Max's lectures and notes from Astronomy 289C, Adaptive Optics George Ou's re-creation of the EIA 1956 chart from a high-resolution scan Do Sensors “Outresolve” Lenses? - on lens and sensor resolution interaction Resolution
Optical resolution
Physics,Chemistry
5,093
58,529,291
https://en.wikipedia.org/wiki/Microascus%20manginii
Microascus manginii is a filamentous fungal species in the genus Microascus. It produces both sexual (teleomorph) and asexual (anamorph) reproductive stages known as M. manginii and Scopulariopsis candida, respectively. Several synonyms appear in the literature because of taxonomic revisions and re-isolation of the species by different researchers. M. manginii is saprotrophic and commonly inhabits soil, indoor environments and decaying plant material. It is distinguishable from closely related species by its light colored and heart-shaped ascospores used for sexual reproduction. Scopulariopsis candida has been identified as the cause of some invasive infections, often in immunocompromised hosts, but is not considered a common human pathogen. There is concern about amphotericin B resistance in S. candida. History and taxonomy The anamorph was first documented, unintentionally, by Professor Fernand-Pierre-Joseph Guéguen in 1899 who mistook it for the species, Monilia candida, previously described in 1851 by Hermann Friedrich Bonorden. In 1911, Jean Paul Vuillemin determined that the two taxa were distinct, noting that the taxon described by Bonorden was a yeast whereas the strain that was the subject of Guéguen's studies was filamentous and produced true conidia. Vuillemin formally described the latter as S. candida. At the same time, he re-described Bonordeon's yeast taxon, Monilia candida, as Monilia bonordenii. Subsequent researchers described taxa that have since been reduced to synonymy with S. candida, including: S. alboflavescens in 1934, S. brevicaulis var. glabra in 1949, Chrysosporium keratinophilum var. denticola in 1969 and Basipetospora denticola in 1971. The teleomorph was discovered by Auguste Loubière in 1923 and named Nephrospora manginii in honour of his mentor, Professor Louis Mangin. It was later transferred to the genus Microascus by Mario Curzi in 1931. Curzi did not provide an explanation for this transfer. S. candida and M. manginii are used in the literature to describe the same species. However, recent changes to the International Code of Nomenclature for algae, fungi and plants have terminated the use of dual nomenclature for fungal species with multiple forms. It is not yet known which name will take priority for this fungus in the future. Growth and morphology Sexual form Colonies of M. manginii are pale, white and rapid growing. Growth is tolerant of cycloheximide and restricted at 37 °C. The vegetative hyphae are septate and appear glassy (hyaline). Ascomata are the sexual structures within which ascospores are produced in sacs called asci. The ascomata of M. manginii are spherical, smooth-walled, dark-brown to black and 100–175 μm in size. These fruiting bodies are also called perithecia because of their flask-like shape wherein asci grow at the base and an opening allows for the release of mature ascospores. They are also papillate with short cone-shaped projections at the opening, sessile, and rich in carbon. Perithecia manifest as small black dots organized in concentric rings. An incubation period of over two weeks may be necessary for the production of perithecia. The asci are shaped similar to an upside-down egg where the apex is broad and thicker than the base. They are 11–16 × 8–13 μm in size and contain 8 ascospores. Ascospores are nonseptate and smooth-walled. They are characteristically uniform in heart-shape and pale, straw-colored when mature - but appear reddish-brown as a mass. 
They each have a single inconspicuous germ pore, which is a predetermined spot in the spore cell wall where the germ tube emerges during germination. Ascospores are 5–6 × 4.5–5 μm in size. M. manginii is a heterothallic species and as a result, generation of sexual spores requires mating between two compatible individuals. Asexual form S. candida is a hyaline mold with septate hyphae. The white and membranous morphology of S. candida colonies differentiates it from the more common species S. brevicaulis, which is characterized by a sand-coloured and granular colonial morphology. As the colony ages, it becomes slightly yellow. Conidiophores are specialized hyphal stalks that have conidiogenous cells which produce conidia for asexual reproduction. The Latin word for broom, scopula, was chosen as the basis of the generic name due to the broom-like appearance of the conidiophores of Scopulariopsis. In S. candida, these structures are 10–20 μm in length. S. candida sporulates using specialized conidiogenous cells called annellide. The tip of the cell elongates and narrows each time a conidium is formed and results in a series of ring-like scars called annellations near the tip. The annelloconidia are formed in dry chains that eventually break off to allow the dispersal of spores by wind. They are one-celled, smooth- and thick-walled, and round but also broad-based. They resemble simple yeasts. Annelloconidia are hyaline and 6–8 × 5–6 μm in size. The smooth hyaline annelloconidia can also distinguish S. candida from S. brevicaulis, which has conidia that are rough-walled, truncate and covered in tiny, thorny outgrowths. Isolates of S. candida can produce sterile perithecia-like structures. Physiology The optimal growth temperature range for S. candida is , with a minimum of 5 °C and maximum 37 °C. It is a keratinophilic species which may contribute to its role in nail infections. It grows well on protein-rich surfaces and is able to digest α-keratins. In vitro study of antifungal susceptibilities reports S. candida as relatively more resistant to the antifungal drug amphotericin B, and susceptible to Itraconazole and miconazole. Habitat and ecology M. manginii is a saprobic fungus. It has a worldwide distribution. It is often isolated from decaying plant material, soil and indoor environments, but also human skin and nails, dust, chicken litter, atmosphere, book paper and cheese, among other locations. Contaminated dust, soil and air samples are often found in North America and Europe. In Portugal, S. candida was identified as the most prevalent fungal species contaminating the air of three poultry slaughterhouses in 2016. Contamination with fungal pathogens was found on equipment used in physiotherapy clinics in Brazil, specifically electrodes and ultrasound transducers, S. candida was found on several contact electrodes. Pathogenicity Invasive fungal infections are becoming increasingly common in patients who are immunocompromised. M. manginii and S. candida are not traditionally recognized as common human pathogens. However, they were identified as opportunistic human and plant pathogens in a few reported cases. Other Scopulariopsis species have been associated with nail infection and keratitis (S. brevicaulis), and brain abscess and hypersensitivity pneumonitis (S. brumptii). A case of disseminated infection caused by Scopulariopsis species in a 17-year old patient with chronic myelogenous leukemia was described in 1987. 
After receiving an allogenic bone marrow transplantation for cancer treatment, the patient complained of recurrent fever, nosebleeds, and abnormal sensations of the nose. Amphotericin B therapy was administered but symptoms persisted. Within two months of transplant, the patient experienced a short period of improvement followed by rapid deterioration and death. The autopsy discovered Scopulariopsis species in the lungs, blood, brain and nasal septum, and exhibited high resistance to amphotericin B in vitro. In 1989, the species responsible for the disseminated infection was identified as S. candida. S. candida was identified as the cause of invasive sinusitis in a 12-year old girl undergoing treatment for non-Hodgkin's lymphoma in 1992. This is the second reported case of invasive sinus disease caused by Scopulariopsis species and only reported case due to S. candida. The patient was immunocompromised at the time of fungal infection due to ongoing cancer treatment. The clinical presentation resembled an infection by fungi in the order Mucorales, and involved myalgia, cheek swelling and tenderness, a week-long fever, and extensive necrosis of maxillary sinuses. As a result, the presumed diagnosis was mucormycosis until further examination of patient specimens showed abundant growth of a powdery, tan mold that was distinguished as S. candida by several features (e.g., septate hyphae, round and smooth conidia, broom-shaped conidiophores). The patient immediately received surgical drainage and debridement of damaged tissue, and amphotericin B to treat the fungal infection. Subsequent identification of S. candida as the cause of disease prompted administration of additional antifungal medication, Itraconazole, to address potential amphotericin B resistance. The patient was cured of invasive sinusitis with no signs of progressive sinus disease. This marked the first successful treatment of an invasive infection caused by Scopulariopsis species in an immunocompromised host. Immunosuppression was suspected to play a role in the ability of S. candida to cause invasive infection. The most significant contributor to managing the disease was likely strengthening the patient's immune system by suspending chemotherapy and administrating granulocyte colony-stimulating factor. S. candida and M. manginii have been identified in cases of onychomycoses. They mainly cause tissue damage to the big toe and rarely other nails. Common symptoms include difficulty walking while wearing shoes, thickening and discolouration of nails, and deformation of nails. The infection often begins at the lateral edge of the nail instead of the proximal edge. Patients are typically middle-aged or older. The mechanism of these infections is not well-characterized. In addition, the published cases of onychomycoses caused by these species are not all reliable. References Microascales Fungi described in 1931 Fungus species
Microascus manginii
Biology
2,301
2,229,944
https://en.wikipedia.org/wiki/Pound%E2%80%93Rebka%20experiment
The Pound–Rebka experiment monitored frequency shifts in gamma rays as they rose and fell in the gravitational field of the Earth. The experiment tested Albert Einstein's 1907 and 1911 predictions, based on the equivalence principle, that photons would gain energy when descending a gravitational potential, and would lose energy when rising through a gravitational potential. It was proposed by Robert Pound and his graduate student Glen A. Rebka Jr. in 1959, and was the last of the classical tests of general relativity to be verified. The measurement of gravitational redshift and blueshift by this experiment validated the prediction of the equivalence principle that clocks should be measured as running at different rates in different places of a gravitational field. It is considered to be the experiment that ushered in an era of precision tests of general relativity. Background Equivalence principle argument predicting gravitational red- and blueshift In the decade preceding Einstein's publication of the definitive version of his theory of general relativity, he anticipated several of the results of his final theory with heuristic arguments. One of these concerned the behavior of light in a gravitational field. To show that the equivalence principle implies that light is Doppler-shifted in a gravitational field, Einstein considered a light source S separated along the z-axis by a distance h above a receiver R in a homogeneous gravitational field having a force per unit mass of g. A continuous beam of electromagnetic energy with frequency ν is emitted by S towards R. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration g in the direction of the positive z-axis, with S separated by a constant distance h from R. In the accelerated system, light emitted from S takes (to a first approximation) a time h/c to arrive at R. But in this time, the velocity of R will have increased by v = gh/c from its velocity when the light was emitted. The frequency of light arriving at R will therefore not be the frequency ν but the greater frequency ν′ given by ν′ ≈ ν(1 + v/c) = ν(1 + gh/c²). According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace gh by the gravitational potential difference Φ between S and R, so that ν′ = ν(1 + Φ/c²). Advent of general relativity In 1916, Einstein used the framework of his newly completed general theory of relativity to update his earlier heuristic arguments predicting gravitational redshift to a more rigorous form. Gravitational redshift and two other predictions from his 1916 paper, the anomalous perihelion precession of Mercury's orbit and the gravitational deflection of light by the Sun, have become known as the "classical tests" of general relativity. The anomalous perihelion precession of Mercury had long been recognized as a problem in celestial mechanics since the 1859 calculations of Urbain Le Verrier. The observation of the deflection of light by the Sun in the 1919 Eddington expedition catapulted Einstein to worldwide fame. Gravitational redshift would prove to be by far the most difficult of the three classical tests to demonstrate. There had been little rush by experimenters to test Einstein's earlier predictions of gravitational time dilation, since the predicted effect was almost immeasurably small.
Einstein's predicted displacement for spectral lines of the Sun amounted to only two parts in a million, and would be easily masked by line broadening due to temperature and pressure, and by line asymmetry due to the fact the lines represent the superposition of absorption from many turbulent layers of the solar atmosphere. Several attempts to measure the effect were negative or inconclusive. The first generally accepted claim to have measured gravitational redshift was W.S. Adams's 1925 measurement of shifts in the spectral lines of the white dwarf star Sirius B. However, even Adams's measurements have since been brought into question for various reasons. Mössbauer effect In atomic spectroscopy, visible and ultraviolet photons resulting from electronic transitions of outer shell electrons, when emitted by gaseous atoms in an excited state, are readily absorbed by unexcited atoms of the same species. However, a corresponding absorbance of photons emitted by the nuclei of γ-emitters had never been observed because recoil of the nuclei resulted in so much loss of energy by the emitted photons that they no longer matched the absorbance spectra of the target nuclei. In 1958, Rudolf Mössbauer, who was analyzing the 129 keV transition of Iridium-191, discovered that by lowering the temperature of the emitter to 90K, he could achieve resonant absorbance. Indeed, the energy resolutions that he achieved were of unheard-of sharpness. He had discovered the phenomenon of recoilless γ-emission. In 1959, several research groups, most notably Robert Pound and Glen Rebka in Harvard and a team led by John Paul Schiffer in Harwell (England), announced plans to exploit this recently discovered effect to perform terrestrial tests of gravitational redshift. In February 1960, Schiffer and his team were the first to announce success in measuring the gravitational redshift, but with a rather high error of ±47%. It was to be Pound and Rebka's somewhat later contribution in April 1960, which used a stronger radiation source, longer path length, and several refinements to reduce systematic error, which was to be accepted as having provided a definitive measurement of the redshift. Pound and Rebka's experiment Sources of error After evaluating various γ-emitters for their study, Pound and Rebka chose to use 57Fe because it does not require cryogenic cooling to exhibit recoil-free emission, has a relatively low internal conversion coefficient so that it is relatively free of competing X-ray emissions that would have been difficult to distinguish from the 14.4 keV transition, and its parent 57Co has a usable half-life of 272 days. Pound and Rebka found that a large source of systematic error resulted from temperature variations, which they attributed primarily to a second order relativistic Doppler effect due to lattice vibrations. A mere 1°C difference in temperature between emitter and absorber caused a shift about equal to the predicted effect of gravitational time dilation. They also found frequency offsets between the lines of different combinations of source and absorber stemming from the sensitivity of the nuclear transition to an atom's physical and chemical environment. They therefore needed to adopt methodology which would allow them to distinguish these offsets from their measurement of gravitational redshift. Extreme care was also needed in sample preparation, otherwise inhomogeneities would limit the sharpness of the lines. 
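Before turning to the apparatus, the size of the effect being sought is worth making explicit: for a height difference h, the equivalence-principle argument above predicts a fractional frequency shift of gh/c². A minimal Python sketch, using the roughly 22.5 m source-absorber separation described below:

```python
g = 9.81        # m/s^2, local gravitational acceleration
h = 22.5        # m, source-absorber height difference in the Jefferson laboratory tower
c = 2.998e8     # m/s, speed of light

fractional_shift = g * h / c**2
print(f"predicted |Δν/ν| ≈ {fractional_shift:.2e}")   # ≈ 2.5e-15
```

This is the 2.5×10−15 figure quoted later in the article, and it shows why a line as narrow as the recoil-free 14.4 keV Mössbauer transition was needed.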
Experimental setup The experiment was carried out in a tower at Harvard University's Jefferson laboratory that was, for the most part, vibrationally isolated from the rest of the building. An iron disk containing radioactive 57Co diffused into its surface was placed in the center of a ferroelectric or a moving coil magnetic transducer (speaker coil) which was placed near the roof of the building. A 38 cm diameter absorber consisting of thin square foils of iron enriched to a level of 32% 57Fe (as opposed to a 2% natural abundance), which were pasted side by side in a flat pattern on a Mylar sheet, was placed in the basement. The distance between the source and absorber was 22.5 meters (74 ft). The gamma rays traveled through a Mylar bag filled with helium to minimize scattering of the gamma rays. A scintillation counter was placed below the absorber to detect the gamma rays that passed through. The vibrating speaker coil imposed a continuously varying Doppler shift on the gamma ray source. Superimposed on the sinusoidal motions of the transducer was the slow (typically about 0.01 mm/s) constant motion of a slave hydraulic cylinder driven by a small diameter master cylinder controlled by a synchronous motor. The hydraulic cylinder motion was reversed multiple times during each data run after a constant integral number of transducer vibrations. Every several days, the position of the source and absorber would be reversed so that half the data runs would be of blueshift, and half would be of redshift. Three thermocouples mounted on the source in a spiral pattern and three on the absorber were connected to Wheatstone bridges to measure the temperature differences between the source and absorber. The recorded temperature differences were used to correct the data before analysis. Among the other steps used to compensate for possible systematic errors, Pound and Rebka varied the speaker frequency between 10 Hz and 50 Hz and tested different transducers (ferroelectric transducers versus moving coil magnetic speaker coils). A Mössbauer monitor near the source (not illustrated) checked for possible distortions of the source signal resulting from the cylinder/transducer assembly being regularly inverted from facing downwards to facing upwards. Modulation technique to detect small shifts Although the 14.4 keV recoilless emission line of 57Fe had a half-width of 1.13×10−12, the anticipated gravitational frequency shift was only 2.5×10−15. Measurement of this minute amount of frequency shift, 500 times smaller than the half-width, required a sophisticated protocol for data acquisition and data analysis. The best way to measure a small shift is often by "slope detection", measuring the resonance not at its peak, but rather comparing the absorption curve near its points of maximum slope (inflection points) on either side of the peak. The speaker coil typically operated at about 74 Hz with a maximum velocity amplitude corresponding to the maximum change of absorption with velocity of the resonance curve for a given combination of source and absorber (typically around 0.10 mm/s). Counts that were received in the quarter cycles of the oscillation period centered around the velocity maxima were accumulated in two separate registers. Likewise, counts received with the hydraulic cylinder in reverse motion were accumulated in another two separate registers, for a total of four registers of accumulated counts. 
The combined motions of the vibrating transducer and hydraulic cylinder allowed the incoming photons to be collected in four channels representing source motions of +0.11 mm/s, +0.09 mm/s, −0.11 mm/s, and −0.09 mm/s. They collectively operated at a 50% duty cycle, so that out of, say, 80 million incoming photons, 10 million would fit into the time slots of each of the four recording channels. From these counts, the velocity corresponding to the absorbance maximum could be calculated. The accuracy of determination of the line center depended on (1) the sharpness of the line, (2) the depth of the absorbance maximum, and (3) the total number of counts. They typically achieved a fractional absorbance maximum depth of about 0.3 and recorded about 1×1010 γ-rays, of which most will have been recoilless. Results Each data run yielded eleven numbers, i.e. four absorber register counts, four monitor register counts, and three average temperature differences. The register counts were generally recorded after twelve full back-and-forth cycles of the hydraulic piston, where each reversal of piston motion occurred after 22,000 periods of source vibration. The source and absorber units were interchanged every several days to allow comparison between the results with the γ-rays rising versus the γ-rays falling. Combining data from runs having gravitational frequency shift of equal but opposite sign enabled the fixed frequency shift between a given source/target combination to be eliminated by subtraction. In their 1960 paper, Pound and Rebka presented data from the first four days of counting. Six runs with the source at the bottom, after temperature correction gave a weighted average fractional frequency shift between source and absorber of −(19.7±0.8)×10−15. Eight runs with the source at the top, after temperature correction gave a weighted average fractional frequency shift of −(15.5±0.8)×10−15. The frequency shifts, up and down, were both negative because the magnitude of the inherent frequency difference of the source/absorber combination considerably exceeded the magnitude of the expected gravitational redshifts/blueshifts. Taking half the sum of the weighted averages yielded the inherent frequency difference of the source/absorber combination, −(17.6±0.6)×10−15. Taking half the difference of the weighted averages yielded the net fractional frequency shift due to gravitational time dilation, −(2.1±0.5)×10−15. Over the full ten days of data collection, they calculated a net fractional frequency shift due to gravitational time dilation of −(2.56±0.25)×10−15, which corresponds to the predicted value with an error margin of 10%. In the next several years, the Pound lab published successive refinements of the gravitational redshift measurement, finally reaching the 1% level in 1964. Current status of gravitational redshift In the years subsequent to the series of measurements performed by the Pound lab, various tests using other technologies established the validity of gravitational redshift/time dilation with increasing precision. A notable example was the 1976 Gravity Probe A experiment, which used a space-borne hydrogen maser to increase the accuracy of the measurement to about 0.01%. 
From an engineering standpoint, after the launch of the Global Positioning System (which depends on general relativity for its proper functioning) and its integration into everyday life, gravitational redshift/time dilation is no longer considered a theoretical phenomenon requiring testing, but rather is considered a practical engineering concern in various fields requiring precision measurement, along with special relativity. From a theoretical standpoint, however, the status of gravitational redshift/time dilation is quite different. It is widely recognized that general relativity, despite accounting for all data gathered to date, cannot represent a final theory of nature. The equivalence principle (EP) lies at the heart of the general theory of relativity. Most proposed alternatives to general relativity predict violation of the EP at some level. The EP includes three hypotheses: Universality of free fall (UFF). This asserts that the acceleration of freely falling bodies in a gravitational field is independent of their compositions. Local Lorentz invariance (LLI). This asserts that the outcome of a local experiment is independent of the velocity and orientation of the apparatus. Local position invariance (LPI). This asserts that clock rates are independent of their spacetime positions. Measurements of differences in the elapsed time displayed by two clocks will depend on their relative positioning in a gravitational field. But the clocks themselves are unaffected by gravitational potential. Gravitational redshift measurements provide a direct measure of LPI. Of the three hypotheses underlying the equivalence principle, LPI has been by far the least accurately determined. There has been considerable incentive, therefore, to improve on gravitational redshift measurements both in the laboratory and using astronomical observations. For example, the much anticipated, and much delayed, European Space Agency's Atomic Clock Ensemble in Space (ACES) mission is expected to improve on previous measurements by a factor of 35. Notes Primary sources References External links Tests of general relativity Physics experiments Iron Cobalt
Pound–Rebka experiment
Physics
3,097
67,352,181
https://en.wikipedia.org/wiki/Tenilapine
Tenilapine is an atypical antipsychotic which has never been marketed in the US. Pharmacodynamics Tenilapine has a relatively high affinity for the 5-HT2A receptor, and relatively low (micromolar) affinities for dopamine receptors. The ratio of D2 to D4 bonding is similar to that of clozapine. Like many other atypical antipsychotics, it is a potent 5-HT2C antagonist. References 4-Methylpiperazin-1-yl compounds 5-HT2A antagonists 5-HT2B antagonists Atypical antipsychotics Heterocyclic compounds with 3 rings M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Nitriles Thiophenes
Tenilapine
Chemistry
179
21,780,273
https://en.wikipedia.org/wiki/Smart%20number
A smart number is any synthetic unique identifier that communicates additional information about the entity identified. The smart number is conceptually similar to a superkey as defined in the relational model of database organization, but is intended to inform end users about the status of accounts. The term has fallen out of common usage since using one number to carry so much information is considered bad practice. Common examples of smart numbers in use today include: The US Social Security number – carries information about place of birth. Credit card numbers, which contain information about the credit issuing company. Auto insurance account numbers – vary by company but may contain information such as billing date, account status, and modification count. The term smart number may also apply to non-geographic telephone numbers, such as Australia's 13, 1300 and 1800 vanity numbers and freephone numbers. See Intelligent Network. References Numbers Identification
Smart number
Mathematics
180
39,323,412
https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20electromagnetism
The study of electromagnetism in higher education, as a fundamental part of both physics and electrical engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Purcell and David Morin. Also at an undergraduate level, Richard Feynman's classic Lectures on Physics is available online to read for free. Physics Undergraduate (introductory and intermediate) There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Purcell and David Morin. Richard Feynman's Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics. Feynman RP, Leighton RB, Sands M, Electromagnetism and Matter, Basic Books, 2010. Grant IS, Phillips WR, Electromagnetism, 2nd ed, Wiley, 1990. Griffiths DJ, Introduction to Electrodynamics, 5th ed, Cambridge University, 2024. Halliday D, Resnick R, Walker J, Fundamentals of Physics, Extended 12th ed, Wiley, 2022. Heald MA, Marion JB, Classical Electromagnetic Radiation, 3rd ed, Dover, 2012. Müller-Kirsten HJW, Electrodynamics, 2nd ed, World Scientific, 2011. Ohanian HC, Classical Electrodynamics, 2nd ed, Jones & Bartlett, 2006. Pauli W, Electrodynamics, Dover, 2000. Pollack GL, Stump DR, Electromagnetism, Addison-Wesley, 2002. Purcell EM, Morin DJ, Electricity and Magnetism, 3rd ed, Cambridge University, 2013. Reitz JR, Milford FJ, Christy RW, Foundations of Electromagnetic Theory, 4th ed, Pearson, 2009. Saslow W, Electricity Magnetism and Light, Academic, 2002. Schwartz M, Principles of Electrodynamics, Dover, 1987. Tamm IE, Fundamentals of the Theory of Electricity, Mir, 9th ed, 1979. Wangsness RK, Electromagnetic Fields, 2nd ed, Wiley, 1986. Graduate A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. James Russ, professor of physics at Carnegie Mellon University, claims Jackson's textbook has been "[t]he classic electrodynamics text for the past four decades" and that it is "the book from which most current-generation physicists took their first course." In addition to Jackson's textbook there are other classic textbooks like Classical Electricity and Magnetism by Pief Panofsky and Melba Phillips, and Electrodynamics of Continuous Media by Lev Landau, Evgeny Lifshitz, and Lev Pitaevskii, both pre-dating Jackson's book. 
Among the textbooks published after Jackson's, a notable example is the book based on Julian Schwinger's lecture notes from the 1970s, first published posthumously in 1998. Jackson's textbook so dominated graduate physics education that even physicists like Schwinger grew frustrated competing with it; partly because of this, publication of Schwinger's book was postponed, and it was finally completed and published by his colleagues. In addition to these classic books, a few well-received electromagnetism textbooks for graduate study in physics have been published in recent years. One of the most notable is Modern Electrodynamics by Andrew Zangwill, published in 2013, which has been praised by physicists such as John Joannopoulos, Michael Berry, Rob Phillips, Alain Aspect, Roberto Merlin, Shirley Chiang, and Roy Schwitters, and has also been well received in the electrical engineering community. Another notable textbook is Classical Electromagnetism in a Nutshell by Anupam Garg, published in 2012, which has likewise been praised by physicists such as Anthony Zee, Ramamurti Shankar, Jainendra Jain, and John Belcher. Here is a list of some important textbooks that cover the general physical areas of electromagnetism. Brau CA, Modern Problems in Classical Electrodynamics, Oxford University, 2004. Chaichian M, Merches I, Radu D, Tureanu A, Electrodynamics: An Intensive Course, Springer, 2016. Di Bartolo B, Classical Theory of Electromagnetism, 3rd ed, World Scientific, 2018. Franklin J, Classical Electromagnetism, 2nd ed, Dover, 2017. Freeman R, King J, Lafyatis G, Electromagnetic Radiation, Oxford University, 2019. Garg A, Classical Electromagnetism in a Nutshell, Princeton University, 2012. Good RH, Nelson TJ, Classical Theory of Electric and Magnetic Fields, Academic, 1971. Jackson JD, Classical Electrodynamics, 3rd ed, Wiley, 1999. Landau LD, Lifshitz EM, Pitaevskii LP, Electrodynamics of Continuous Media, 2nd ed, Pergamon, 1984. Milton KA, Schwinger J, Classical Electrodynamics, 2nd ed, CRC, 2024. Panofsky WKH, Phillips M, Classical Electricity and Magnetism, 2nd ed, Dover, 2005. Sommerfeld A, Electrodynamics, Academic, 1952. Wilcox W, Thron C, Macroscopic Electrodynamics: An Introductory Graduate Treatment, 2nd ed, World Scientific, 2024. Zangwill A, Modern Electrodynamics, Cambridge University, 2013. Specialized Here is a list of some important graduate textbooks that cover particular physical areas of electromagnetism. Barut AO, Electrodynamics and Classical Theory of Fields and Particles, Dover, 1980. Baylis WE, Electrodynamics: A Modern Geometric Approach, Birkhäuser, 1999. Böttcher CJF, Bordewijk P, Van Belle OC, Rip A, Theory of Electric Polarization, 2nd ed, 2 vols, Elsevier, 1973, 1978. Clemmow PC, Dougherty JP, Electrodynamics of Particles and Plasmas, CRC, 2018. Cullity DB, Stock SR, Elements of X-Ray Diffraction, 3rd ed, Pearson, 2014. Eringen AC, Maugin GA, Electrodynamics of Continua, 2 vols, Springer, 1990. Ginzburg VL, The Propagation of Electromagnetic Waves in Plasmas, 2nd ed, Pergamon, 1970. Hehl FW, Obukhov YN, Foundations of Classical Electrodynamics: Charge, Flux, and Metric, Springer, 2003. Landau LD, Lifshitz EM, The Classical Theory of Fields, 4th ed, Pergamon, 1975. Lechner K, Classical Electrodynamics: A Modern Perspective, Springer, 2018. Oppenheimer JR, Lectures on Electrodynamics, Gordon & Breach, 1970. Post EJ, Formal Structure of Electromagnetics: General Covariance and Electromagnetics, Dover, 1997.
Rohrlich F, Classical Charged Particles, 3rd ed, World Scientific, 2007. Rybicki GB, Lightman AP, Radiative Processes in Astrophysics, Wiley, 1979. The controversy in the scientific community over the use of different systems of units in electromagnetism has also been discussed in the literature. Electrical engineering According to a 2011 review of analytical and computational textbooks in electromagnetism by David Davidson, Julius Stratton's Electromagnetic Theory remains the classic text in electromagnetism and is still regularly cited. Davidson goes on to point out that Constantine Balanis' Advanced Engineering Electromagnetics and Roger Harrington's Time-Harmonic Electromagnetic Fields are standard references at the post-graduate level. At the advanced undergraduate level, the textbook Fields and Waves in Communication Electronics by Simon Ramo, John Whinnery, and Theodore Van Duzer is likewise considered a standard reference. Traditional differences between a physicist's point of view and an electrical engineer's point of view in studying electromagnetism have been noted. In a 2023 lecture titled What Physicists Don't Know About Electromagnetism, based on a comparison of the textbooks Electromagnetic Theory by Julius Stratton and Classical Electrodynamics by John Jackson, the theoretical physicist Hans Schantz argues that "today's physicists who are educated using curriculum out of Jackson are less informed about practical electromagnetics than their counterparts of 80 years ago," attributing this to the shift of physicists' attention from classical electrodynamics to quantum electrodynamics. Schantz also argues that concepts such as impedance, the Smith chart, antennas, and electromagnetic energy flow are not well appreciated by physicists. The mathematician Sergei Schelkunoff, who made many contributions to engineering electromagnetism, also noted the differences between the physicist's and the electrical engineer's views of electromagnetism. According to Schelkunoff: The classical physicist, being concerned largely with isolated transmission systems, has emphasized only one wave concept, that of the velocity of propagation or more generally of the propagation constant. But the communication engineer who is interested in "chains" of such systems from the very start is forced to adopt a more general attitude and introduce the second important wave concept, that of the impedance. The physicist concentrates his attention on one particular wave: a wave of force, or a wave of velocity or a wave of displacement. His original differential equations may be of the first order and may involve both force and velocity; but by tradition he eliminates one of these variables, obtains a second order differential equation in the other and calls it the "wave equation." Thus he loses sight of the interdependence of force and velocity waves and he does not stress the difference which may exist between waves in different media even though the velocity of wave propagation is the same. The engineer, on the other hand, thinks in terms of the original "pair of wave equations" and keeps constantly in mind this interdependence between force and velocity waves. The usefulness of electrical engineering's approach to electromagnetic problems has also been noted by other physicists such as Robert Dicke and, especially, Julian Schwinger.
Schwinger's emphasis on using electrical engineering's point of view was even more general than just in electromagnetic phenomena so that he argued for the use of engineering worldview even in pure branches of physics like high-energy physics. Schwinger also said about his transformation from a person who saw electrical engineering problems as a pure physicist to a person who saw pure physical problems as an electrical engineer: "I first approached radar problems as a nuclear physicist; soon I began to think of nuclear physics in the language of electrical engineering." Many of the important and classic graduate electromagnetic textbooks related to electrical engineering listed here are published or reissued by IEEE under the name of The IEEE Press Series on Electromagnetic Wave Theory. Undergraduate (introductory and intermediate) Cheng DK, Field and Wave Electromagnetics, 2nd ed, Addison-Wesley, 1989. Hammond P, Electromagnetism for Engineers: An Introductory Course, 4th ed, Oxford University, 1997. Haus HA, Melcher JR, Electromagnetic Fields and Energy, Prentice Hall, 1989. Hayt WH, Buck JA, Engineering Electromagnetics, 9th ed, McGraw Hill, 2018. Ida N, Engineering Electromagnetics, 4th ed, Springer, 2021. Johnk CTA, Engineering Electromagnetic Fields and Waves, 2nd ed, Wiley, 1991. Jordan EC, Balmain KG, Electromagnetic Waves and Radiating Systems, 2nd ed, Prentice Hall, 1968. Kraus JD, Fleisch DA, Russ SH, Electromagnetics with Applications, 5th ed, McGraw Hill, 1999. Lorrain P, Corson DR, Lorrain F, Electromagnetic Fields and Waves: Including Electric Circuits, 3rd ed, WH Freeman, 1988. Ramo S, Whinnery JR, Van Duzer T, Fields and Waves in Communication Electronics, 3rd ed, Wiley, 1994. Sadiku MNO, Elements of Electromagnetics, 7th ed, Oxford University, 2018. Strangeway RA, Holland SS, Richie JE, Electromagnetics and Transmission Lines: Essentials for Electrical Engineering, 2nd ed, Wiley, 2022. Ulaby FT, Ravaioli U, Fundamentals of Applied Electromagnetics, 8th ed, Pearson, 2020. Graduate Balanis CA, Advanced Engineering Electromagnetics, 3rd ed, Wiley, 2024. Chew WC, Waves and Fields in Inhomogeneous Media, IEEE, 1995. Collin RE, Field Theory of Guided Waves, 2nd ed, Wiley-IEEE, 1991. Felsen LB, Marcuvitz N, Radiation and Scattering of Waves, Wiley-IEEE, 2003. Harrington RF, Time-Harmonic Electromagnetic Fields, Wiley-IEEE, 2001. Ishimaru A, Electromagnetic Wave Propagation, Radiation, and Scattering: From Fundamentals to Applications, 2nd ed, Wiley-IEEE, 2017. Jones DS, The Theory of Electromagnetism, Pergamon, 1964. Kong JA, Electromagnetic Wave Theory, 3rd ed, EMW, 2008. Schelkunoff SA, Electromagnetic Waves, Van Nostrand, 1943. Smythe WR, Static and Dynamic Electricity, 3rd ed, Hemisphere, 1989. Stratton JA, Electromagnetic Theory, Wiley-IEEE, 2007. Van Bladel J, Electromagnetic Fields, 2nd ed, Wiley-IEEE, 2007. Specialized Beckmann P, Spizzichino A, The Scattering of Electromagnetic Waves from Rough Surfaces, Artech House, 1987. Dudley DG, Mathematical Foundations for Electromagnetic Theory, Wiley-IEEE, 1994. Hanson GW, Yakovlev AB, Operator Theory for Electromagnetics: An Introduction, Springer, 2002. Idemen MM, Discontinuities in the Electromagnetic Field, Wiley-IEEE, 2011. Ishimaru A, Wave Propagation and Scattering in Random Media, IEEE-Oxford University, 1997. Kazimierczuk MK, High-Frequency Magnetic Components, 2nd ed, Wiley, 2014. Lindell IV, Methods for Electromagnetic Field Analysis, 2nd ed, Wiley-IEEE, 1996. 
McNamara DA, Pistotius CWI, Malherbe JAG, Introduction to Uniform Geometrical Theory of Diffraction, Artech House, 1990. Mittra R, Lee SW, Analytical Techniques in the Theory of Guided Waves, Macmillan, 1971. Senior TBA, Volakis JL, Approximate Boundary Conditions in Electromagnetics, IEE 1995. Tai CT, Dyadic Green Functions in Electromagnetic Theory, 2nd ed, IEEE, 1994. Tsang L, Kong JA, Ding KH, Ao CO, Scattering of Electromagnetic Waves, 3 vols, Wiley, 2001. Ufimtsev PY, Fundamentals of the Physical Theory of Diffraction, 2nd ed, Wiley-IEEE, 2014. Van Bladel J, Singular Electromagnetic Fields and Sources, Wiley-IEEE, 1991. Wait JR, Electromagnetic Waves in Stratified Media, 2nd ed, IEEE-Oxford University, 1996. Radio-frequency Balanis CA, Antenna Theory: Analysis and Design, 4th ed, Wiley, 2016. Collin RE, Foundations for Microwave Engineering, 2nd ed, Wiley-IEEE, 2001. Elliott RS, Antenna Theory and Design, Wiley-IEEE, 2003. Garg R, Bhartia P, Bahl I, Ittipiboon A, Microstrip Antenna Design Handbook, Artech House, 2001. Kraus JD, Marhefka RJ, Khan AS, Antennas and Wave Propagation, 5th ed, McGraw Hill, 2017. Marcuvitz N, Waveguide Handbook, IET, 2009. Milligan TA, Modern Antenna Design, 2nd ed, Wiley-IEEE 2005. Paul CR, Scully RC, Steffka MA, Introduction to Electromagnetic Compatibility, 3rd ed, Wiley, 2023. Pozar DM, Microwave Engineering, 4th ed, Wiley, 2012. Rizzi PA, Microwave Engineering: Passive Circuits, Prentice Hall, 1988. Ruck GT, Barrick DE, Stuart WD, Krichbaum CK, Radar Cross Section Handbook, 2 vols, Kluwer-Plenum, 1970. Stutzman WL, Thiele GA, Antenna Theory and Design, 3rd ed, Wiley, 2013. Tsang L, Kong JA, Shin RT, Theory of Microwave Remote Sensing, Wiley, 1985. Ulaby FT, Moore RK, Fung AK, Microwave Remote Sensing: Active and Passive, 3 vols, Artech House, 1981, 1982, 1986. Metamaterials Caloz C, Itoh T, Electromagnetic Metamaterials: Transmission Line Theory and Microwave Applications (The Engineering Approach), Wiley-IEEE, 2006. Capolino F, (Ed), Metamaterials Handbook, 2 vols, CRC, 2009. Cui TJ, Smith DR, Liu R, (Eds), Metamaterials: Theory, Design, and Applications, Springer, 2010. Eleftheriades GV, Balmain KG, (Eds), Negative-Refraction Metamaterials: Fundamental Principles and Applications, Wiley-IEEE, 2005. Engheta N, Ziolkowski RW, (Eds), Metamaterials: Physics and Engineering Explorations, Wiley-IEEE, 2006. Marqués R, Martín F, Sorolla M, Metamaterials with Negative Parameters: Theory, Design, and Microwave Applications, Wiley, 2008. Munk BA, Frequency Selective Surfaces: Theory and Design, Wiley, 2000. Munk BA, Metamaterials: Critique and Alternatives, Wiley, 2009. Ramakrishna SA, Grzegorczyk TM, Physics and Applications of Negative Refractive Index Materials, CRC, 2008. Sarychev AK, Shalaev VM, Electrodynamics of Metamaterials, World Scientific, 2007. Tretyakov S, Analytical Modeling in Applied Electromagnetics, Artech House, 2003. Yang F, Rahmat-Samii Y, Electromagnetic Band Gap Structures in Antenna Engineering, Cambridge University, 2009. Computational Booton RC, Computational Methods for Electromagnetics and Microwaves, Wiley, 1992. Chew WC, Jin JM, Michielssen E, Song J, (Eds), Fast and Efficient Algorithms in Computational Electromagnetics, Artech House, 2001. Gibson WC, The Method of Moments in Electromagnetics, 3rd ed, CRC, 2022. Harrington RF, Field Computation by Moment Methods, Wiley-IEEE, 2000. Itoh T, (Ed), Numerical Techniques for Microwave and Millimeter-Wave Passive Structures, Wiley, 1989. 
Jin JM, The Finite Element Method in Electromagnetics, 3rd ed, Wiley-IEEE, 2014. Jones DS, Methods in Electromagnetic Wave Propagation, 2nd ed, Wiley-IEEE, 1994. Kunz KS, Luebbers RJ, The Finite Difference Time Domain Method for Electromagnetics, CRC, 1993. Peterson AF, Ray SL, Mittra R, Computational Methods for Electromagnetics, Wiley-IEEE, 1997. Sadiku MNO, Computational Electromagnetics with MATLAB, 4th ed, CRC, 2019. Silvester PP, Ferrari RL, Finite Elements for Electrical Engineers, 3rd ed, Cambridge University, 1996. Taflove A, Hagness SC, (Eds), Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd ed, Artech House, 2005. Optics There are also many outstanding and notable textbooks published in optics which is a branch of electromagnetism dealing with interactions of light or visible spectrum electromagnetism with matter. Here is the list of some important textbooks in different areas of classical optics. These textbooks are suitable for both physics and electrical engineering studies depending on the context. Generic Born M, Wolf E, Principles of Optics, 7th ed, Cambridge University, 2019. Fowles GR, Introduction to Modern Optics, 2nd ed, Dover, 1989. Guenther BD, Modern Optics, 2nd ed, Oxford University, 2015. Hecht E, Optics, 5th ed, Pearson, 2017. Iizuka K, Engineering Optics, 4th ed, Springer, 2019. Jenkins FA, White HE, Fundamentals of Optics, 4th ed, McGraw Hill, 2001. Lipson A, Lipson SG, Lipson H, Optical Physics, 4th ed, Cambridge University, 2010. Shiell R, McNab I, Pedrottis' Introduction to Optics, 4th ed, Cambridge University, 2024. Smith WJ, Modern Optical Engineering: The Design of Optical Systems, 4th ed, McGraw Hill, 2008. Sommerfeld A, Optics, Academic, 1954. Specialized Agrawal GP, Fiber-Optic Communication Systems, 5th ed, Wiley, 2021. Agrawal GP, Nonlinear Fiber Optics, 6th ed, Elsevier, 2019. Boyd RW, Nonlinear Optics, 4th ed, Elsevier, 2020. Goodman JW, Introduction to Fourier Optics, 4th ed, WH Freeman, 2017. Goodman JW, Statistical Optics, 2nd ed, Wiley, 2015. Haus HA, Waves and Fields in Optoelectronics, Prentice Hall, 1984. Luneburg RK, Mathematical Theory of Optics, University of California, 1964. Maier SA, Plasmonics: Fundamentals and Applications, Springer, 2007. Novotny L, Hecht B, Principles of Nano-Optics, 2nd ed, Cambridge University, 2012. Saleh BEA, Teich MC, Fundamentals of Photonics, 3rd ed, Wiley, 2019. Shen YR, Principles of Nonlinear Optics, Wiley, 1984. Yariv A, Yeh P, Photonics: Optical Electronics in Modern Communications, 6th ed, Oxford University, 2007. Light scattering Berne BJ, Pecora R, Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics, Dover, 2000. Bohren CF, Huffman DR, Absorption and Scattering of Light by Small Particles, Wiley, 2004. Kerker M, The Scattering of Light and Other Electromagnetic Radiation, Academic, 1969. Mishchenko MI, Travis LD, Lacis AA, Scattering, Absorption, and Emission of Light by Small Particles, NASA-Cambridge University, 2006. van de Hulst HC, Light Scattering by Small Particles, Dover, 1981. Yeh P, Optical Waves in Layered Media, Wiley, 1988. Magnetism Another branch of electromagnetism that has been developed separately is magnetism, which is about studying magnetic properties of different materials and their interactions with electromagnetic fields. There are also many classic textbooks published in magnetism which some of them are listed here and they could be used in both physics and electrical engineering studies depending on the context. 
Aharoni A, Introduction to the Theory of Ferromagnetism, 2nd ed, Oxford University, 1996. Blundell S, Magnetism in Condensed Matter, Oxford University, 2001. Bozorth RM, Ferromagnetism, Wiley-IEEE, 2003. Chikazumi S, Physics of Ferromagnetism, 2nd ed, Oxford University, 1997. Coey JMD, Magnetism and Magnetic Materials, Cambridge University, 2009. Cullity BD, Graham CD, Introduction to Magnetic Materials, 2nd ed, Wiley-IEEE, 2009. Dunlop DJ, Özdemir Ö, Rock Magnetism: Fundamentals and Frontiers, Cambridge University, 1997. Jiles D, Introduction to Magnetism and Magnetic Materials, 3rd ed, CRC, 2016. Krishnan KM, Fundamentals and Applications of Magnetic Materials, Oxford University, 2016. Morrish AH, The Physical Principles of Magnetism, Wiley-IEEE, 2001. O'handley RC, Modern Magnetic Materials: Principles and Applications, Wiley, 2000. Spaldin NA, Magnetic Materials: Fundamentals and Applications, 2nd ed, Cambridge University, 2010. Magnetohydrodynamics Magnetohydrodynamics is an interdisciplinary branch of physics that uses continuum mechanics to describe the interaction of electromagnetic fields with electrically conductive fluids. It combines classical electromagnetism with fluid mechanics by coupling the Maxwell equations with the Navier-Stokes equations. This relatively new branch of physics was first developed by Hannes Alfvén in a 1942 paper published in Nature titled Existence of Electromagnetic-Hydrodynamic Waves. In 1950 Alfvén published a textbook titled Cosmical Electrodynamics, which is considered the seminal work in the field of magnetohydrodynamics. Two fields closely related to traditional magnetohydrodynamics are electrohydrodynamics and ferrohydrodynamics: electrohydrodynamics deals with the interaction of electromagnetic fields with weakly conductive fluids, and ferrohydrodynamics with their interaction with magnetic fluids. Today magnetohydrodynamics and its related fields have many applications in plasma physics, electrical engineering, mechanical engineering, astrophysics, geophysics and many other scientific branches. Here is a list of some important textbooks in different areas of electro-magneto-ferro-hydrodynamics. Alfvén H, Fälthammar CG, Cosmical Electrodynamics: Fundamental Principles, 2nd ed, Oxford University, 1963. Biskamp D, Magnetohydrodynamic Turbulence, Cambridge University, 2003. Biskamp D, Nonlinear Magnetohydrodynamics, Cambridge University, 1993. Blums E, Cebers A, Maiorov MM, Magnetic Fluids, De Gruyter, 1996. Castellanos A, (Ed), Electrohydrodynamics, Springer, 1998. Cowling TG, Magnetohydrodynamics, 2nd ed, Adam Hilger, 1976. Davidson PA, Introduction to Magnetohydrodynamics, 2nd ed, Cambridge University, 2017. Moreau R, Magnetohydrodynamics, Springer, 1990. Priest E, Magnetohydrodynamics of the Sun, Cambridge University, 2014. Priest E, Forbes T, Magnetic Reconnection: MHD Theory and Applications, Cambridge University, 2000. Roberts PH, An Introduction to Magnetohydrodynamics, Elsevier, 1967. Rosensweig RE, Ferrohydrodynamics, Dover, 2014. Sutton GW, Sherman A, Engineering Magnetohydrodynamics, Dover, 2006. Historical There are many important books in electromagnetism that are generally considered historical classics; some of them are listed here. Abraham M, Becker R, The Classical Theory of Electricity and Magnetism, 8th ed, Blackie & Son, 1932. Green G, An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, T Wheelhouse, 1828.
Heaviside O, Electromagnetic Theory, 3rd ed, 3 vols, The Electrician, 1893, 1899, 1912. Hertz H, Electric Waves: Being Researches on the Propagation of Electric Action with Finite Velocity through Space, Macmillan, 1893. Jeans JH, The Mathematical Theory of Electricity and Magnetism, 5th ed, Cambridge University, 1927. Macdonald HM, Electric Waves, Cambridge University, 1902. Maxwell JC, A Treatise on Electricity and Magnetism, 3rd ed, 2 vols, Clarendon, 1891. Planck M, Theory of Electricity and Magnetism, 2nd ed, Macmillan, 1932. Schott GA, Electromagnetic Radiation and the Mechanical Reactions Arising from It, Cambridge University, 1912. Thomson JJ, Elements of the Mathematical Theory of Electricity and Magnetism, 4th ed, Cambridge University, 1909. Whittaker ET, A History of the Theories of Aether and Electricity, 2nd ed, 2 vols, Thomas Nelson, 1951. See also Maxwell's equations Classical electromagnetism and special relativity History of electromagnetism List of textbooks on classical mechanics and quantum mechanics List of textbooks in thermodynamics and statistical mechanics List of textbooks in general relativity List of textbooks in mathematical physics List of important publications in physics Notes References Electrodynamics Electromagnetism Equations of physics Lists of science textbooks Physics-related lists
List of textbooks in electromagnetism
Physics,Mathematics
5,921
22,633,468
https://en.wikipedia.org/wiki/HAT-P-12
HAT-P-12 is a magnitude 13 low-metallicity K dwarf star approximately 463 light years away in the constellation Canes Venatici, which hosts one known exoplanet. Nomenclature The designation HAT-P-12 indicates that this was the 12th star found to have a planet by the HATNet Project. In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Hungary, were announced in June 2023. HAT-P-12 is named Komondor and its planet is named Puli, after the Hungarian Komondor and Puli dog breeds. Planetary system In 2009 an exoplanet, HAT-P-12b, was discovered by the HATNet Project orbiting this star. The planet was discovered using the transit method and confirmed by follow up radial velocity measurements. Transit-timing variations suggest the possible presence of additional non-transiting planets in the system. See also List of extrasolar planets References External links K-type main-sequence stars Canes Venatici Planetary systems with one confirmed planet Planetary transit variables J13573347+4329367 Komondor
HAT-P-12
Astronomy
251
52,774,129
https://en.wikipedia.org/wiki/WR%201
WR 1 is a Wolf-Rayet star located around 10,300 light years away from Earth in the constellation of Cassiopeia. It is only slightly more than twice the size of the sun, but due to a temperature over 100,000 K it is over 758,000 times as luminous as the sun. Although WR 1 has been recognised as a Wolf-Rayet star since the 19th century, the WR 1 designation does not indicate that it was the first to be discovered. Ordered by right ascension, WR 1 is the first star in the Seventh Catalogue of galactic Wolf-Rayet stars. WR 1 is a member of the nitrogen sequence of WR stars and has a spectrum with HeII lines much stronger than HeI lines, and NV emission more than twice the strength of NIII, leading to the assignment of a WN4 spectral type. The spectrum has particularly broad HeII lines, leading to the equivalent classifications of WN4-b (for broad) or WN4-s (for strong). The spectrum also includes CIV and NIV, but no hydrogen lines at all, indicating that WR 1 has already expelled all of its hydrogen through its powerful stellar wind. In 1986, Anthony F. J. Moffat and Michael M. Shara announced their discovery that WR 1 is a variable star. It was given its variable star designation, V863 Cassiopeiae, in 2001. The total amplitude of the variations is only 0.09 magnitudes at visual wavelengths. The variations are well-defined with a period of 16.9 days, but the light curve is not sinusoidal and its shape may vary. The variations have been ascribed to a dense asymmetric stellar wind and co-rotating interacting regions in ejected material. It has been suggested that the variability and an infrared excess could be due to a cool companion, but WR 1 is now considered to be a single star. The WN-b subclass of Wolf-Rayet stars is generally thought to consist entirely of single stars, in contrast with the WN-A subclass, whose members have narrow emission on a stronger continuum and are thought to be binary systems with a more conventional hot luminous star. WR 1 is a possible member of the Cassiopeia OB7 association at a distance of around , although its Gaia parallax suggests it is more distant. Interstellar extinction is calculated to be 2.1 magnitudes, and at the bolometric luminosity would be . A temperature of is derived from fitting the spectrum, giving a radius of . References Cassiopeia (constellation) Wolf–Rayet stars Cassiopeiae, V863 Durchmusterung objects 004004 003415 TIC objects
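The final sentence above describes deriving a radius from the fitted temperature and the bolometric luminosity; the connection is the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T/T☉)⁴. A minimal illustrative sketch, using only the round figures quoted in the lead (over 758,000 L☉ at over 100,000 K) rather than the fitted values, which are not reproduced in this excerpt:

```python
# Stefan-Boltzmann relation: radius implied by a bolometric luminosity and an
# effective temperature, in solar units. Inputs are the round values quoted in the lead.
T_SUN = 5772.0  # K, nominal solar effective temperature

def radius_solar(luminosity_solar, temperature_k):
    return luminosity_solar ** 0.5 * (T_SUN / temperature_k) ** 2

print(radius_solar(758_000, 100_000))  # ~2.9 solar radii; the actual fitted temperature
                                       # is somewhat higher than 100,000 K, which lowers
                                       # the implied radius toward "slightly more than
                                       # twice" the Sun's size as quoted above.
```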
WR 1
Astronomy
555
78,082,094
https://en.wikipedia.org/wiki/NGC%205719
NGC 5719 is an intermediate or barred spiral galaxy located in the constellation Virgo. It is located at a distance of 94 million light years from Earth. It was first discovered by William Herschel in April 1787, and later observed by John Herschel in April 1828 and by George Phillips Bond in March 1853, whose observation led to the object also being listed as NGC 5658 in the New General Catalogue. The luminosity class of NGC 5719 is I-II and it has a broad HI line. Additionally, it is a narrow line active galaxy (NLAGN). In the far infrared (40–400 μm), the luminosity of NGC 5719 is 1.70 × 10^10 L☉ (10^10.23 L☉), while its total infrared luminosity (over the 8–1000 μm range) is 2.24 × 10^10 L☉ (10^10.35 L☉). Characteristics NGC 5719 is classified as an Sab galaxy seen almost edge-on. It is interacting with a nearby face-on Sbc companion, NGC 5713. The dust lane of the galaxy is tilted, as well as significantly bent and inclined to its major axis. The galaxy has two HI tidal bridges which loop around it and connect with NGC 5713, and two HI tidal tails have also been detected leaving NGC 5713. Inside the disk of NGC 5719, ionized and neutral hydrogen are present, both counter-rotating with respect to the main stellar disk. Kinematic measurements of the two counter-rotating stellar disks and the ionized-gas disk show that they extend to about 40 arcsec (4.3 kpc) from the center of NGC 5719. NGC 5746 group NGC 5719 is part of the NGC 5746 group according to A.M. Garcia. In this galaxy group there are 31 members including NGC 5636, NGC 5638, NGC 5668, NGC 5690, NGC 5691, NGC 5692, NGC 5701, NGC 5705, NGC 5713, NGC 5725, NGC 5740, NGC 5746, NGC 5750, IC 1022, IC 1024 and IC 1048. This group is in turn part of the Virgo III cluster, one of the clusters in the Virgo Supercluster. References 5719 Virgo (constellation) Astronomical objects discovered in 1787 Discoveries by William Herschel 14383-0006 +00-37-024 052455 09462 Intermediate spiral galaxies
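The infrared luminosities above are quoted both in linear form and as base-10 logarithms; a quick illustrative check confirms that the two forms agree:

```python
# Check that the logarithmic and linear infrared luminosities quoted above agree.
print(10 ** 10.23)   # ~1.70e10 solar luminosities (far infrared, 40-400 micron)
print(10 ** 10.35)   # ~2.24e10 solar luminosities (total infrared, 8-1000 micron)
```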
NGC 5719
Astronomy
511
41,295,500
https://en.wikipedia.org/wiki/C16H13N3O3
The molecular formula C16H13N3O3 (molar mass: 295.29 g/mol, exact mass: 295.0957 u) may refer to: Nimetazepam Mebendazole (MBZ) Molecular formulas
C16H13N3O3
Physics,Chemistry
71
68,433,977
https://en.wikipedia.org/wiki/V723%20Monocerotis
V723 Monocerotis is a variable star in the constellation Monoceros. It was proposed in 2021 to be a binary system including a lower mass gap black hole candidate nicknamed "The Unicorn". Located 1,500 light years from Earth, it would be the closest black hole to our planet, and among the smallest ever found. Located in the Monoceros constellation, V723 Monocerotis is an eighth-magnitude ellipsoidal variable yellow giant star roughly the mass of the Sun, but 25 times its radius. The accompanying black hole was proposed to have a mass 3 times the mass of the Sun, corresponding to a Schwarzschild radius of 9 kilometers. Follow-up work in 2022 argued that V723 Monocerotis does not contain a black hole, but is a mass-transfer binary containing a red giant and a subgiant star that has been stripped of much of its mass. See also Stellar black hole List of nearest known black holes References Further reading Monoceros G-type bright giants Binary stars Monocerotis, V723 030891 045762
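The quoted figure of about 9 kilometers follows directly from the Schwarzschild radius formula r_s = 2GM/c². A minimal illustrative check using the 3-solar-mass value given above:

```python
# Schwarzschild radius r_s = 2 G M / c^2 for a ~3 solar-mass compact object.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

r_s = 2 * G * (3 * M_SUN) / c ** 2
print(r_s / 1e3)   # ~8.9 km, consistent with the ~9 km quoted above
```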
V723 Monocerotis
Astronomy
233
48,661,480
https://en.wikipedia.org/wiki/Extensible%20Device%20Metadata
The Extensible Device Metadata (XDM) specification is an open file format for embedding device-related metadata in JPEG and other common image files without breaking compatibility with ordinary image viewers. The metadata types include: depth map, camera pose, point cloud, lens model, image reliability data, and identifying info about the hardware components. This metadata can be used, for instance, to create depth effects such as a bokeh filter, recreate the exact location and position of the camera when the picture was taken, or create 3D data models of environments or objects. The format uses XML and is based on the XMP standard. It can support multiple "cameras" (image sources and types) in a single image file, and each can include data about its position and orientation relative to the primary camera. A camera data structure may include an image, depth map, etc. The XDM 1.0 documentation uses JPEG as the basic model, but states that the concepts generally apply to other image-file types supported by XMP, including PNG, TIFF, and GIF. The XDM specification is developed and maintained by a working group that includes engineers from Intel and Google. The version 1.01 specification is posted at the website xdm.org; an earlier 1.0 version was posted at the Intel website in late 2015. XDM builds upon the Depthmap Metadata specification, introduced in 2014 and used in commercial applications including Google Lens Blur and Intel RealSense Depth Enabled Photography (DEP). That original specification was designed only for depth-photography use cases. Due to changes and expansions of the data structure, and the use of different namespaces, the two standards are not compatible. Existing applications that used that older standard will not work with XDM without modifications. See also Extensible Metadata Platform References External links XDM 1.01 beta documentation XDM 1.0 beta documentation Adobe XMP Main Page XMP Specification Depthmap Metadata specification Digital photography Metadata Computer vision
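Because XDM is carried inside a standard XMP packet, the XML described above can be located in a JPEG file without any XDM-specific tooling. The sketch below is a rough illustration using only the Python standard library; it relies on the standard XMP conventions (the APP1 payload beginning with the identifier "http://ns.adobe.com/xap/1.0/" and the packet being wrapped in an <x:xmpmeta> element) and deliberately does not assume any particular XDM namespace or field names, since those are defined by the XDM specification itself. The file name in the example is hypothetical.

```python
# Rough sketch: locate the XMP packet (which would carry XDM metadata) inside a JPEG.
# Assumes only standard XMP conventions; XDM-specific namespaces/fields are not assumed.
# Note: this simple scan does not handle the multi-segment "Extended XMP" case.
import xml.etree.ElementTree as ET

XMP_ID = b"http://ns.adobe.com/xap/1.0/\x00"

def extract_xmp_xml(jpeg_path):
    data = open(jpeg_path, "rb").read()
    pos = data.find(XMP_ID)
    if pos == -1:
        return None                              # no XMP packet found
    start = data.find(b"<x:xmpmeta", pos)
    end = data.find(b"</x:xmpmeta>", start)
    if start == -1 or end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

# Example (hypothetical file name): list the XML tags present in the packet; for an
# XDM-enabled image these would include elements from the XDM "camera" namespaces.
xmp = extract_xmp_xml("photo_with_depth.jpg")
if xmp:
    root = ET.fromstring(xmp)
    print({elem.tag for elem in root.iter()})
```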
Extensible Device Metadata
Technology,Engineering
410
1,135,199
https://en.wikipedia.org/wiki/Stroboscopic%20effect
The stroboscopic effect is a visual phenomenon caused by aliasing that occurs when continuous rotational or other cyclic motion is represented by a series of short or instantaneous samples (as opposed to a continuous view) at a sampling rate close to the period of the motion. It accounts for the "wagon-wheel effect", so-called because in video, spoked wheels (such as on horse-drawn wagons) sometimes appear to be turning backwards. A strobe fountain, a stream of water droplets falling at regular intervals lit with a strobe light, is an example of the stroboscopic effect being applied to a cyclic motion that is not rotational. When viewed under normal light, this is a normal water fountain. When viewed under a strobe light with its frequency tuned to the rate at which the droplets fall, the droplets appear to be suspended in mid-air. Adjusting the strobe frequency can make the droplets seemingly move slowly up or down. Depending upon the frequency of illumination there are different names for the visual effect. Up to about 80 Hertz or the flicker fusion threshold it is called visible flicker. From about 80 Hertz to 2000 Hertz it is called the stroboscopic effect (this article). Overlapping in frequency, but from 80 Hertz up to about 6500 Hertz a third effect exists called the phantom array effect or the ghosting effect, an optical phenomenon caused by rapid eye movements (saccades) of the observer. Simon Stampfer, who coined the term in his 1833 patent application for his stroboscopische Scheiben (better known as the "phenakistiscope"), explained how the illusion of motion occurs when during unnoticed regular and very short interruptions of light, one figure gets replaced by a similar figure in a slightly different position. Any series of figures can thus be manipulated to show movements in any desired direction. Explanation Consider the stroboscope as used in mechanical analysis. This may be a "strobe light" that is fired at an adjustable rate. For example, an object is rotating at 60 revolutions per second: if it is viewed with a series of short flashes at 60 times per second, each flash illuminates the object at the same position in its rotational cycle, so it appears that the object is stationary. Furthermore, at a frequency of 60 flashes per second, persistence of vision smooths out the sequence of flashes so that the perceived image is continuous. If the same rotating object is viewed at 61 flashes per second, each flash will illuminate it at a slightly earlier part of its rotational cycle. Sixty-one flashes will occur before the object is seen in the same position again, and the series of images will be perceived as if it is rotating backwards once per second. The same effect occurs if the object is viewed at 59 flashes per second, except that each flash illuminates it a little later in its rotational cycle and so, the object will seem to be rotating forwards. The same could be applied at other frequencies like the 50 Hz characteristic of electric distribution grids of most of countries in the world. In the case of motion pictures, action is captured as a rapid series of still images and the same stroboscopic effect can occur. Audio conversion from light patterns The stroboscopic effect also plays a role in audio playback. Compact discs rely on strobing reflections of the laser from the surface of the disc in order to be processed (it is also used for computer data). DVDs and Blu-ray Discs have similar functions. The stroboscopic effect also plays a role for laser microphones. 
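The 60-versus-61-flashes-per-second reasoning in the Explanation section above is ordinary aliasing, and the apparent rotation rate can be computed directly: the rotation accumulated between samples is folded into the range of minus one-half to plus one-half of the object's symmetry period. A minimal illustrative sketch (function and variable names are assumptions for this example) that reproduces the cases discussed above, as well as the spoked-wheel case treated in the next section:

```python
# Apparent rotation rate of a spinning object sampled by flashes (or film frames).
# The advance per sample is folded into [-0.5, 0.5) of the object's symmetry period,
# which is the aliasing described in the text.

def apparent_rate(true_rev_per_s, sample_rate_hz, fold_symmetry=1):
    """Apparent revolutions per second; negative means it appears to turn backwards.

    fold_symmetry is the number of identical features (e.g. 12 spokes), since a
    rotation by one spoke spacing is indistinguishable from no rotation at all.
    """
    period = 1.0 / fold_symmetry                  # indistinguishable rotation step (rev)
    advance = true_rev_per_s / sample_rate_hz     # rotation between samples (rev)
    folded = (advance + period / 2) % period - period / 2
    return folded * sample_rate_hz

print(apparent_rate(60, 60))   #  0.0 -> appears stationary
print(apparent_rate(60, 61))   # -1.0 -> appears to turn backwards once per second
print(apparent_rate(60, 59))   # +1.0 -> appears to turn forwards once per second
print(apparent_rate(2, 24, fold_symmetry=12))  # ~0 -> 12-spoke wheel at 2 rev/s looks frozen at 24 fps
```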
Wagon-wheel effect Motion-picture cameras conventionally film at 24 frames per second. Although the wheels of a vehicle are not likely to be turning at 24 revolutions per second (as that would be extremely fast), suppose each wheel has 12 spokes and rotates at only two revolutions per second. Filmed at 24 frames per second, the spokes in each frame will appear in exactly the same position. Hence, the wheel will be perceived to be stationary. In fact, each photographically captured spoke in any one position will be a different actual spoke in each successive frame, but since the spokes are close to identical in shape and color, no difference will be perceived. Thus, as long as the number of times the wheel rotates per second is a factor of 24 and 12, the wheel will appear to be stationary. If the wheel rotates a little more slowly than two revolutions per second, the position of the spokes is seen to fall a little further behind in each successive frame and therefore, the wheel will seem to be turning backwards. Beneficial effects Stroboscopic principles, and their ability to create an illusion of motion, underlie the theory behind animation, film, and other moving pictures. In some special applications, stroboscopic pulsations have benefits. For instance, a stroboscope is tool that produces short repetitive flashes of light that can be used for measurement of movement frequencies or for analysis or timing of moving objects. An automotive timing light is a specialized stroboscope used to manually set the ignition timing of an internal combustion engine. Stroboscopic visual training (SVT) is a recent tool aimed at improving visual and perceptual performance of sporters by executing activities under conditions of modulated lighting or intermittent vision. Unwanted effects in common lighting Stroboscopic effect is one of the particular temporal light artefacts. In common lighting applications, the stroboscopic effect is an unwanted effect which may become visible if a person is looking at a moving or rotating object which is illuminated by a time-modulated light source. The temporal light modulation may come from fluctuations of the light source itself or may be due to the application of certain dimming or light level regulation technologies. Another cause of light modulations may be lamps with unfiltered pulse-width modulation type external dimmers. Whether this is so may be tested with a rotating fidget spinner. Effects Various scientific committees have assessed the potential health, performance and safety-related aspects resulting from temporal light modulations (TLMs) including stroboscopic effect. Adverse effects in common lighting application areas include annoyance, reduced task performance, visual fatigue and headache. The visibility aspects of stroboscopic effect are given in a technical note of CIE, see CIE TN 006:2016 and in the thesis of Perz. Stroboscopic effects may also lead to unsafe situations in workplaces with fast moving or rotating machinery. If the frequency of fast rotating machinery or moving parts coincides with the frequency, or multiples of the frequency, of the light modulation, the machinery can appear to be stationary, or to move with another speed, potentially leading to hazardous situations. Stroboscopic effects that become visible in rotating objects are also referred to as the wagon-wheel effect. In general, undesired effects in the visual perception of a human observer induced by light intensity fluctuations are called Temporal Light Artefacts (TLAs). 
Further background and explanations on the different TLA phenomena including stroboscopic effect is given in a recorded webinar “Is it all just flicker?”. Possible stroboscopic induced medical issues in some people include migraines & headaches, autistic repetitive behaviors, eye strain & fatigue, reduced visual task performance, anxiety and (rarer) epileptic seizures. Root causes Light emitted from lighting equipment such as luminaires and lamps may vary in strength as function of time, either intentionally or unintentionally. Intentional light variations are applied for warning, signalling (e.g. traffic-light signalling, flashing aviation light signals), entertainment (like stage lighting) with the purpose that flicker is perceived by people. Generally, the light output of lighting equipment may also have residual unintentional light level modulations due to the lighting equipment technology in connection with the type of electrical mains connection. For example, lighting equipment connected to a single-phase mains supply will typically have residual TLMs of twice the mains frequency, either at 100 or 120 Hz (depending on country). The magnitude, shape, periodicity and frequency of the TLMs will depend on many factors such as the type of light source, the electrical mains-supply frequency, the driver or ballast technology and type of light regulation technology applied (e.g. pulse-width modulation). If the modulation frequency is below the flicker fusion threshold and if the magnitude of the TLM exceeds a certain level, then such TLMs are perceived as flicker. Light modulations with modulation frequencies beyond the flicker fusion threshold are not directly perceived, but illusions in the form of stroboscopic effect may become visible (example see Figure 1). LEDs do not intrinsically produce temporal modulations; they just reproduce the input current waveform very well, and any ripple in the current waveform is reproduced by a light ripple because LEDs have a fast response; therefore, compared to conventional lighting technologies (incandescent, fluorescent), for LED lighting more variety in the TLA properties is seen. Many types and topologies of LED driver circuits are applied; simpler electronics and limited or no buffer capacitors often result in larger residual current ripple and thus larger temporal light modulation. Dimming technologies of either externally applied dimmers (incompatible dimmers) or internal light-level regulators may have additional impact on the level of stroboscopic effect; the level of temporal light modulation generally increases at lower light levels. NOTE – The root cause temporal light modulation is often referred to as flicker. Also, stroboscopic effect is often referred to as flicker. Flicker is however a directly visible effect resulting from light modulations at relatively low modulation frequencies, typically below 80 Hz, whereas stroboscopic effect in common (residential) applications may become visible if light modulations are present with modulation frequencies, typically above 80 Hz. Mitigation Generally, undesirable stroboscopic effect can be avoided by reducing the level of TLMs. Design of lighting equipment to reduce the TLMs of the light sources is typically a tradeoff for other product properties and generally increases cost and size, shortens lifetime or lowers energy efficiency. 
For instance, to reduce the modulation in the current driving LEDs, which also reduces the visibility of TLAs, a large storage capacitor, such as an electrolytic capacitor, is required. However, use of such capacitors significantly shortens the lifetime of the LED, as they are found to have the highest failure rate among all components. Another solution to lower the visibility of TLAs is to increase the frequency of the driving current; however, this decreases the efficiency of the system and increases its overall size. Visibility Stroboscopic effect becomes visible if the modulation frequency of the TLM is in the range of 80 Hz to 2000 Hz and if the magnitude of the TLM exceeds a certain level. Other important factors that determine the visibility of TLMs as stroboscopic effect are: The shape of the temporally modulated light waveform (e.g. sinusoidal, rectangular pulse and its duty cycle); The illumination level of the light source; The speed of movement of the moving objects observed; Physiological factors such as age and fatigue. All observer-related influence quantities are stochastic parameters, because not all humans perceive the effect of the same light ripple in the same way. That is why perception of stroboscopic effect is always expressed with a certain probability. For light levels encountered in common applications and for moderate speeds of movement of objects (speeds that can be produced by human movement), an average sensitivity curve has been derived based on perception studies. The average sensitivity curve for sinusoidally modulated light waveforms, also called the stroboscopic effect contrast threshold function, is defined as a function of the modulation frequency f; the contrast threshold function is depicted in Figure 2. Stroboscopic effect becomes visible if the modulation frequency of the TLM is in the region between approximately 10 Hz and 2000 Hz and if the magnitude of the TLM exceeds a certain level. The contrast threshold function shows that at modulation frequencies near 100 Hz, stroboscopic effect will be visible at relatively low magnitudes of modulation. Although stroboscopic effect in theory is also visible in the frequency range below 100 Hz, in practice visibility of flicker will dominate over stroboscopic effect in the frequency range up to 60 Hz. Moreover, large magnitudes of intentional repetitive TLMs with frequencies below 100 Hz are unlikely to occur in practice because residual TLMs generally occur at modulation frequencies that are twice the mains frequency (100 Hz or 120 Hz). Detailed explanations of the visibility of stroboscopic effect and other temporal light artefacts are also given in CIE TN 006:2016 and in a recorded webinar “Is it all just flicker?”. Objective assessment of stroboscopic effect Stroboscopic effect visibility meter For objective assessment of stroboscopic effect the stroboscopic effect visibility measure (SVM) has been developed. The specification of the stroboscopic effect visibility meter and the test method for objective assessment of lighting equipment is published in IEC technical report IEC TR 63158. SVM is calculated using the following summation formula: SVM = (Σm (Cm/Tm)^3.7)^(1/3.7), where Cm is the relative amplitude of the m-th Fourier component (trigonometric Fourier series representation) of the relative illuminance (relative to the DC-level); Tm is the stroboscopic effect contrast threshold function for visibility of stroboscopic effect of a sine wave at the frequency of the m-th Fourier component.
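The SVM summation above can be turned into a short numerical sketch. The threshold function Tm is published in CIE TN 006:2016 and IEC TR 63158 but is not reproduced in this article, so in the sketch below it is passed in as a user-supplied function; everything else (Fourier amplitudes normalised to the DC level, the 3.7-norm combination) follows the definition given above. This is an illustration of the structure of the calculation, not a reference implementation of the standard, and the constant threshold used in the example is a placeholder, not the real curve.

```python
# Sketch of the SVM calculation described above: Fourier components of the relative
# illuminance, each normalised by the contrast threshold at its frequency, combined
# with a Minkowski norm of order 3.7. The threshold function itself is taken from the
# cited standards and must be supplied by the caller (it is NOT reproduced here).
import numpy as np

def svm(illuminance_samples, sample_rate_hz, threshold_fn, norm=3.7):
    x = np.asarray(illuminance_samples, dtype=float)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    dc = spectrum[0].real / len(x)                      # mean light level
    # Relative amplitude C_m of each Fourier component, relative to the DC level.
    c = 2.0 * np.abs(spectrum[1:]) / len(x) / dc
    t = np.array([threshold_fn(f) for f in freqs[1:]])  # T_m from the standard's curve
    return (np.sum((c / t) ** norm)) ** (1.0 / norm)    # SVM = 1 is the visibility threshold

# Example: 5% sinusoidal modulation at 100 Hz, with a purely illustrative constant
# placeholder threshold of 0.04 (the real T_m curve is frequency dependent).
fs = 20000
t_axis = np.arange(fs) / fs
light = 1.0 + 0.05 * np.sin(2 * np.pi * 100 * t_axis)
print(svm(light, fs, threshold_fn=lambda f: 0.04))      # ~1.25 with this placeholder
```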
SVM can be used for objective assessment by a human observer of visible stroboscopic effects of temporal light modulation of lighting equipment in general indoor applications, with typical indoor light levels (> 100 lx) and with moderate movements of an observer or a nearby handled object (< 4 m/s). For assessing unwanted stroboscopic effects in other applications, such as the misperception of rapidly rotating or moving machinery in a workshop for example, other metrics and methods can be required or the assessment can be done by subjective testing (observation). NOTE – Several alternative metrics such as modulation depth, flicker percentage or flicker index are being applied for specifying the stroboscopic effect performance of lighting equipment. None of these metrics are suitable to predict actual human perception because human perception is impacted by modulation depth, modulation frequency, wave shape and if applicable the duty cycle of the TLM. Matlab toolbox A Matlab stroboscopic effect visibility measure toolbox including a function for calculating SVM and some application examples are available on the Matlab Central via the Mathworks Community. Acceptance criterion If the value of SVM equals one, the input modulation of the light waveform produces a stroboscopic effect that is just visible, i.e. at the visibility threshold. This means that an average observer will be able to detect the artefact with a probability of 50%. If the value of the visibility measure is above unity, the effect has a probability of detection of more than 50%. If the value of the visibility measure is smaller than unity, the probability of detection is less than 50%. These visibility thresholds show the average detection of an average human observer in a population. This does not, however, guarantee acceptability. For some less critical applications, the acceptability level of an artefact might be well above the visibility threshold. For other applications, the acceptable levels might be below the visibility threshold. NEMA 77-2017 amongst others gives guidance for acceptance criteria in different applications. Test and measurement applications A typical test setup for stroboscopic effect testing is shown in Figure 3. The stroboscopic effect visibility meter can be applied for different purposes (see IEC TR 63158): Measurement of the intrinsic stroboscopic-effect performance of lighting equipment when supplied with a stable mains voltage; Testing the effect of light regulation of lighting equipment or the effect of an external dimmer (dimmer compatibility). Publication of standards development organisations CIE TN 006:2016: introduces terms, definitions, methodologies and measures for quantification of TLAs including stroboscopic effect. IEC TR 63158:2018: includes the stroboscopic effect visibility meter specification and verification method, and test procedures a.o. for dimmer compatibility. NEMA 77-2017: amongst others, flicker test Methods and guidance for acceptance criteria. Dangers in workplaces Stroboscopic effect may lead to unsafe situations in workplaces with fast moving or rotating machinery. If the frequency of fast rotating machinery or moving parts coincides with the frequency, or multiples of the frequency, of the light modulation, the machinery can appear to be stationary, or to move with another speed, potentially leading to hazardous situations. Because of the illusion that the stroboscopic effect can give to moving machinery, it is advised that single-phase lighting is avoided. 
For example, a factory that is lit from a single-phase supply with basic lighting will have a flicker of 100 or 120 Hz (depending on country, 50 Hz x 2 in Europe, 60 Hz x 2 in US, double the nominal frequency), thus any machinery rotating at multiples of 50 or 60 Hz (3000–3600rpm) may appear to not be turning, increasing the risk of injury to an operator. Solutions include deploying the lighting over a full 3-phase supply, or by using high-frequency controllers that drive the lights at safer frequencies or direct current lighting. The 100/120 Hertz stroboscopic effect in commercial lighting may lead to disruptive issues and non-productive results in workspaces such as hospitals & medical facilities, industrial facilities, offices, schools or video conferencing rooms. See also 3D zoetrope Temporal light artefacts Temporal light effects Flicker (light) Flicker fusion threshold References External links https://www.youtube.com/watch?v=3_vVB9u-07I A clear example of this effect. Interactive Strobe Fountain – lets you adjust the strobe frequency to control the apparent movement of falling droplets. Yutaka Nishiyama (2012), "Mathematics of Fans" (PDF), International Journal of Pure and Applied Mathematics, 78 (5): 669–678. Film and video technology Optical illusions Articles containing video clips
Stroboscopic effect
Physics
3,788
10,443,327
https://en.wikipedia.org/wiki/Medea%20gene
Medea is a gene from the fruit fly Drosophila melanogaster that was one of the first two Smad genes discovered. For both genes, the maternal effect lethality was the basis for the selection of their names. Medea was named for the mythological Greek Medea, who killed her progeny fathered by Jason. Both Medea and Mothers against dpp were identified in a genetic screen for maternal effect mutations that caused lethality of heterozygous decapentaplegic progeny. Because decapentaplegic is a bone morphogenetic protein in the transforming growth factor beta superfamily, identification of the fly Smad genes provided a much-needed clue to understanding the signal transduction pathway for this diverse family of extracellular proteins. Humans, mice, and other vertebrates have a gene with the same function as Medea, called SMAD4. An overview of the biology of Medea is found at The Interactive Fly, and the details of Medea's genetics and molecular biology are curated on FlyBase. Another laboratory used Medea as an acronym to describe a synthetic gene causing maternal effect dominant embryonic arrest. The formal genetic designation for maternal effect dominant embryonic arrest is P{Medea.myd88}; more details are in FlyBase. References Transcription factors Proteins Medea
Medea gene
Chemistry,Biology
271
272,065
https://en.wikipedia.org/wiki/Al-Kindi
Abū Yūsuf Yaʻqūb ibn ʼIsḥāq aṣ-Ṣabbāḥ al-Kindī (; ; ; ) was an Arab Muslim polymath active as a philosopher, mathematician, physician, and music theorist. Al-Kindi was the first of the Islamic peripatetic philosophers, and is hailed as the "father of Arab philosophy". Al-Kindi was born in Kufa and educated in Baghdad. He became a prominent figure in the House of Wisdom, and a number of Abbasid Caliphs appointed him to oversee the translation of Greek scientific and philosophical texts into the Arabic language. This contact with "the philosophy of the ancients" (as Hellenistic philosophy was often referred to by Muslim scholars) had a profound effect on him, as he synthesized, adapted and promoted Hellenistic and Peripatetic philosophy in the Muslim world. He subsequently wrote hundreds of original treatises of his own on a range of subjects ranging from metaphysics, ethics, logic and psychology, to medicine, pharmacology, mathematics, astronomy, astrology and optics, and further afield to more practical topics like perfumes, swords, jewels, glass, dyes, zoology, tides, mirrors, meteorology and earthquakes. In the field of mathematics, al-Kindi played an important role in introducing Hindu numerals to the Islamic world, and their further development into Arabic numerals along with al-Khwarizmi which eventually was adopted by the rest of the world. Al-Kindi was also one of the fathers of cryptography. Building on the work of al-Khalil (717–786), Al-Kindi's book entitled Manuscript on Deciphering Cryptographic Messages gave rise to the birth of cryptanalysis, was the earliest known use of statistical inference, and introduced several new methods of breaking ciphers, notably frequency analysis. He was able to create a scale that would enable doctors to gauge the effectiveness of their medication by combining his knowledge of mathematics and medicine. The central theme underpinning al-Kindi's philosophical writings is the compatibility between philosophy and other "orthodox" Islamic sciences, particularly theology, and many of his works deal with subjects that theology had an immediate interest in. These include the nature of God, the soul and prophetic knowledge. Early life Al-Kindi was born in Kufa to an aristocratic family of the Arabian tribe of the Kinda, descended from the chieftain al-Ash'ath ibn Qays, a contemporary of Muhammad. The family belonged to the most prominent families of the tribal nobility of Kufa in the early Islamic period, until it lost much of its power following the revolt of Abd al-Rahman ibn Muhammad ibn al-Ash'ath. His father Ishaq was the governor of Basra and al-Kindi received his preliminary education there. He later went to complete his studies in Baghdad, where he was patronized by the Abbasid caliphs al-Ma'mun () and al-Mu'tasim (). On account of his learning and aptitude for study, al-Ma'mun appointed him to the House of Wisdom, a recently established center for the translation of Greek philosophical and scientific texts, in Baghdad. He was also well known for his beautiful calligraphy, and at one point was employed as a calligrapher by Caliph al-Mutawakkil (). When al-Ma'mun died, his brother, al-Mu'tasim became caliph. Al-Kindi's position would be enhanced under al-Mu'tasim, who appointed him as a tutor to his son. But on the accession of al-Wathiq (), and especially of al-Mutawakkil, al-Kindi's star waned. 
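The frequency analysis credited to al-Kindi above can be illustrated with a short modern sketch: count how often each letter occurs in a ciphertext and pair the ranking with the known letter frequencies of the underlying language. This is only a schematic illustration of the idea as applied to a simple substitution cipher, not a reconstruction of al-Kindi's own procedure; the sample ciphertext and the frequency ordering are assumptions for the example.

```python
# Toy illustration of frequency analysis, the cipher-breaking idea credited to al-Kindi:
# in a simple substitution cipher, the most common ciphertext letters tend to stand
# for the most common letters of the plaintext language.
from collections import Counter

# Approximate ordering of English letters by frequency (assumption for this example).
ENGLISH_BY_FREQUENCY = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_guess(ciphertext):
    """Guess a substitution by pairing ciphertext letters, most frequent first,
    with the expected frequency ordering of the target language."""
    counts = Counter(ch for ch in ciphertext.upper() if ch.isalpha())
    ranked = [letter for letter, _ in counts.most_common()]
    return dict(zip(ranked, ENGLISH_BY_FREQUENCY))

# Hypothetical ciphertext (a Caesar-style shift of an English sentence, for illustration):
sample = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"
print(frequency_guess(sample))   # short samples give only a rough starting point
```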
There are various theories concerning this: some attribute al-Kindi's downfall to scholarly rivalries at the House of Wisdom; others refer to al-Mutawakkil's often violent persecution of unorthodox Muslims (as well as of non-Muslims); at one point al-Kindi was beaten and his library temporarily confiscated. Henry Corbin, an authority on Islamic studies, says that in 873 al-Kindi died "a lonely man" in Baghdad during the reign of al-Mu'tamid. After his death, al-Kindi's philosophical works quickly fell into obscurity; many were lost even to later Islamic scholars and historians. Felix Klein-Franke suggests several reasons for this: aside from the militant orthodoxy of al-Mutawakkil, the Mongols also destroyed countless libraries during their invasion of Persia and Mesopotamia. However, he says the most probable cause of this was that his writings never found popularity amongst subsequent influential philosophers such as al-Farabi and Avicenna, who ultimately overshadowed him. His philosophical career peaked under al-Mu'tasim, to whom al-Kindi dedicated his most famous work, On First Philosophy, and whose son Ahmad was tutored by al-Kindi. Accomplishments According to Arab bibliographer Ibn al-Nadim, al-Kindi wrote at least two hundred and sixty books, contributing heavily to geometry (thirty-two books), medicine and philosophy (twenty-two books each), logic (nine books), and physics (twelve books). Although most of his books have been lost over the centuries, a few have survived in the form of Latin translations by Gerard of Cremona, and others have been rediscovered in Arabic manuscripts; most importantly, twenty-four of his lost works were located in the mid-twentieth century in a Turkish library. Philosophy His greatest contribution to the development of Islamic philosophy was his efforts to make Greek thought both accessible and acceptable to a Muslim audience. Al-Kindi carried out this mission from the House of Wisdom (Bayt al-Hikma), an institute of translation and learning patronized by the Abbasid Caliphs, in Baghdad. As well as translating many important texts, much of what was to become standard Arabic philosophical vocabulary originated with al-Kindi; indeed, if it had not been for him, the work of philosophers like al-Farabi, Avicenna, and al-Ghazali might not have been possible. In his writings, one of al-Kindi's central concerns was to demonstrate the compatibility between philosophy and natural theology on the one hand, and revealed or speculative theology on the other (though in fact he rejected speculative theology). Despite this, he did make clear that he believed revelation was a superior source of knowledge to reason because it guaranteed matters of faith that reason could not uncover. And while his philosophical approach was not always original, and was even considered clumsy by later thinkers (mainly because he was the first philosopher writing in the Arabic language), he successfully incorporated Aristotelian and (especially) neo-Platonist thought into an Islamic philosophical framework. This was an important factor in the introduction and popularization of Greek philosophy in the Muslim intellectual world. Astronomy Al-Kindi took his view of the solar system from Ptolemy, who placed the Earth at the centre of a series of concentric spheres, in which the known heavenly bodies (the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and the stars) are embedded.
In one of his treatises on the subject, he says that these bodies are rational entities, whose circular motion is in obedience to and worship of God. Their role, al-Kindi believes, is to act as instruments for divine providence. He furnishes empirical evidence as proof for this assertion; different seasons are marked by particular arrangements of the planets and stars (most notably the sun); the appearance and manner of people vary according to the arrangement of heavenly bodies situated above their homeland. However, he is ambiguous when it comes to the actual process by which the heavenly bodies affect the material world. One theory he posits in his works is from Aristotle, who conceived that the movement of these bodies causes friction in the sub-lunar region, which stirs up the primary elements of earth, fire, air and water, and these combine to produce everything in the material world. An alternative view found in the treatise On Rays (De radiis) is that the planets exercise their influence in straight lines; but this treatise, written by a Latin author, probably around the middle of the 13th century, is apocryphal. In each of these, two fundamentally different views of physical interaction are presented: action by contact and action at a distance. This dichotomy is duplicated in his writings on optics. Some of the notable astrological works by al-Kindi include:
The Book of the Judgement of the Stars, including The Forty Chapters, on questions and elections.
On the Stellar Rays (spurious)
Several epistles on weather and meteorology, including De mutatione temporum ("On the Changing of the Weather").
Treatise on the Judgement of Eclipses.
Treatise on the Dominion of the Arabs and its Duration (used to predict the end of Arab rule).
The Choices of Days (on elections).
On the Revolutions of the Years (on mundane astrology and natal revolutions).
De Signis Astronomiae Applicitis ad Medicinam ('On the Signs of Astronomy as applied to Medicine')
Treatise on the Spirituality of the Planets.
Optics Al-Kindi was the first major writer on optics since antiquity. Roger Bacon placed him in the first rank after Ptolemy as a writer on the topic. In the apocryphal work known as De radiis stellarum, the theory is developed "that everything in the world ... emits rays in every direction, which fill the whole world." This theory of the active power of rays had an influence on later scholars such as Ibn al-Haytham, Robert Grosseteste and Roger Bacon. Two major theories of optics appear in the writings of al-Kindi: Aristotelian and Euclidean. Aristotle had believed that in order for the eye to perceive an object, both the eye and the object must be in contact with a transparent medium (such as air) that is filled with light. When these criteria are met, the "sensible form" of the object is transmitted through the medium to the eye. On the other hand, Euclid proposed that vision occurred in straight lines when "rays" from the eye reached an illuminated object and were reflected back. As with his theories on astrology, the dichotomy of contact and distance is present in al-Kindi's writings on this subject as well. The factor which al-Kindi relied upon to determine which of these theories was most correct was how adequately each one explained the experience of seeing: for example, Aristotle's theory was unable to account for why the angle at which an individual sees an object affects his perception of it, such as why a circle viewed from the side will appear as a line.
According to Aristotle, the complete sensible form of a circle should be transmitted to the eye and it should appear as a circle. On the other hand, Euclidean optics provided a geometric model that was able to account for this, as well as the length of shadows and reflections in mirrors, because Euclid believed that the visual "rays" could only travel in straight lines. For this reason, al-Kindi considered the latter preponderant. Al-Kindi's primary optical treatise "De aspectibus" was later translated into Latin. This work, along with Alhazen's Optics and the Arabic translations of Ptolemy's and Euclid's Optics, was among the main Arabic texts to affect the development of optical investigations in Europe, most notably those of Robert Grosseteste, Vitello and Roger Bacon. Medicine There are more than thirty treatises attributed to al-Kindi in the field of medicine, in which he was chiefly influenced by the ideas of Galen. His most important work in this field is probably De Gradibus, in which he demonstrates the application of mathematics to medicine, particularly in the field of pharmacology. For example, he developed a mathematical scale to quantify the strength of a drug, and a system (based on the phases of the moon) that would allow a doctor to determine in advance the most critical days of a patient's illness. According to Plinio Prioreschi, this was the first attempt at serious quantification in medicine. Chemistry Al-Kindi denied the possibility of transmuting base metals into precious metals such as gold and silver, a position that was later attacked by the Persian alchemist and physician Abu Bakr al-Razi. One work attributed to al-Kindi, variously known as the Kitāb al-Taraffuq fī l-ʿiṭr ("The Book of Gentleness on Perfume") or the Kitāb Kīmiyāʾ al-ʿiṭr wa-l-taṣʿīdāt ("The Book of the Chemistry of Perfume and Distillations"), contains one of the earliest known references to the distillation of wine. The work also describes the distillation process for extracting rose oils, and provides recipes for 107 different kinds of perfumes. Mathematics Al-Kindi authored works on a number of important mathematical subjects, including arithmetic, geometry, the Hindu numbers, the harmony of numbers, lines and multiplication with numbers, relative quantities, measuring proportion and time, and numerical procedures and cancellation. He also wrote four volumes, On the Use of the Hindu Numerals (Kitāb fī Isti`māl al-'A`dād al-Hindīyyah), which contributed greatly to the diffusion of the Hindu system of numeration in the Middle East and the West. In geometry, among other works, he wrote on the theory of parallels. Also related to geometry were two works on optics. One of the ways in which he made use of mathematics as a philosopher was to attempt to disprove the eternity of the world by demonstrating that actual infinity is a mathematical and logical absurdity. Cryptography Al-Kindi is credited with developing a method whereby variations in the frequency of the occurrence of letters could be analyzed and exploited to break ciphers (i.e. cryptanalysis by frequency analysis). His book on this topic is Risāla fī Istikhrāj al-Kutub al-Mu'ammāh (رسالة في استخراج الكتب المعماة; literally: On Extracting Obscured Correspondence, more contemporarily: On Decrypting Encrypted Correspondence).
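The following is a minimal, hypothetical Python sketch of the frequency-analysis procedure described above, applied to a simple substitution cipher; the reference letter ordering, the sample ciphertext, and the function names are illustrative assumptions and are not drawn from al-Kindi's treatise.

from collections import Counter
import string

# Letters of the reference language ordered from most to least frequent.
# This ordering (for English text) is an assumption used only for illustration.
REFERENCE_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def rank_letters(text):
    """Return the letters of text ordered from most to least frequent."""
    counts = Counter(ch for ch in text.lower() if ch in string.ascii_lowercase)
    ranked = [letter for letter, _ in counts.most_common()]
    # Append letters absent from the text so the substitution stays total.
    ranked += [ch for ch in string.ascii_lowercase if ch not in ranked]
    return "".join(ranked)

def guess_plaintext(ciphertext):
    """Align ranks: most common cipher symbol -> most common reference letter, and so on."""
    mapping = dict(zip(rank_letters(ciphertext), REFERENCE_ORDER))
    return "".join(mapping.get(ch, ch) for ch in ciphertext.lower())

# A hypothetical ciphertext; real use requires a much longer sample.
print(guess_plaintext("gsv jfrxp yildm ulc qfnkh levi gsv ozab wlt"))

With only a short sample the rank alignment is unreliable, which is why the procedure, as the treatise quoted below puts it, calls for a comparison text "long enough to fill one sheet or so".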
In his treatise on cryptanalysis, he wrote:One way to solve an encrypted message, if we know its language, is to find a different plaintext of the same language long enough to fill one sheet or so, and then we count the occurrences of each letter. We call the most frequently occurring letter the "first", the next most occurring letter the "second", the following most occurring letter the "third", and so on, until we account for all the different letters in the plaintext sample. Then we look at the cipher text we want to solve and we also classify its symbols. We find the most occurring symbol and change it to the form of the "first" letter of the plaintext sample, the next most common symbol is changed to the form of the "second" letter, and the following most common symbol is changed to the form of the "third" letter, and so on, until we account for all symbols of the cryptogram we want to solve. Al-Kindi was influenced by the work of al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Meteorology In a treatise entitled as Risala fi l-Illa al-Failali l-Madd wa l-Fazr (Treatise on the Efficient Cause of the Flow and Ebb), al-Kindi presents a theory on tides which "depends on the changes which take place in bodies owing to the rise and fall of temperature." In order to support his argument, he gave a description of a scientific experiment as follows: One can also observe by the senses... how in consequence of extreme cold air changes into water. To do this, one takes a glass bottle, fills it completely with snow, and closes its end carefully. Then one determines its weight by weighing. One places it in a container... which has previously been weighed. On the surface of the bottle the air changes into water, and appears upon it like the drops on large porous pitchers, so that a considerable amount of water gradually collects inside the container. One then weighs the bottle, the water and the container, and finds their weight greater than previously, which proves the change. [...] Some foolish persons are of opinion that the snow exudes through the glass. This is impossible. There is no process by which water or snow can be made to pass through glass. In explaining the natural cause of the wind, and the difference for its directions based on time and location, he wrote: When the sun is in its northern declination northerly places will heat up and it will be cold towards the south. Then the northern air will expand in a southerly direction because of the heat due to the contraction of the southern air. Therefore most of the summer winds are merits and most of the winter winds are not. Music theory Al-Kindi was the first great theoretician of music in the Arab-Islamic world. Al-Kindi was the first to use musical notation, a music writing system, to write down music. He named his musical notes using literal syllables instead of letters, a process called solmization. He is known to have written fifteen treatises on music theory, but only five have survived. He added a fifth string to the 'ud. His works include discussions on the therapeutic value of music and what he regarded as "cosmological connections" of music. Philosophical thought Influences While Muslim intellectuals were already acquainted with Greek philosophy (especially logic), al-Kindi is credited with being the first real Muslim philosopher. 
His own thought was largely influenced by the Neo-Platonic philosophy of Proclus, Plotinus and John Philoponus, amongst others, although he does appear to have borrowed ideas from other Hellenistic schools as well. He makes many references to Aristotle in his writings, but these are often unwittingly re-interpreted in a Neo-Platonic framework. This trend is most obvious in areas such as metaphysics and the nature of God as a causal entity. Experts have suggested that he was influenced by the Mutazilite school of theology, because of the mutual concern both he and they demonstrated for maintaining the singularity (tawhid) of God. A minority view however holds that such agreements are considered incidental. Metaphysics According to al-Kindi, the goal of metaphysics is knowledge of God. For this reason, he does not make a clear distinction between philosophy and theology, because he believes they are both concerned with the same subject. Later philosophers, particularly al-Farabi and Avicenna, would strongly disagree with him on this issue, by saying that metaphysics is actually concerned with being qua being, and as such, the nature of God is purely incidental. Central to al-Kindi's understanding of metaphysics is God's absolute oneness, which he considers an attribute uniquely associated with God (and therefore not shared with anything else). By this he means that while we may think of any existent thing as being "one", it is in fact both "one" and many". For example, he says that while a body is one, it is also composed of many different parts. A person might say "I see an elephant", by which he means "I see one elephant", but the term 'elephant' refers to a species of animal that contains many. Therefore, only God is absolutely one, both in being and in concept, lacking any multiplicity whatsoever. Some feel this understanding entails a very rigorous negative theology because it implies that any description which can be predicated to anything else, cannot be said about God. In addition to absolute oneness, al-Kindi also described God as the Creator. This means that He acts as both a final and efficient cause. Unlike later Muslim Neo-Platonic philosophers (who asserted that the universe existed as a result of God's existence "overflowing", which is a passive act), al-Kindi conceived of God as an active agent. In fact, of God as the agent, because all other intermediary agencies are contingent upon Him. The key idea here is that God "acts" through created intermediaries, which in turn "act" on one another – through a chain of cause and effect – to produce the desired result. In reality, these intermediary agents do not "act" at all, they are merely a conduit for God's own action. This is especially significant in the development of Islamic philosophy, as it portrayed the "first cause" and "unmoved mover" of Aristotelian philosophy as compatible with the concept of God according to Islamic revelation. Epistemology Al-Kindi theorized that there was a separate, incorporeal and universal intellect (known as the "First Intellect"). It was the first of God's creation and the intermediary through which all other things came into creation. Aside from its obvious metaphysical importance, it was also crucial to al-Kindi's epistemology, which was influenced by Platonic realism. According to Plato, everything that exists in the material world corresponds to certain universal forms in the heavenly realm. 
These forms are really abstract concepts such as a species, quality or relation, which apply to all physical objects and beings. For example, a red apple has the quality of "redness" derived from the appropriate universal. However, al-Kindi says that human intellects are only potentially able to comprehend these. This potential is actualized by the First Intellect, which is perpetually thinking about all of the universals. He argues that the external agency of this intellect is necessary by saying that human beings cannot arrive at a universal concept merely through perception. In other words, an intellect cannot understand the species of a thing simply by examining one or more of its instances. According to him, this will only yield an inferior "sensible form", and not the universal form which we desire. The universal form can only be attained through contemplation and actualization by the First Intellect. The analogy he provides to explain his theory is that of wood and fire. Wood, he argues, is potentially hot (just as a human is potentially thinking about a universal), and therefore requires something else which is already hot (such as fire) to actualize this. This means that for the human intellect to think about something, the First Intellect must already be thinking about it. Therefore, he says that the First Intellect must always be thinking about everything. Once the human intellect comprehends a universal by this process, it becomes part of the individual's "acquired intellect" and can be thought about whenever he or she wishes. The soul and the afterlife Al-Kindi says that the soul is a simple, immaterial substance, which is related to the material world only because of its faculties which operate through the physical body. To explain the nature of our worldly existence, he (borrowing from Epictetus) compares it to a ship which has, during the course of its ocean voyage, temporarily anchored itself at an island and allowed its passengers to disembark. The implicit warning is that those passengers who linger too long on the island may be left behind when the ship sets sail again. Here, al-Kindi displays a stoic concept, that we must not become attached to material things (represented by the island), as they will invariably be taken away from us (when the ship sets sail again). He then connects this with a Neo-Platonist idea, by saying that our soul can be directed towards the pursuit of desire or the pursuit of intellect; the former will tie it to the body, so that when the body dies, it will also die, but the latter will free it from the body and allow it to survive "in the light of the Creator" in a realm of pure intelligence. The relationship between revelation and philosophy In the view of al-Kindi, prophecy and philosophy were two different routes to arrive at the truth. He contrasts the two positions in four ways. Firstly, while a person must undergo a long period of training and study to become a philosopher, prophecy is bestowed upon someone by God. Secondly, the philosopher must arrive at the truth by his own devices (and with great difficulty), whereas the prophet has the truth revealed to him by God. Thirdly, the understanding of the prophet – being divinely revealed – is clearer and more comprehensive than that of the philosopher. Fourthly, the way in which the prophet is able to express this understanding to the ordinary people is superior. 
Therefore, al-Kindi says the prophet is superior in two fields: the ease and certainty with which he receives the truth, and the way in which he presents it. However, the crucial implication is that the content of the prophet's and the philosopher's knowledge is the same. This, says Adamson, demonstrates how limited the superiority al-Kindi afforded to prophecy was. In addition to this, al-Kindi adopted a naturalistic view of prophetic visions. He argued that, through the faculty of "imagination" as conceived of in Aristotelian philosophy, certain "pure" and well-prepared souls, were able to receive information about future events. Significantly, he does not attribute such visions or dreams to revelation from God, but instead explains that imagination enables human beings to receive the "form" of something without needing to perceive the physical entity to which it refers. Therefore, it would seem to imply that anyone who has purified themselves would be able to receive such visions. It is precisely this idea, amongst other naturalistic explanations of prophetic miracles that al-Ghazali attacks in his Incoherence of the Philosophers. Critics and patrons While al-Kindi appreciated the usefulness of philosophy in answering questions of a religious nature, there would be many Islamic thinkers who were not as enthusiastic about its potential. But it would be incorrect to assume that they opposed philosophy simply because it was a "foreign science". Oliver Leaman, an expert on Islamic philosophy, points out that the objections of notable theologians are rarely directed at philosophy itself, but rather at the conclusions the philosophers arrived at. Even al-Ghazali, who is famous for his critique of the philosophers, was himself an expert in philosophy and logic. And his criticism was that they arrived at theologically erroneous conclusions. The three most serious of these, in his view, were believing in the co-eternity of the universe with God, denying the bodily resurrection, and asserting that God only has knowledge of abstract universals, not of particular things (not all philosophers subscribed to these same views). During his life, al-Kindi was fortunate enough to enjoy the patronage of the pro-Mutazilite Caliphs al-Ma'mun and al-Mu'tasim, which meant he could carry out his philosophical speculations with relative ease. In his own time, al-Kindi would be criticized for extolling the "intellect" as being the most immanent creation in proximity to God, which was commonly held to be the position of the angels. He also engaged in disputations with certain Mutazilites, whom he attacked for their belief in atoms, as not all Mutazilites accepted the belief of atomism. But the real role of al-Kindi in the conflict between philosophers and theologians would be to prepare the ground for debate. His works, says Deborah Black, contained all the seeds of future controversy that would be fully realized in al-Ghazali's Incoherence of the Philosophers. Legacy Al-Kindi was a master of many different areas of thought and was held to be one of the greatest philosophers. His influence in the fields of physics, mathematics, medicine, philosophy, and music were far-reaching and lasted for several centuries. Ibn al-Nadim in his praised al-Kindi and his work stating: The best man of his time, unique in his knowledge of all the ancient sciences. He is called the Philosopher of the Arabs. His books deal with different sciences, such as logic, philosophy, geometry, arithmetic, astronomy, etc. 
We have connected him with the natural philosophers because of his prominence in Science. Al-Kindi's major contribution was his establishment of philosophy in the Islamic world and his efforts to harmonize philosophical investigation with Islamic theology and creed. The philosophical texts which were translated under his supervision would become the standard texts in the Islamic world for centuries to come, even after his influence had been eclipsed by later philosophers. Al-Kindi was also an important figure in medieval Europe. Several of his books were translated into Latin, influencing Western authors like Robert Grosseteste and Roger Bacon. The Italian Renaissance scholar Gerolamo Cardano (1501–1575) considered him one of the twelve greatest minds. In 1986, the Royal Commission for Riyadh City inaugurated the Al Kindi Plaza in the Diplomatic Quarter district of Riyadh, Saudi Arabia. References Bibliography English translations Works about al-Kindi External links (PDF version) Alkindus (Bibliotheca Augustana) Al-Kindi – Famous Muslims Al-Kindi's website – Islamic Philosophy Online – Three texts by Al Kindi in the Islamic Philosophy section Benjamin N. Dyke's translation of Al-Kindi's Forty Chapters with PDF extracts from the Introduction and main text 800s births 873 deaths 9th-century Arab people 9th-century astrologers 9th-century mathematicians 9th-century people from the Abbasid Caliphate 9th-century philosophers 9th-century physicians 9th-century writers Alchemists of the medieval Islamic world Arabic-language commentators on Aristotle Aristotelian philosophers Epistemologists History of cryptography History of medicine History of pharmacy Intellectual history Islamic philosophers Kinda Astrologers of the medieval Islamic world Astronomers from the Abbasid Caliphate Metaphysicians Metaphysics writers Music theorists of the medieval Islamic world Ontologists People from Kufa Philosophers of art Philosophers of education Philosophers of logic Philosophers of mathematics Philosophers of medicine Philosophers of psychology Philosophers of religion Philosophers of science Philosophers from the Abbasid Caliphate Mathematicians from the Abbasid Caliphate Physicians from the Abbasid Caliphate Medieval cryptographers
Al-Kindi
Mathematics
6,369
132,471
https://en.wikipedia.org/wiki/Auguste%20Comte
Isidore Auguste Marie François Xavier Comte (19 January 1798 – 5 September 1857) was a French philosopher, mathematician and writer who formulated the doctrine of positivism. He is often regarded as the first philosopher of science in the modern sense of the term. Comte's ideas were also fundamental to the development of sociology; indeed, he invented the very term and treated the discipline as the crowning achievement of the sciences. Influenced by Henri de Saint-Simon, Comte attempted in his work to remedy the social disorder caused by the French Revolution, which he believed indicated an imminent transition to a new form of society. He sought to establish a new social doctrine based on science, which he labelled positivism. He had a major impact on 19th-century thought, influencing the work of social thinkers such as John Stuart Mill and George Eliot. His concept of Sociologie and social evolutionism set the tone for early social theorists and anthropologists such as Harriet Martineau and Herbert Spencer, evolving into modern academic sociology presented by Émile Durkheim as practical and objective social research. Comte's social theories culminated in his "Religion of Humanity", which presaged the development of non-theistic religious humanist and secular humanist organisations in the 19th century. He may also have coined the word altruisme (altruism). Life Auguste Comte was born in Montpellier, Hérault, on 19 January 1798, at the time under the rule of the newly founded French First Republic. After attending the Lycée Joffre and then the University of Montpellier, Comte was admitted to the École Polytechnique in Paris. The École Polytechnique was notable for its adherence to the French ideals of republicanism and progress. The École closed in 1816 for reorganization, and Comte continued his studies at the medical school at Montpellier. When the École Polytechnique reopened, he did not request readmission. Following his return to Montpellier, Comte soon came to see unbridgeable differences with his Catholic and monarchist family and set off again for Paris, earning money from small jobs. Comte had abandoned Catholicism under the influence of his first teacher, the Protestant pastor Daniel Encontre. In August 1817 he found an apartment at 36 Rue Bonaparte in Paris's 6th arrondissement (where he lived until 1822) and later that year he became a student and secretary to Henri de Saint-Simon, who brought Comte into contact with intellectual society and greatly influenced his thought thereafter. During that time, Comte published his first essays in the various publications headed by Saint-Simon, L'Industrie, Le Politique, and L'Organisateur (Charles Dunoyer and Charles Comte's Le Censeur Européen), although he would not publish under his own name until 1819's "La séparation générale entre les opinions et les désirs" ("The general separation of opinions and desires"). In 1824, Comte left Saint-Simon, again because of unbridgeable differences. Comte published a Plan de travaux scientifiques nécessaires pour réorganiser la société (1822) (Plan of scientific studies necessary for the reorganization of society), but he failed to get an academic post. His day-to-day life depended on sponsors and financial help from friends. Debates rage as to how much Comte appropriated the work of Saint-Simon. Comte married Caroline Massin in 1825.
In 1826, he was taken to a mental health hospital, but left without being cured – only stabilized by French alienist Jean-Étienne Dominique Esquirol – so that he could work again on his plan (he would later attempt suicide in 1827 by jumping off the Pont des Arts). In the time between this and their divorce in 1842, he published the six volumes of his Cours. Comte developed a close friendship with John Stuart Mill. From 1844, he fell deeply in love with the Catholic Clotilde de Vaux, although because she was not divorced from her first husband, their love was never consummated. After her death in 1846 this love became quasi-religious, and Comte, working closely with Mill (who was refining his own such system) developed a new "Religion of Humanity". John Kells Ingram, an adherent of Comte, visited him in Paris in 1855. He published four volumes of Système de politique positive (1851–1854). His final work, the first volume of La Synthèse Subjective ("The Subjective Synthesis"), was published in 1856. Comte died in Paris on 5 September 1857 from stomach cancer and was buried in the famous Père Lachaise Cemetery, surrounded by cenotaphs in memory of his mother, Rosalie Boyer, and of Clotilde de Vaux. His apartment from 1841 to 1857 is now conserved as the Maison d'Auguste Comte and is located at 10 rue Monsieur-le-Prince, in Paris' 6th arrondissement. Work Comte's positivism Comte first described the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. These texts were followed by the 1848 work, A General View of Positivism (published in English in 1865). The first 3 volumes of the Course dealt chiefly with the physical sciences already in existence (mathematics, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science. Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the first philosopher of science in the modern sense of the term. Comte was also the first to distinguish natural philosophy from science explicitly. For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. His work View of Positivism would therefore set out to define, in more detail, the empirical goals of the sociological method. Comte offered an account of social evolution, proposing that society undergoes three phases in its quest for the truth according to a general law of three stages. Comte's stages were (1) the theological stage, (2) the metaphysical stage, and (3) the positive stage. The Theological stage was seen from the perspective of 19th century France as preceding the Age of Enlightenment, in which man's place in society and society's restrictions upon man were referenced to God. Man blindly believed in whatever he was taught by his ancestors. He believed in supernatural power. Fetishism played a significant role during this time. By the "Metaphysical" stage, Comte referred not to the Metaphysics of Aristotle or other ancient Greek philosophers. Rather, the idea was rooted in the problems of French society subsequent to the French Revolution of 1789. 
This Metaphysical stage involved the justification of universal rights as being on a vaunted higher plane than the authority of any human ruler to countermand, although said rights were not referenced to the sacred beyond mere metaphor. This stage is known as the stage of the investigation, because people started reasoning and questioning, although no solid evidence was laid. The stage of the investigation was the beginning of a world that questioned authority and religion. In the Scientific stage, which came into being after the failure of the revolution and of Napoleon, people could find solutions to social problems and bring them into force despite the proclamations of human rights or prophecy of the will of God. Science started to answer questions in full stretch. In this regard, he was similar to Karl Marx and Jeremy Bentham. For its time, this idea of a Scientific stage was considered up-to-date, although, from a later standpoint, it is too derivative of classical physics and academic history. Comte's law of three stages was one of the first theories of social evolutionism. He once wrote: 'It is evident, the Solar System is badly designed' The other universal law he called the "encyclopedic law". By combining these laws, Comte developed a systematic and hierarchical classification of all sciences, including inorganic physics (astronomy, earth science and chemistry) and organic physics (biology and, for the first time, physique sociale, later renamed Sociologie). Independently from Emmanuel Joseph Sieyès's introduction of the term in 1780, Comte re-invented "sociologie", and introduced the term as a neologism, in 1838. Comte had earlier used the term "social physics", but that term had been appropriated by others, notably by Adolphe Quetelet. This idea of a special science (not the humanities, not metaphysics) for the social was prominent in the 19th century and not unique to Comte. It has recently been discovered that the term "sociology" (as a term considered coined by Comte) had already been introduced in 1780, albeit with a different meaning, by the French essayist Emmanuel Joseph Sieyès (1748–1836). The ambitious (or many would say 'grandiose') ways that Comte conceived of this special science of the social, however, was unique. Comte saw this new science, sociology, as the last and greatest of all sciences, one which would include all other sciences and integrate and relate their findings into a cohesive whole. It has to be pointed out, however, that he noted a seventh science, one even greater than sociology. Namely, Comte considered "Anthropology, or true science of Man [to be] the last gradation in the Grand Hierarchy of Abstract Science." Comte's explanation of the Positive philosophy introduced the important relationship between theory, practice, and human understanding of the world. On page 27 of the 1855 printing of Harriet Martineau's translation of The Positive Philosophy of Auguste Comte, we see his observation that, "If it is true that every theory must be based upon observed facts, it is equally true that facts can not be observed without the guidance of some theories. Without such guidance, our facts would be desultory and fruitless; we could not retain them: for the most part, we could not even perceive them." Comte's emphasis on the interconnectedness of social elements was a forerunner of modern functionalism. 
Nevertheless, as with many others of Comte's time, certain elements of his work are now viewed as eccentric and unscientific, and his grand vision of sociology as the centerpiece of all the sciences has not come to fruition. His emphasis on a quantitative, mathematical basis for decision-making remains with us today. It is a foundation of the modern notion of Positivism, modern quantitative statistical analysis, and business decision-making. His description of the continuing cyclical relationship between theory and practice is seen in modern business systems of Total Quality Management (TQM) and Continuous Quality Improvement where advocates describe a continuous cycle of theory and practice through the four-part cycle of Plan-Do-Check-Act (PDCA, the Shewhart cycle). Despite his advocacy of quantitative analysis, Comte saw a limit in its ability to help explain social phenomena. The early sociology of Herbert Spencer came about broadly as a reaction to Comte; writing after various developments in evolutionary biology, Spencer attempted to reformulate the discipline in what we might now describe as socially Darwinistic terms. Comte's fame today owes in part to Émile Littré, who founded The Positivist Review in 1867. Auguste Comte did not create the idea of Sociology, the study of society, patterns of social relationships, social interaction, and culture, but instead, he expanded it greatly. Positivism, the principle of conducting sociology through empiricism and the scientific method, was the primary way that Comte studied sociology. He split sociology into two different areas of study. One, social statics, how society holds itself together, and two, social dynamics, the study of the causes of societal changes. He saw these areas as parts of the same system. Comte compared society and sociology to the human body and anatomy. "Comte ascribed the functions of connection and boundaries to the social structures of language, religion, and division of labor." Through language, everybody in society, both past, and present, can communicate with each other. Religion unites society under a common belief system and functions in harmony under a system. Finally, the division of labor allows everyone in society to depend upon each other. The Utopian Project Comte is often disregarded when talking about utopia. However, he made many contributions to utopian literature and influenced the modern-day debate. Some intellectuals allude to the fact that the utopian system of modern life "served as a catalyst for various world-making activities during the nineteenth and early twentieth centuries" (Willson, M. 2019) . In this utopian project, Comte introduces three major concepts: altruism, sociocracy, and the religion of Humanity. In the 19th century, Comte coined altruism as "a theory of conduct that regards the good of others as the end of moral action." (Britannica, T, 2013). Furthermore, Comte explains sociocracy as the governance by people who know each other, friends, or allies. After the French revolution, Comte was looking for a rational basis for government, after developing the Positivism philosophy he developed sociocracy to the "scientific method" of the government. The religion of humanity In later years, Comte developed the Religion of Humanity for positivist societies to fulfil the cohesive function once held by traditional worship. In 1849, he proposed a calendar reform called the 'positivist calendar'. 
For close associate John Stuart Mill, it was possible to distinguish between a "good Comte" (the author of the Course in Positive Philosophy) and a "bad Comte" (the author of the secular-religious system). The system was unsuccessful but met with the publication of Darwin's On the Origin of Species (1859) to influence the proliferation of various Secular Humanist organizations in the 19th century, especially through the work of secularists such as George Holyoake and Richard Congreve. Although Comte's English followers, including George Eliot and Harriet Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of humanity and his injunction to "vivre pour autrui" ("live for others"), from which comes the word "altruism". Law of three stages Comte was agitated by the fact that no one had synthesized physics, chemistry, and biology into a coherent system of ideas, so he began an attempt to reasonably deduce facts about the social world from the use of the sciences. Through his studies, he concluded that the growth of the human mind progresses in stages, and so must societies. He claimed the history of society could be divided into three different stages: theological, metaphysical, and positive. The Law of three Stages, an evolutionary theory, describes how the history of societies is split into three sections due to new thoughts on philosophy. Comte believed that evolution was the growth of the human mind, splitting into stages and evolving through these stages. Comte concluded that society acts similarly to the mind. The Law of Three Stages is the evolution of society in which the stages have already occurred or are currently developing. The reason why there are newly developed stages after a certain time period is that the system "has lost its power" and is preventing the progression of civilization, causing complicated situations in society. (Lenzer 1975, pg 10) The only way to escape the situation is for people within the civilized nations to turn towards an "organic" new social system. Comte refers to kings to show the complications of re-establishment in society. Kings feel the need to reorganize their kingdom, but many fail to succeed because they do not consider that the progress of civilization needs reform, not perceiving that there is nothing more perfect than inserting a new, more harmonious system. Kings fail to see the effectiveness of abandoning old systems because they do not understand the nature of the present crisis. But in order to progress, there need to be the necessary consequences that come with it, which is caused by a "series of modifications, independent of the human will, to which all classes of society contributed, and of which kings themselves have often been the first agents and most eager promoters". The people themselves have the ability to produce a new system. This pattern is shown through the theological stage, metaphysical stage, and positive stage. The Law of Three Stages is split into stages, much like how the human mind changes from stage to stage. The three stages are the theological stage, the metaphysical stage, and the positive stage, also known as the Law of Three Stages. The theological stage happened before the 1300s, in which all societies lived a life that was completely theocentric. The metaphysical stage was when the society seeks universal rights and freedom. 
With the third and final stage, the positive stage, Comte takes a stand on the question, "how should the relations among philosophy of science, history of science, and sociology of science be seen." He says that sociology and history are not mutually exclusive, but that history is the method of sociology, thus he calls sociology the "final science". This positive stage was to solve social problems and forcing these social problems to be fixed without care for "the will of God" or "human rights". Comte finds that these stages can be seen across different societies across all of history. Theological stage The first stage, the theological stage, relies on supernatural or religious explanations of the phenomena of human behavior because "the human mind, in its search for the primary and final causes of phenomena, explains the apparent anomalies in the universe as interventions of supernatural agents". The Theological Stage is the "necessary starting point of human intelligence" when humans turn to supernatural agents as the cause of all phenomena. In this stage, humans focus on discovering absolute knowledge. Comte disapproved of this stage because it turned to simple explanation humans created in their minds that all phenomena were caused by supernatural agents, rather than human reason and experience. Comte refers to Bacon's philosophy that "there can be no real knowledge except that which rests upon observed facts", but he observes that the primitive mind could not have thought that way because it would have only created a vicious circle between observations and theories. "For if, on the one hand, every positive theory must necessarily be founded upon observations, it is, on the other hand, no less true that, in order to observe, our mind has need of some theory or other". Because the human mind could not have thought in that way in the origin of human knowledge, Comte claims that humans would have been "incapable of remembering facts", and would not have escaped the circle if it were not for theological conceptions, which were less complicated explanations to human life. Although Comte disliked this stage, he explains that theology was necessary at the beginning of the developing primitive mind. The first theological state is the necessary starting point of human intelligence. The human mind primarily focuses its attention on the "inner nature of beings and to the first and final causes of all phenomena it observes." (Ferre 2) This means that the mind is looking for the cause and effect of an action that will govern the social world. Therefore, it "represents these phenomena as being produced by a direct and continuous action of more or less numerous supernatural agents, whose arbitrary interventions explain all the apparent anomalies of the universe." (Ferre 2) This primary subset of the theological state is known as fetishism, where the phenomena must be caused and created by a theological supernatural being such as God, making humans view every event in the universe as a direct will from these supernatural agents. Some people believed in souls or spirits that possessed inanimate objects and practiced Animism. These natural spiritual beings who possessed souls and may exist apart from the material bodies were capable of interacting with humans, therefore requiring sacrifices and worship to please the agents. With all these new reasons behind phenomena, numerous fetishisms occur, needing several gods to continue to explain events. 
People begin to believe that every object or event has a unique god attached to it. This belief is called polytheism. The mind "substituted the providential action of a single being for the varied play of numerous independent gods which have been imagined by the primitive mind." These Gods often took on both human and animal resemblance. In Egypt, there were multiple gods with animal body parts such as Ra, who had the head of a hawk and had sun associations with the Egyptians. The polytheistic Greeks had several gods such as Poseidon who controlled the sea and Demeter who was the goddess of fertility. However, with all these new gods governing the phenomena of society, the brain can get confused with the numerous gods it needs to remember. The human mind eliminates this problem by believing in a sub-stage called monotheism. Rather than having multiple gods, there is simply one all-knowing and omnipotent God who is the center of power controlling the world. This creates harmony with the universe because everything is under one ruler. This leaves no confusion of how to act or who is the superior ruler out of the several gods seen in polytheism. The theological state functions well as the first state of the mind when making a belief about an event because it creates a temporary placeholder for the cause of the action which can later be replaced. By allowing the brain to think of the reason behind phenomena, the polytheistic gods are fillers that can be replaced by monotheistic gods. The theological stage shows how the primitive mind views supernatural phenomena and how it defines and sorts the causes. "The earliest progress of the human mind could only have been produced by the theological method, the only method which can develop spontaneously. It alone has the important property of offering us a provisional theory,… which immediately groups the first facts, with its help, by cultivating our capacity for observation, we were able to prepare the age of a wholly positive philosophy." (Comte 149) Comte believed the theological stage was necessary because of the foundational belief that man's earliest philosophy of explanation is the act of connecting phenomena around him to his own actions; that man may "apply the study of external nature to his own". This first stage is necessary to remove mankind from the "vicious circle in which it was confined by the two necessities of observing first, in order to form conceptions, and of forming theories first, in order to observe". Additionally, the theological stage is able to organize society by directing "the first social organization, as it first forms a system of common opinions, and by forming such a system". Though, according to Comte, it could not last, this stage was able to establish an intellectual unity that made an impressive political system. The theological state was also necessary for human progress on account that it creates a class in a society dedicated to "speculative activity". It is in this way that Comte sees the theological stage continue to exist into the Enlightenment. Comte momentarily admires the theological stage for its remarkable ability to enact this activity amidst a time when it was argued to be impractical. It is to this stage that the human mind owes "the first effectual separation between theory and practice, which could take place in no other manner" other than through the institution provided by the theological stage. 
The Theological Stage is the stage that was seen primarily among the civilizations in the distant past. Having been used before the 1300s, this is a very basic view of the world with little to no involvement in the world of science, and a world of illusions and delusions, as Freud would put it. To seek the nature of all beings, mankind puts its focus on sentiments, feelings, and emotions. This turned mankind towards theology and the creation of gods to answer all their questions. The Theological Stage is broken into three sections: Fetishism is the philosophy in which mankind puts the power of a god into an inanimate object. Every object could hold this power of a god, so it started to confuse those who believed in Fetishism and created multiple gods. Polytheism is, in basic terms, the belief in an order of multiple gods who rule over the universe. Within polytheism, each god is assigned a specific thing of which they are the god. Examples of this would be the Greek god Zeus, the god of the sky and lightning, or Ra, the sun god, in Egyptian mythology. A group of priests was often assigned to these gods to offer sacrifices and receive blessings from those gods, but once again, because of the innumerable gods, it got confusing, so civilization turned to Monotheism. Monotheism is the belief in one, all-powerful God who rules over every aspect of the universe. The removal of the emotional and imaginational aspects of both Fetishism and Polytheism resulted in intellectual awakening. This removal allowed for the Enlightenment to occur as well as the expansion of the scientific world. With the Enlightenment came many famous philosophers who brought about a great change in the world. This is the reason why "Monotheism is the climax of the theological stage of thinking." Metaphysical or abstract stage The second stage, the metaphysical stage, is merely a modification of the first because a supernatural cause is replaced by an "abstract entity"; it is meant to be a transitional stage, where there is the belief that abstract forces control the behavior of human beings. Because it is a transitional stage between the theological stage and the positive stage, Comte deemed it the least important of the three stages, one that was only necessary because the human mind cannot make the jump from the theological to the positive stage on its own. The metaphysical stage is the transitional stage. Because "Theology and physics are so profoundly incompatible", and their "conceptions are so radically opposed in character", human intelligence must have a gradual transition. Other than this, Comte says that there is no other use for this stage. Although it is the least important stage, it is necessary because humans could not handle the significant change in thought from the theological to the positive. The metaphysical stage is just a slight modification of the previous stage when people believed in the abstract forces rather than the supernatural. The mind begins to notice the facts themselves, caused by the emptiness of the metaphysical agents through "over subtle qualification that all right-minded persons considered them to be only the abstract names of the phenomena in question". The mind becomes familiar with concepts, wanting to seek more, and therefore is prepared to move into the positive stage. In understanding Comte's argument, it is important to note that Comte explains the theological and positive stages first and only then returns to explain the metaphysical stage.
His rationale in this decision is that "any intermediate state can be judged only after a precise analysis of two extremes". Only upon arrival to the rational positive state can the metaphysical state be analyzed, serving only a purpose of aiding in the transition from the theological to a positive state. Furthermore, this state "reconciles, for a time, the radical opposition of the other two, adapting itself to the gradual decline of the one and the preparatory rise of the other". Therefore, the transition between the two states is almost unperceivable. Unlike its predecessor and successor, the metaphysical state does not have a strong intellectual foundation nor social power for a political organization. Rather it simply serves to guide man until the transition from imaginative theological state to rational positive state is complete. Positive stage The last stage – the positive stage – is when the mind stops searching for the cause of phenomena and realizes that laws exist to govern human behavior and that this stage can be explained rationally with the use of reason and observation, both of which are used to study the social world. This stage relies on science, rational thought, and empirical laws. Comte believed that this study of sociology he created was "the science that [came] after all the others; and as the final science, it must assume the task of coordinating the development of the whole of knowledge" because it organized all of human behavior. The final, most evolved stage is the positivist stage, the stage when humans give up on discovering absolute truth, and turn towards discovering, through reasoning and observation, actual laws of phenomena. Humans realize that laws exist and that the world can be rationally explained through science, rational thought, laws, and observation. Comte was a positivist, believing in the natural rather than the supernatural, and so he claimed that his time period, the 1800s, was in the positivist stage. He believed that within this stage, there is a hierarchy of sciences: mathematics, astronomy, terrestrial physics, chemistry, and physiology. Mathematics, the "science that relates to the measurement of magnitudes", is the most perfect science of all, and is applied to the most important laws of the universe. Astronomy is the most simple science and is the first "to be subjected to positive theories". Physics is less satisfactory than astronomy, because it is more complex, having less pure and systemized theories. Physics, as well as chemistry, are the "general laws of the inorganic world", and are harder to distinguish. Physiology completes the system of natural sciences and is the most important of all sciences because it is the "only solid basis of the social reorganization that must terminate the crisis in which the most civilized nations have found themselves". This stage will fix the problems in current nations, allowing progression and peace. It is through observation that humanity is able to gather knowledge. The only way within society to gather evidence and build upon what we do not already know to strengthen society is to observe and experience our situational surroundings. "In the positive state, the mind stops looking for causes of phenomena, and limits itself strictly to laws governing them; likewise, absolute notions are replaced by relative ones," The imperfection of humanity is not a result of the way we think, rather our perspective that guides the way we think. 
Comte expresses the idea that we have to open our eyes to different ideas and ways of evaluating our surroundings, looking beyond simple facts and abstract ideas and instead engaging with the supernatural. This does not mean that what is around us is not worth attending to, as our observations are critical assets to our thinking. The things that are "lost", or knowledge that lies in the past, are still relevant to recent knowledge. It is what came before our time that explains why things are the way they are today. If we did not observe, we would always be relying on our own facts and would never hypothesize to reveal the supernatural. Observation serves to further our thinking processes. According to Comte, "'The dead govern the living,' which is likely a reference to the cumulative nature of positivism and the fact that our current world is shaped by the actions and discoveries of those who came before us." As this is true, the observations relevant only to humanity, and not abstractly related to humanity, are distinct and seen situationally. The situation invites human observation, since the tension in society that it reflects can be reviewed, which helps to enhance the development of knowledge. As our observational skills grow, our thinking shifts. As thinkers and observers, we switch from trying to identify truth and turn toward the rationality and reason that nature brings, giving us the ability to observe. This distinct switch marks the transition from the abstract to the supernatural. "Comte's classification of the sciences was based upon the hypothesis that the sciences had developed from the understanding of simple and abstract principles to the understanding of complex and concrete phenomena." Instead of taking what we believe to be true at face value, we turn it around and use the phenomena of science and the observation of natural law to justify what we believe to be true within society. The condensing and formulation of human knowledge is what Comte drives us toward in order to build the strongest society possible. If scientists do not take the chance to research why a certain animal species is going extinct, and why facts established by researchers in the past are no longer true of the present, how is the data supposed to grow? How are we to gain more knowledge? These facts of life are valuable, but it is beyond these facts that Comte gestures us to look. Instead of a culmination of facts with little sufficiency, knowledge altogether takes on its role in the realm of science. In connection with science, Comte turns to two specific fields to rebuild the construction of human knowledge. As science is broad, Comte presents this scientific classification for the sake of thinking and the future organization of society. "Comte divided sociology into two main fields, or branches: social statistics, or the study of the forces that hold society together; and social dynamics, or the study of the causes of social change." In doing this, society is reconstructed. By reconstructing human thinking and observation, societal operation alters. The attention drawn to science, hypotheses, natural law, and supernatural ideas allows sociology to be divided into these two categories. By combining the simple facts, from the abstract to the supernatural, and switching our thinking towards hypothetical observation, the sciences culminate in order to formulate sociology and this new societal division.
"Every social system… aims definitively at directing all special forces towards a general result, for the exercise of a general and combined activity is the essence of the society," Social phenomena Comte believed can be transferred into laws and that systemization could become the prime guide to sociology so that all can maintain knowledge to continue building a strong intellectual society. To continue building a strong intellectual society, Comte believed the building or reformation requires intricate steps to achieve success. First, the new society must be created after the old society is destroyed because "without…destruction no adequate conception could be formed of what must be done,". Essentially a new society cannot be formed if it is constantly hindered by the ghost of its past. On the same terms, there will be no room for progress if the new society continues to compare itself to the old society. If humanity does not destroy the old society, the old society will destroy humanity. Or on the other hand, if one destroys the old society, "without ever replacing it, the people march onwards towards total anarchy,".  If the society is continuously chipped away without being replaced with new ideal societal structures, then society will fall deeper back into its old faults. The burdens will grow deep and entangle the platforms for the new society, thus prohibiting progress, and ultimately fulfilling the cursed seesaw of remodeling and destroying society. Hence, according to Comte, to design a successful new society, one must keep the balance of reconstruction and deconstruction. This balance allows for progress to continue without fault. Predictions Auguste Comte is well known for writing in his book The Positive Philosophy that people would never learn the chemical composition of the stars. This has been called a very poor prediction regarding human limits in science. In thirty years people were beginning to learn the composition of stars through spectroscopy. Auguste Comte and reflexivity Beyond Comte's substantive theoretical corpus, a less well-known yet interesting aspect of his work is his reflections upon the relation between self and knowledge production. Comte was troubled by the problem of how an individual that is the product of actually existing society could produce science aimed at transforming said society, and speaks in Positive Polity of a process of self-transformation aimed at improving himself as a knowledge producer. As the methodologist Audrey Alejandro has elaborated, these considerations by Comte foreshadow key concerns in contemporary social science regarding the importance of reflexivity, meaning by this the necessity to be critically aware and to assess the ways personal dispositions and unconscious discourses shape the production of knowledge. Moving forward, Alejandro has seen in Comte a foundation to develop a reflexive discourse analysis (RDA) framework, so as to provide social scientists with applicable tools from discourse analysis for the task of implementing reflexivity in practice. Bibliography A general view of positivism [Discours sur l'ensemble du positivisme 1848] London, 1856 Internet Archive Bridges, J.H. (tr.); A General View of Positivism; Trubner and Co., 1865 (reissued by Cambridge University Press, 2009; ) Congreve, R. (tr.); The Catechism of Positive Religion; Kegan Paul, Trench, Trübner and Co., 1891 (reissued by Cambridge University Press, 2009; ) with Gertrud Lenzer. Auguste Comte and Positivism the Essential Writings. 
Transaction Publishers, 1998. Martineau, H. (tr.); The Positive Philosophy of Auguste Comte; 2 volumes; Chapman, 1853 (reissued by Cambridge University Press, 2009; ) (but note that Cambridge University Press said "Martineau's abridged and more easily digestible version of Comte's work was intended to be readily accessible to a wide general readership, particularly those she felt to be morally and intellectually adrift", so this is not really Comte's own writings) Jones, H.S. (ed.); Comte: Early Political Writings; Cambridge University Press, 1998; System of Positive Polity; various publishers Cours de Philosophie Positive, Tome II; Bachelier, Paris, 1835, The Project Gutenberg eBook of Cours de philosophie positive (2/6), par Auguste Comte; scans of the six volumes are at Projet Gallica with Ferré Frederick. Introduction to Positive Philosophy. Hackett Pub. Co., 1988. with H. S. Jones. Early Political Writings. Cambridge University Press, 2003. Notes Sources Mary Pickering, Auguste Comte, Volume 1: An Intellectual Biography, Cambridge University Press (1993), Paperback, 2006. Mary Pickering, Auguste Comte, Volume 2: An Intellectual Biography, Cambridge University Press, 2009a. Mary Pickering, Auguste Comte, Volume 3: An Intellectual Biography, Cambridge University Press, 2009b. Further reading Henri Gouhier, La vie d'Auguste Comte, Gallimard, 1931 lah Jean Delvolvé, Réflexions sur la pensée comtienne, Félix Alcan, 1932 John Stuart Mill, Auguste Comte and Positivism, Trübner, 1865 Laurent Fedi, Comte, Les Belles Lettres, 2000, réédition 2005 Laurent Fedi, L'organicisme de Comte, in Auguste Comte aujourd'hui, M. Bourdeau, J.-F. Braunstein, A. Petit (dir), Kimé, 2003, pp. 111–132 Laurent Fedi, Auguste Comte, la disjonction de l'idéologie et de l'État, Cahiers philosophiques, n°94, 2003, pp. 99–110 Laurent Fedi, Le monde clos contre l'univers infini : Auguste Comte et les enjeux humains de l'astronomie, La Mazarine, n°13, juin 2000, pp. 12–15 Laurent Fedi, La contestation du miracle grec chez Auguste Comte, in L'Antiquité grecque au XIXè siècle : un exemplum contesté ?, C. Avlami (dir.), L'Harmattan, 2000, pp. 157–192 Laurent Fedi, Auguste Comte et la technique, Revue d'histoire des sciences 53/2, 1999, pp. 265–293 Mike Gane, Auguste Comte, London, Routledge, 2006. Henri Gouhier, La jeunesse d'Auguste Comte et la formation du positivisme, tome 1 : sous le signe de la liberté, Vrin, 1932 Henri Gouhier, La jeunesse d'Auguste Comte et la formation du positivisme, tome 2 : Saint-Simon jusqu'à la restauration, Vrin Henri Gouhier, La jeunesse d'Auguste Comte et la formation du positivisme, tome 3 : Auguste Comte et Saint-Simon, Vrin, 1941 Henri Gouhier, Oeuvres choisies avec introduction et notes, Aubier, 1941 Georges Canguilhem, « Histoire des religions et histoire des sciences dans la théorie du fétichisme chez Auguste Comte », Études d'histoire et de philosophie des sciences, Vrin, 1968 H.S. Jones, ed., Comte: Early Political Writings, Cambridge University Press, 1998 Angèle Kremer-Marietti, Auguste Comte et la théorie sociale du positivisme, Seghers, 1972 Angèle Kremer-Marietti, Auguste Comte, la science sociale, Gallimard, 1972 Angèle Kremer-Marietti, Le projet anthropologique d'Auguste Comte, SEDES, 1980, réédition L'Harmattan, 1999 Angèle Kremer-Marietti, L'anthropologie positiviste d'Auguste Comte, Lib. Honoré Champion, 1980 Angèle Kremer-Marietti, Entre le signe et l'histoire. 
L'anthropologie positiviste d'Auguste Comte, Klincksieck, 1982, réédition L'Harmattan, 1999 Angèle Kremer-Marietti, Le positivisme, Coll."Que sais-je?", PUF, 1982 Angèle Kremer-Marietti, Le concept de science positive. Ses tenants et ses aboutissants dans les structures anthropologiques du positivisme, Méridiens Klincksieck, 1983 Angèle Kremer-Marietti, Le positivisme d'Auguste Comte, L'Harmattan, 2006 Angèle Kremer-Marietti, Auguste Comte et la science politique, in Auguste Comte, Plan des travaux scientifiques nécessaires pour réorganiserla société, L'Harmattan, 2001 Angèle Kremer-Marietti, Auguste Comte et l'histoire générale, in Auguste Comte, Sommaire appréciation de l'ensemble du passé moderne, L'Harmattan, 2006 Angèle Kremer-Marietti, Auguste Comte et la science politique, L'Harmattan, 2007 Angèle Kremer-Marietti, Le kaléidoscope épistémologique d'Auguste Comte. Sentiments Images Signes, L'Harmattan, 2007 Realino Marra, La proprietà in Auguste Comte. Dall'ordine fisico alla circolazione morale della ricchezza, in «Sociologia del diritto», XII-2, 1985, pp. 21–53 Pierre Macherey, Comte. La philosophie et les sciences, PUF, 1989 Thomas Meaney, The Religion of Science and Its High PriestThe Religion of Science and Its High Priest, The New York Review of Books, 2012 Jacques Muglioni, Auguste Comte: un philosophe pour notre temps, Kimé, Paris, 1995 Annie Petit, Le Système d'Auguste Comte. De la science à la religion par la philosophie, 2016, Vrin, Paris Gertrud Lenzer, Auguste Comte: Essential Writings (1975), New York Harper, Paperback, 1997 Raquel Capurro, Le positivisme est un culte des morts: Auguste Comte, Epel, 1999 (traduit en français en 2001) : l'étude la plus récente sur la vie d'Auguste Comte, la vision sans complaisance d'une psychanalyste de l'école de Lacan Auguste Comte, Positive Philosophy of Auguste Comte (1855), translated by Harriet Martineau, Kessinger Publishing, Paperback, 2003; also available from the McMaster Archive for the History of Economic Thought : Volume One , Volume Two , Volume Three Pierre Laffitte (1823–1903): Autour d'un centenaire, in Revue des Sciences et des Techniques en perspective, 2ème série, vol. 8, n°2, 2004, Brepols Publishers, 2005 Zeïneb Ben Saïd Cherni, Auguste Comte, postérité épistémologique et ralliement des nations, L'Harmattan, 2005 Wolf Lepenies, Auguste Comte: die Macht der Zeichen, Carl Hanser, Munich, 2010 Oséias Faustino Valentim, O Brasil e o Positivismo, Publit, Rio de Janeiro, 2010. . Jean-François Eugène Robinet, Notice sur l'oeuvre et sur la vie d'Auguste Comte, par le Dr Robinet, son médecin et l'un de ses treize exécuteurs testamentaires, Paris : au siège de la Société positiviste, 1891. 3e éd. Jean-François Eugène Robinet, La philosophie positive: Auguste Comte et M. Pierre Laffitte, Paris : G. Baillière, [ca 1881]. Auguste Comte Sociology Theory Explained Andrew Wernick, Auguste Comte and the Religion of Humanity, Cambridge University Press, 2001. External links Auguste Comte: Stanford Encyclopaedia of Philosophy Review materials for studying Auguste Comte J.H. Bridges, The Seven New Thoughts of the Positive Polity 1915 Henri Gouhier, "Final Chapter – Life in the anticipation of the Grave", from The Life of Auguste Comte (1931). In Comte's last years, practicing his own religion. 
Auguste Comte quotes Positivist Church of Brazil The Three Cs and the Notion of Progress: Copernicus, Condorcet, Comte by Caspar J M Hewett The positive philosophy, Auguste Comte / freely translated and selected by Harriet Martineau, Cornell University Library Historical Monographs Collection – downloadable version Some selections from first lecture of Course of Positive Philosophy Auguste Comte – High Priest of Positivism by Caspar Hewett Maison d'Auguste Comte 1798 births 1857 deaths 19th-century French economists 19th-century French essayists 19th-century French mathematicians 19th-century French non-fiction writers 19th-century French philosophers Burials at Père Lachaise Cemetery Consequentialists French critics of religions École Polytechnique alumni Founders of religions French agnostics French ethicists French male non-fiction writers French sociologists Materialists Writers from Montpellier French philosophers of culture Philosophers of economics French philosophers of education French philosophers of history Philosophers of religion French philosophers of science Positivists Saint-Simonists Secular humanists Structural functionalism Theoretical historians Utilitarians University of Montpellier alumni
Auguste Comte
Physics
9,545
1,375,104
https://en.wikipedia.org/wiki/Workspace
Workspace is a term used in various branches of engineering and economic development. Business development Workspace refers to small premises provided, often by local authorities or economic development agencies, to help new businesses to establish themselves. These typically provide not only physical space and utilities but also administrative services and links to support and finance organizations, as well as peer support among the tenants. A continuum of sophistication ranges through categories such as 'managed workspaces', 'business incubators' and 'business and employment co-operatives'. In cities, they are often set up in buildings that are disused but which the local authority wishes to retain as a landmark. At the larger end of the spectrum are business parks, virtual offices, technology parks and science parks. Technology and software In technology and software, "workspace" is a term used for several different purposes. Software development A workspace is (often) a file or directory that allows a user to gather various source code files and resources and work with them as a cohesive unit. Often these files and resources represent the complete state of an integrated development environment (IDE) at a given time, a snapshot. Workspaces are very helpful in cases of complex projects when maintenance can be challenging. Good examples of environments that allow users to create and use workspaces are Microsoft Visual Studio and Eclipse. In configuration management, "workspace" takes on a different but related meaning; it is a part of the file system where the files of interest (for a given task like debugging, development, etc.) are located. It stores the user's view of the files stored in the configuration management's repository. In either case, workspace acts as an environment where a programmer can work, isolated from the outside world, for the task duration. Graphical interfaces Additionally, workspaces refer to the grouping of windows in some window managers. Grouping applications in this way is meant to reduce clutter and make the desktop easier to navigate. Multiple workspaces are prevalent on Unix-like operating systems and certain operating system shells. Mac OS X 10.5 and later macOS releases include an equivalent feature called "Spaces". Windows 10 now offers a similar feature called 'Task View'. Windows XP PowerToy is available to bring this functionality to Windows XP. Most systems with support for workspaces provide keyboard shortcuts to switch between them. Many also include some form of workspace switcher to change between them and sometimes to move windows between them as well. Workspaces are visualized in different ways. For example, on Linux computers using Compiz or Beryl with the Cube and Rotate Cube plugins enabled, each workspace is rendered as a face of an on-screen cube, and switching between workspaces is visualized by zooming out from the current face, rotating the cube to the new face, and zooming back in. On macOS, the old set of windows slides off the screen and the new set slides on. Window managers without "eye candy" often simply remove the old windows and display the new ones without any sort of intermediate effect. Computer-supported cooperative work In the context of computer-supported cooperative work (CSCW) a shared workspace is a place of collaboration that enables group awareness. "A shared workspace provides a sense of place where collaboration takes place. 
It is generally associated with some part of the screen real estate of the user's computer where the user "goes" to work on shared artifacts, discovers work status, and interacts with his/her collaborators." Online applications In the context of software as a service, "workspace" is a term used by software vendors for applications that allow users to exchange and organize files over the Internet. Such applications have several advantages over traditional FTP clients or virtual folder offerings, including: the ability to capture task performance data and version data; organization of information in a more user-friendly interface than a traditional file-based structure; secure storage and upload/download of data (many FTP clients are unsecured, susceptible to eavesdropping, or open to other abuse); compatibility with virtually all web browsers and computer operating systems; and server-side updates, meaning that a user never has to update the software. Beyond organizing and sharing files, these applications can often also be used as a business communication tool for assigning tasks, scheduling meetings, and maintaining contact information. Robotics In robotics, the workspace of a robot manipulator is often defined as the set of points that can be reached by its end-effector; in other words, it is the space in which the robot works, and it can be either a 3D volume or a 2D surface. Mobile or unified workspace A mobile or unified workspace allows enterprise IT to have a trusted space on any device through which IT can deliver business applications and data. Ever since Apple released the iPad in 2010, bring your own device (BYOD) has become an increasingly important problem for IT. Until now, IT has purchased, provisioned, and managed all enterprise desktops, which run Microsoft Windows. There are nearly 500 million enterprise desktops in the world. However, with the introduction of smartphones and tablets, there are far more devices that are owned by the end-user: 750 million PCs and Macs, 1.5 billion smartphones, and 500 million tablets. These also run different operating systems, such as iOS, Android, Windows, and macOS. How does IT deliver business applications and data to end-users on these heterogeneous operating systems and form factors? Federica Troni and Mark Margevicius introduced the concept of the workspace aggregator to solve the problem of BYOD. According to Gartner, a workspace aggregator unifies five capabilities: (1) Application delivery: the ability to orchestrate provisioning and de-provisioning of mobile, PC and Web applications; (2) Data: the secure delivery of corporate data; (3) Management: management of application life cycle, metering, and monitoring features; (4) Security: provision of context-aware security; (5) User experience: a superior user experience through the delivery of a unified workspace. References Business terms Collaborative projects Computer programming Corporate jargon Enterprise resource planning terminology Graphical control elements Graphical user interface elements Logistics
Workspace
Technology,Engineering
1,288
9,938,600
https://en.wikipedia.org/wiki/Gubb%20%28application%29
gubb was a Web-based list application that required no downloaded software. With gubb, users could create, manage and share an unlimited number of lists. gubb also offered a fully functional mobile Web application. gubb was created in 2006 by former White House intern and Walt Disney Company executive Josh Weinstein and Peppercoin founding member Joe Bergeron. As of July 2022, the website appeared to be defunct. References External links Archived Official website Web applications Personal information managers
Gubb (application)
Technology
98
9,908
https://en.wikipedia.org/wiki/Equation%20of%20state
In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars. Overview At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. An example of an equation of state correlates densities of gases and liquids to temperatures and pressures, known as the ideal gas law, which is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid. The general form of an equation of state may be written as where is the pressure, the volume, and the temperature of the system. Yet also other variables may be used in that form. It is directly related to Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system. An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology. Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry. Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero. , number of moles of a substance , , molar volume, the volume of 1 mole of gas or liquid , ideal gas constant ≈ 8.3144621J/mol·K , pressure at the critical point , molar volume at the critical point , absolute temperature at the critical point Historical background Boyle's law was one of the earliest formulation of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676. In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. 
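For reference, the relations just described are commonly written in the following standard forms; the symbols p, V (or molar volume V_m) and T follow the notation defined earlier in the article, and the notation in the original sources may differ slightly:

\[ f(p, V, T) = 0 \qquad \text{(general form of an equation of state)} \]
\[ pV = \text{constant} \qquad \text{(Boyle's law: fixed amount of gas at constant } T\text{)} \]
\[ \frac{V}{T} = \text{constant} \qquad \text{(Charles's law: fixed amount of gas at constant } p\text{)} \]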
Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for species as:In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with , giving:In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich-Kwong. The van der Waals equation of state can be written as where is a parameter describing the attractive energy between particles and is a parameter describing the volume of the particles. Ideal gas law Classical ideal gas law The classical ideal gas law may be written In the form shown above, the equation of state is thus If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows where is the number density of the gas (number of atoms/molecules per unit volume), is the (constant) adiabatic index (ratio of specific heats), is the internal energy per unit mass (the "specific internal energy"), is the specific heat capacity at constant volume, and is the specific heat capacity at constant pressure. Quantum ideal gas law Since for atomic and molecular gases, the classical ideal gas law is well suited in most cases, let us describe the equation of state for elementary particles with mass and spin that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with particles occupying a volume with temperature and pressure is given by where is the Boltzmann constant and the chemical potential is given by the following implicit function In the limiting case where , this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit reduces to With a fixed number density , decreasing the temperature causes in Fermi gas, an increase in the value for pressure from its classical value implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects not because of actual interactions between particles since in ideal gas, interactional forces are neglected) and in Bose gas, a decrease in pressure from its classical value implying an effective attraction. The quantum nature of this equation is in it dependence on s and ħ. Cubic equations of state Cubic equations of state are called such because they can be rewritten as a cubic function of . Cubic equations of state originated from the van der Waals equation of state. Hence, all cubic equations of state can be considered 'modified van der Waals equation of state'. There is a very large number of such cubic equations of state. 
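The ideal gas law and the van der Waals equation discussed above are commonly quoted in the following forms, using the article's notation with V_m the molar volume and a, b the substance-specific van der Waals parameters:

\[ pV_m = RT \qquad\Longleftrightarrow\qquad pV = nRT \]
\[ \left( p + \frac{a}{V_m^2} \right)\left( V_m - b \right) = RT \quad\text{or equivalently}\quad p = \frac{RT}{V_m - b} - \frac{a}{V_m^2} \]

A short Python sketch below illustrates how the two models compare at a single state point. The constants a and b are approximate textbook values for carbon dioxide and are included purely as an assumed example; they are not taken from this article.

```python
# Illustrative sketch: compare the ideal-gas and van der Waals pressures
# at one state point. Units are SI throughout.

R = 8.3144621  # J/(mol K), universal gas constant (value quoted in the article)

def pressure_ideal(T, Vm):
    """Ideal gas law: p = R*T / Vm, in Pa."""
    return R * T / Vm

def pressure_van_der_waals(T, Vm, a=0.364, b=4.27e-5):
    """van der Waals equation: p = R*T/(Vm - b) - a/Vm**2, in Pa.
    Default a (J m^3 / mol^2) and b (m^3 / mol) are approximate values for CO2."""
    return R * T / (Vm - b) - a / Vm ** 2

if __name__ == "__main__":
    T = 300.0     # K
    Vm = 1.0e-3   # m^3/mol, a moderately dense gas
    print(pressure_ideal(T, Vm))          # ~2.49e6 Pa
    print(pressure_van_der_waals(T, Vm))  # ~2.24e6 Pa, below the ideal-gas value
```

At this state point the attraction term a/V_m^2 outweighs the excluded-volume correction, so the van der Waals pressure falls below the ideal-gas prediction, as expected for a weakly attracting real gas.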
For process engineering, cubic equations of state are today still highly relevant, e.g. the Peng Robinson equation of state or the Soave Redlich Kwong equation of state. Virial equations of state Virial equation of state Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients B, C, D, etc. are functions of temperature only. The BWR equation of state where is pressure is molar density Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available. The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered. The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state. Physically based equations of state There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature, density (and for mixtures additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on monomer term describing the Lennard-Jones fluid or the Mie fluid. Perturbation theory-based models Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory. Statistical associating fluid theory (SAFT) An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). 
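As a point of reference for the two families just described: the virial expansion is usually written as a power series in inverse molar volume, and physically based models of the SAFT family express the residual Helmholtz energy as a sum of separate contributions. The grouping of terms shown here is schematic; individual SAFT variants split and name the contributions differently.

\[ \frac{pV_m}{RT} = A + \frac{B}{V_m} + \frac{C}{V_m^2} + \cdots, \qquad A = 1 \]
\[ a^{\mathrm{res}} \approx a^{\mathrm{seg}} + a^{\mathrm{chain}} + a^{\mathrm{assoc}} \]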
The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al. Multiparameter equations of state Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can be usually applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density: with The reduced density and reduced temperature are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equations of state. Mixture models for multiparameter equations of state exist, as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times. One example of such an equation of state is the form proposed by Span and Wagner. This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms. List of further equations of state Stiffened equation of state When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used: where is the internal energy per unit mass, is an empirically determined constant typically taken to be about 6.1, and is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres). The equation is stated in this form because the speed of sound in water is given by . Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa). This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks. 
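For reference, the Helmholtz-explicit form used by multiparameter equations of state, and the stiffened equation of state discussed above, are commonly written as follows; the symbols and the typical values (γ ≈ 6.1, p0 on the order of 2 GPa for water) follow the definitions given in the text, and exact conventions vary between references:

\[ \frac{a(T,\rho)}{RT} = \alpha^{0}(\delta,\tau) + \alpha^{\mathrm{r}}(\delta,\tau), \qquad \delta = \frac{\rho}{\rho_c}, \quad \tau = \frac{T_c}{T} \]
\[ p = \rho\,(\gamma - 1)\,e - \gamma\, p^{0}, \qquad c^2 = \frac{\gamma\,(p + p^{0})}{\rho} \]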
Morse oscillator equation of state An equation of state of Morse oscillator has been derived, and it has the following form: Where is the first order virial parameter and it depends on the temperature, is the second order virial parameter of Morse oscillator and it depends on the parameters of Morse oscillator in addition to the absolute temperature. is the fractional volume of the system. Ultrarelativistic equation of state An ultrarelativistic fluid has equation of state where is the pressure, is the mass density, and is the speed of sound. Ideal Bose equation of state The equation of state for an ideal Bose gas is where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form. Jones–Wilkins–Lee equation of state for explosives (JWL equation) The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives. The ratio is defined by using , which is the density of the explosive (solid part) and , which is the density of the detonation products. The parameters , , , and are given by several references. In addition, the initial density (solid part) , speed of detonation , Chapman–Jouguet pressure and the chemical energy per unit volume of the explosive are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below. Others Tait equation for water and other liquids. Several equations are referred to as the Tait equation. Murnaghan equation of state Birch–Murnaghan equation of state Stacey–Brennan–Irvine equation of state Modified Rydberg equation of state Adapted polynomial equation of state Johnson–Holmquist equation of state Mie–Grüneisen equation of state Anton-Schmidt equation of state State-transition equation See also Gas laws Departure function Table of thermodynamic equations Real gas Cluster expansion Polytrope References External links Equations of physics Engineering thermodynamics Mechanical engineering Fluid mechanics Thermodynamic models
Equation of state
Physics,Chemistry,Mathematics,Engineering
3,273
1,891,001
https://en.wikipedia.org/wiki/Pye%20%28electronics%20company%29
Pye Ltd was an electronics company founded in 1896 in Cambridge, England, as a manufacturer of scientific instruments. The company merged with EKCO in 1960. Philips of the Netherlands acquired a majority shareholding in 1967, and later gained full ownership. Early growth W. G. Pye & Co. Ltd was founded in 1896 in Cambridge by William Pye, superintendent of the Cavendish Laboratory workshop, as a part-time business making scientific instruments. By the outbreak of World War I in 1914, the company employed 40 people manufacturing instruments used for teaching and research. The war increased demand for such instruments and the War Office needed experimental thermionic valves. The manufacture of such components afforded the company the technical knowledge needed to develop the first wireless receiver when the first UK broadcasts were made by the British Broadcasting Company in 1922. Instruments continued to be designed and manufactured under W G Pye Ltd, later situated in York Street Cambridge, while a separate company was started to build wireless components in a factory to become known as Cambridge Works at Church Path, Chesterton. A series of receivers made at Church Path were given positive reviews by Popular Wireless magazine. In 1924, Harold Pye, the son of the founder, and Edward Appleton, his former tutor at St John's College, Cambridge, designed a new series of receivers which proved even more saleable. In 1928 William Pye sold the company, now renamed Pye Radio Limited, to C. O. Stanley, who established a chain of small component-manufacturing factories across East Anglia. When the BBC started to explore television broadcasting, Pye found that the closest of their East Anglian offices was 25 miles outside the estimated effective 25-mile radius of the Alexandra Palace transmitter. Stanley was fascinated by the new technology and on his instructions the company built a high gain receiver that could pick up these transmissions. In 1937, a five-inch Pye television receiver was priced at 21 guineas (£22.05) and within two years the company had sold 2,000 sets at an average price of £34 (). The new EF50 valve from Philips enabled Pye to build this high-gain receiver, which was a tuned radio frequency (TRF) type and not a superhet type. With the outbreak of World War II, the Pye receiver using EF50 valves became a key component of many radar receivers, forming the 45 MHz Intermediate Amplifier (IF) section of the equipment. Pye went on to design and manufacture radio equipment for the British Army, including Wireless Sets No. 10, 18, 19, 22, 62 and 68. Pye was also responsible for the early development work on the proximity fuze for anti-aircraft shells. In February 1944, Pye formed a subsidiary called Pye Telecommunications Ltd, which it intended would design and produce radio communications equipment when the war ended. This company grew to become the leading UK producer of mobile radio equipment for commercial, business, industrial, police and government purposes. Popular products included the Reporter, Cambridge, and Westminster series of VHF radio transceivers. The company also produced the PF8 UHF hand-held radios featured in episodes of The Professionals television series. After the war, Pye's B16T nine-inch table television was designed around the 12-year-old EF50 valve. It was soon superseded by the B18T, which used an extra high tension (EHT) transformer developed by German companies before the war to produce the high voltage required by the cathode-ray tube. 
In 1955, the company diversified into music production with Pye Records. The Independent Television Authority (ITA) started public transmissions in the same year, so Pye produced new televisions that could receive ITV, and the availability of a second channel introduced the need for tuners. Pye's VT4 tunable television was launched in March 1954 and was followed by the V14. The V14 proved to be technically unreliable and so tarnished the Pye name that many dealers transferred their allegiance to other manufacturers. This failure so damaged corporate confidence that Pye avoided being first-to-market thereafter, although they developed the first British transistor in 1956. Pye TVT Ltd was formed to produce broadcast television equipment, including cameras, which were popular with British broadcasters including the BBC as well as achieving international sales. The early cameras were called "Photicon" and the later models by their Mk number: 2, 3, etc. The Mk7/8 solid-state monochrome cameras were the last to be produced. The Pye Mk6 image orthicon camera, known as the PC60, was the last version supplied to BBC Outside Broadcasts in 1963 for a new fleet of eight outside broadcast vans. These cameras were the first generation of outside broadcast cameras to feature a zoom lens, rather than a turret system. These three-tubed cameras were known for their reliability but were so heavy and unwieldy that they required a stretcher to carry them around the OB site. The Pye PC60 was eventually replaced by the EMI 2001 on BBC outside broadcasts but, during its lifespan, it was used on numerous high-profile productions including Wimbledon tennis and Open golf. The ITV companies purchased the Pye Mk3s, and to a lesser extent the Mk4s and Mk7s. Pye TVT never produced a colour broadcast television camera, but there was an abortive colour telecine camera; few if any were sold. The reason for this was probably the financial difficulties the company was in. In 1960, Pye acquired the Telephone Manufacturing Company. Decline and sale Not wishing to risk further damage to their fragile brand, Pye first used transistors in a product sold as a subsidiary brand: the Pam 710 radio (1956), with the transistors themselves labelled Newmarket Transistors (another subsidiary). When this proved acceptable the company launched the Pye 123 radio (1957, still with the Newmarket label on the novel internal components). Products such as these reversed the decline but the arrival of Japanese competition reduced demand to a level that threatened the viability of the manufacturing plants. In 1960, Pye merged with its rival EKCO to form British Electronic Industries Ltd, with C. O. Stanley as chairman and E. K. Cole as vice-chairman. The company, like most of its domestic competitors, attempted to restore demand with price competition and, where viable production exceeded demand, sold excess stock at loss-making clearance prices. By 1966 Pye was in such difficulties that they started to reduce their manufacturing capacity with closure of the EKCO factory in Southend-on-Sea. Philips attempted to buy out the ailing Pye in 1966. The Minister of Technology Tony Benn determined that a complete sale would create a de facto monopoly so he permitted the transfer of only a 60% shareholding, with an undertaking that the Lowestoft factory would continue to manufacture televisions. 
On 20 April 1964, BBC2 was launched, broadcasting entirely on the new television standard of 625-line UHF, but BBC1 and ITV would remain in 405-line VHF until 1969, when they began UHF broadcasting. During this transition, television receivers in the UK had to handle both the VHF and UHF wavebands, which added to the cost of producing the sets. The price of a dual-standard set, combined with the limited coverage of BBC2, meant that initial sales of dual-standard sets were slow. PAL colour test signals began in 1966 and scheduled transmissions commenced on BBC2 on 1 July 1967, with a full colour service beginning on that channel on 2 December 1967. BBC1 and ITV followed suit on 15 November 1969. Colour broadcasting added further to the cost and complexity of producing television sets. The resulting high price and low coverage areas of the new technology delayed consumer adoption further: it wasn't until 1977 that the number of colour licences sold outnumbered those of black and white. In the early 1970s, Sony and Hitachi launched UK colour televisions that cost less than £200. Domestic manufacturers attempted to compete, but were handicapped by outdated manufacturing techniques and an inflexible workforce. Pye found themselves with high stocks and low cash flow at a time of poor industrial relations, a low-growth economy and limited scope for reducing costs. The Pye group of companies was bought outright by Philips in 1976. The Lowestoft factory was subsequently sold to Sanyo and Philips moved the manufacture of Pye televisions to Singapore. Prior to the manufacturing offshoring, the company produced a range of televisions branded 'Pye Chelsea'. The range were teak-clad with stainless steel 'feet' and sported three large channel selectors. While unsuitable for the reception of the forthcoming Channel 4, the equipment would operate through early video recorders, machines with larger channel capability. The Chelsea range were popular with TV rental companies such as Radio Rentals, Rumbelows and Wigfalls. Maintenance of these sets continued well into the 1980s, with the northern rental chain Wigfalls being the last to withdraw them in 1988. In 1979 Pye were implicated in an episode of Granada's World in Action in relation to the sale of UHF and VHF radios as well as telephone intercept equipment which was used by the Ugandan Public Safety Unit, the secret police of Idi Amin's rule responsible for killing perhaps several hundred thousand Ugandans. Pye had been supplying Uganda through Wilken Telecommunications, its East Africa distributor. The Pye brand enjoyed a short-lived renaissance in audio equipment (known as music centres) during the 1970s, and in the late 1980s with televisions. The brand later appeared on DVD recorders. In 2022, it appears that the Pye brand and symbol has been purchased by broadcast audio manufacturer Alice Ltd. 
References Further reading Discussion and demonstration of the Pye PC60 camera by former BBC Outside Broadcast camera operator Discussion and comparison of Pye PC60 and EMI 2001 cameras 'Pye', East Anglia Network (1997) Retrieved 15 May 2005 Pye Telecom History G8EPR Pye Museum Photographs of a demo of Pye TV in Mons (Belgium) in 1947 can be seen here The Pye 1005 'Achiphon' stereo record player held at the British Library Pye Story – Waihi, New Zealand The Museum of the Broadcast Television Camera, Pye pages The Pye Museum Defunct manufacturing companies of the United Kingdom Electronics companies established in 1896 Electronics companies of the United Kingdom Companies based in Cambridge Philips 1896 establishments in England Electronics companies disestablished in 1988 Radio manufacturers
Pye (electronics company)
Engineering
2,136
68,502,903
https://en.wikipedia.org/wiki/NGC%206337
NGC 6337, the Ghostly Cheerio or Cheerio Nebula, is a toroidal planetary nebula in the constellation Scorpius. It appears as a ring-shaped (annular) transparent nebula resembling a piece of the breakfast cereal Cheerios, hence the name. Filament and knots, and a faint shell surround the ring. Its magnitude is 11.90; its position in Scorpius is right ascension 17h 22m15.67s, declination -38° 29' 01.73". The Ghostly Cheerio has a redshift value of -0.000236. There is convincing evidence that a binary nucleus exists at the center of the nebula, with masses of 0.6 and 0.3 M⊙, and a separation of ≤ 1.26 R⊙, indicating a probable common envelope phase. The Ghostly Cheerio's projected radial expansion is slow, averaging . See also List of NGC objects (6001–7000) References Scorpius 6337 Planetary nebulae
NGC 6337
Astronomy
209
1,811,881
https://en.wikipedia.org/wiki/Remote%20pickup%20unit
A remote pickup unit or RPU is a radio system using special radio frequencies set aside for electronic news-gathering (ENG) and remote broadcasting. It can also be used for other types of point-to-point radio links. An RPU is used to send program material from a remote location to the broadcast station or network. Usually these systems use specialized radio equipment with high audio fidelity. One manufacturer, Marti, was best known for making remote pickup equipment, so much so that the name is often used to refer to a remote pickup unit regardless of who actually manufactured the equipment. Today much remote broadcasting uses digital audio systems fed over ISDN telephone lines. This method is favored because of the reliability of telephone lines compared with a radio link back to the station. The radio RPU remains favored for ENG, however, because of its flexibility. Footnotes Broadcast engineering
Remote pickup unit
Engineering
178
12,533,665
https://en.wikipedia.org/wiki/Wildlife%20of%20R%C3%A9union
The wildlife of Réunion is composed of its flora, fauna and funga. Being a small island, it only has nine native species of mammals, but ninety-one species of birds. Fauna Birds Mammals Mauritian flying fox, Pteropus niger VU Small Mauritian flying fox, Pteropus subniger EX Lesser yellow bat, Scotophilus borbonicus CR Natal free-tailed bat, Mormopterus acetabulosus VU Mauritian tomb bat, Taphozous mauritianus LC Southern right whale, Eubalaena australis LC, rarer in today's Réunion Humpback whale, Megaptera novaeangliae LC Sei whale, Balaenoptera borealis EN Southern fin whale, Balaenoptera physalus quoyi EN Sperm whale, Physeter macrocephalus VU Dwarf sperm whale, Kogia sima LR/lc Blainville's beaked whale, Mesoplodon densirostris DD Gray's beaked whale, Mesoplodon grayi DD Short-finned pilot whale, Globicephala macrorhynchus DD Those mammals not native to Réunion include the tailless tenrec, dog, cat, pig, goat, sheep, rat and cattle. Reptiles Geckos Seven species of day geckos and four species of night geckos: Réunion Island day gecko, Phelsuma borbonica borbonica, endemic Réunion Island ornate day gecko, Phelsuma inexpectata, endemic Gold dust day gecko, Phelsuma laticauda, introduced from Madagascar Lined day gecko, Phelsuma lineata, introduced from Madagascar Blue-tailed day gecko, Phelsuma cepediana, introduced from Mauritius around 1960 Indopacific tree gecko, Hemiphyllodactylus typus, introduced Tropical house gecko, Hemidactylus mabouia, introduced Common house gecko, Hemidactylus frenatus, introduced Phelsuma madagascariensis, introduced from Madagascar in 1970 by a veterinarian Pacific gecko, Gehyra mutilata, introduced Agamid lizards Oriental garden lizard, Calotes versicolor, introduced Rainbow agama, Agama agama, introduced around 1999/2000 Scincidae Bojer's skink, Gongylomorphus bojerii, introduced (originally endemic to Mauritius) Mauritius skink, Leiolopisma mauritiana, extinct around 1600 Chameleons Panther chameleon, Furcifer pardalis, introduced from Madagascar Snakes Two introduced species; Brahminy blind snake, Typhlops braminus Indian wolf snake, Lycodon aulicus, introduced from Mauritius around 1850 Turtles Marine turtles see also: Green sea turtle, Chelonia mydas Hawksbill sea turtle, Eretmochelys imbricata Loggerhead sea turtle, Caretta caretta Olive ridley sea turtle, Lepidochelys olivacea Leatherback sea turtle, Dermochelys coriacea Land turtles Réunion giant tortoise, Cylindraspis indica, became extinct 1800 Radiated tortoise, Astrochelys radiata, introduced from Madagascar Molluscs Fungi See also List of birds of Réunion List of mammals of Réunion List of extinct animals of Réunion References Reunion Biota of Réunion
Wildlife of Réunion
Biology
698
1,300,358
https://en.wikipedia.org/wiki/Fock%20matrix
In the Hartree–Fock method of quantum mechanics, the Fock matrix is a matrix approximating the single-electron energy operator of a given quantum system in a given set of basis vectors. It is most often formed in computational chemistry when attempting to solve the Roothaan equations for an atomic or molecular system. The Fock matrix is actually an approximation to the true Hamiltonian operator of the quantum system. It includes the effects of electron-electron repulsion only in an average way. Because the Fock operator is a one-electron operator, it does not include the electron correlation energy. The Fock matrix is defined by the Fock operator. In its general form the Fock operator is written \( \hat{F} = \hat{h} + \sum_{i=1}^{N} \left( \hat{J}_i - \hat{K}_i \right) \), where i runs over the total of N spin orbitals. In the closed-shell case, it can be simplified by considering only the spatial orbitals, noting that the Coulomb terms are duplicated and the exchange terms are null between different spins. For the restricted case, which assumes closed-shell orbitals and single-determinantal wavefunctions, the Fock operator for the i-th electron is given by \( \hat{F}(i) = \hat{h}(i) + \sum_{j=1}^{n/2} \left[ 2\hat{J}_j(i) - \hat{K}_j(i) \right] \), where: \( \hat{F}(i) \) is the Fock operator for the i-th electron in the system, \( \hat{h}(i) \) is the one-electron Hamiltonian for the i-th electron, n is the number of electrons and n/2 is the number of occupied orbitals in the closed-shell system, \( \hat{J}_j(i) \) is the Coulomb operator, defining the repulsive force between the j-th and i-th electrons in the system, and \( \hat{K}_j(i) \) is the exchange operator, defining the quantum effect produced by exchanging two electrons. The Coulomb operator is multiplied by two since there are two electrons in each occupied orbital. The exchange operator is not multiplied by two since it has a non-zero result only for electrons which have the same spin as the i-th electron. For systems with unpaired electrons there are many choices of Fock matrices. See also Hartree–Fock method Unrestricted Hartree–Fock Restricted open-shell Hartree–Fock References Atomic, molecular, and optical physics Quantum chemistry Matrices
Fock matrix
Physics,Chemistry,Mathematics
420
49,667,015
https://en.wikipedia.org/wiki/Carbene%20radical
Carbene radicals are a special class of organometallic carbenes. The carbene radical can be formed by one-electron reduction of Fischer-type carbenes using an external reducing agent, or directly upon carbene formation at an open-shell transition metal complex (in particular low-spin cobalt(II) complexes) using diazo compounds and related carbene precursors. Cobalt(III)-carbene radicals have found catalytic applications in cyclopropanation reactions, as well as in a variety of other catalytic radical-type ring-closing reactions. Theoretical calculations and EPR studies confirmed their radical-type behaviour and explained the bonding interactions underlying the stability of the carbene radical. Stable carbene radicals of other metals are known, but the catalytically relevant cobalt(III)-carbene radicals have thus far only been synthesized as long-lived reactive intermediates. Bonding interactions and radical reactivity The chemical bond present in carbene radicals is surprising in that it possesses aspects of both Fischer- and Schrock-type carbenes. As a result, the cobalt carbene radical complexes have discrete radical character at their carbon atom, thus giving rise to interesting catalytic radical-type reaction pathways. The mechanism of formation of a carbene radical at cobalt(II) typically involves carbene generation at the metal with simultaneous intramolecular electron transfer from the metal into the metal-carbene π* anti-bonding molecular orbital constructed from the metal d-orbital and the carbene p-orbital. As such, carbene radicals are perhaps best described as 'one-electron reduced Fischer-type carbenes'. Discrete electron transfer from a sigma-type metal d-orbital (typically the dz2 orbital) occurs, leading to the typical radical character of the carbene carbon. This behaviour not only explains the carbon-centered radical-type reactivity of these complexes, but also their reduced electrophilicity (suppressing carbene-carbene dimerisation side reactions) as well as their enhanced reactivity towards electron-deficient substrates. Furthermore, second coordination sphere hydrogen-bonding interactions give rise to faster reactions because H-bonds are stronger to the reduced carbene as compared to the precursor. Such H-bonding interactions can also facilitate chirality transfer in enantioselective carbene-transfer reactions. In order for the σ bond to be stabilized (typically with a bond order slightly less than 1), a back-bonding action from the π molecular orbital to the anti-bonding π* molecular orbital is necessary, and the porphyrin ring serves as an electron π-symmetry "buffer" to ensure this interaction is obtained. The back-donation to the π* orbital would result in unfavorable excess electron density on the carbene carbon, but the presence of adjacent functional groups (carbonyl or sulfonyl groups have the desired electronegativity) relieves this electron build-up and yields the final radical electron, which occupies a single p atomic orbital on the carbon. See also References Carbenes Organometallic chemistry Functional groups Organic compounds
Carbene radical
Chemistry
647
25,151,171
https://en.wikipedia.org/wiki/Zeta2%20Scorpii
Zeta2 Scorpii (Zeta2 Sco, ζ2 Scorpii, ζ2 Sco) is a K-type orange giant star in the constellation of Scorpius. It has an apparent visual magnitude which varies between 3.59 and 3.65, and is located near the blue-white supergiant star ζ1 Scorpii in Earth's sky. In astronomical terms, ζ2 is much closer to the Sun and unrelated to ζ1 except for line-of-sight coincidence. ζ1 is about 6,000 light-years away and probably an outlying member of the open star cluster NGC 6231 (also known as the "northern jewel box" cluster), whereas ζ2 is a mere 135 light-years distant and thus much less luminous in real terms. ζ2 can also be distinguished from its optical partner, ζ1, because of its orangish colour, especially in long-exposure astrophotographs. References Scorpius K-type giants Scorpii, Zeta2 152334 082729 Suspected variables 6271 Durchmusterung objects
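As a rough illustration (not part of the original article) of why the much nearer ζ2 must be intrinsically the less luminous of the pair, the distance modulus can be applied to the figures quoted above; the only inputs are the mid-range apparent magnitude of about 3.6 and the conversion 135 light-years ≈ 41 parsecs.

```latex
M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
  \approx 3.6 - 5\log_{10}\!\left(\frac{41}{10}\right)
  \approx +0.5
```

An absolute magnitude near +0.5 is typical of an ordinary giant, whereas a star that still appears at naked-eye brightness from roughly 6,000 light-years must be intrinsically brighter by a factor on the order of the distance ratio squared, ignoring interstellar extinction.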
Zeta2 Scorpii
Astronomy
245
15,030,937
https://en.wikipedia.org/wiki/P2RX2
P2X purinoceptor 2 is a protein that in humans is encoded by the P2RX2 gene. The product of this gene belongs to the family of purinoceptors for ATP. This receptor functions as a cation conducting ligand-gated ion channel. Binding to ATP mediates synaptic transmission between neurons and from neurons to smooth muscle. Six transcript variants encoding six distinct isoforms have been identified for this gene. References Further reading External links Ion channels
P2RX2
Chemistry
101
18,842,002
https://en.wikipedia.org/wiki/Shampoo
Shampoo is a hair care product, typically in the form of a viscous liquid, that is formulated to be used for cleaning scalp hair. Less commonly, it is available in solid bar format. ("Dry shampoo" is a separate product.) Shampoo is used by applying it to wet hair, massaging the product into the hair, roots and scalp, and then rinsing it out. Some users may follow a shampooing with the use of hair conditioner. Shampoo is typically used to remove the unwanted build-up of sebum (natural oils) in the hair without stripping out so much as to make hair unmanageable. Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine in water. The sulfate ingredient acts as a surfactant, trapping oils and other contaminants, similarly to soap. Specialty shampoos are marketed for particular needs, such as dandruff, colour-treated hair, or use on babies. There are also shampoos intended for animals that may contain insecticides or other medications to treat skin conditions or parasite infestations such as fleas. History Indian subcontinent In the Indian subcontinent, a variety of herbs and their extracts have been used as shampoos since ancient times. The earliest origins of shampoo can be traced to the Indus Valley Civilization. A very effective early shampoo was made by boiling Sapindus with dried Indian gooseberry (amla) and a selection of other herbs, using the strained extract. Sapindus, also known as soapberries or soapnuts, a tropical tree widespread in India, is called ksuna (Sanskrit: क्षुण) in ancient Indian texts, and its fruit pulp contains saponins, which are a natural surfactant. The extract of soapberries creates a lather which Indian texts called phenaka (Sanskrit: फेनक). It leaves the hair soft, shiny and manageable. Other products used for hair cleansing were shikakai (Acacia concinna), hibiscus flowers, ritha (Sapindus mukorossi) and arappu (Albizzia amara). Guru Nanak, the founder and the first Guru of Sikhism, made references to the soapberry tree and soap in the 16th century. Cleansing the hair and body massage (champu) during one's daily bath was an indulgence of early colonial traders in India. When they returned to Europe, they introduced the newly learned habits, including the hair treatment they called shampoo. The word shampoo entered the English language from the Indian subcontinent during the colonial era. It dated to 1762 and was derived from a Hindi word, itself derived from a Sanskrit root which means 'to press, knead, or soothe'. Europe Sake Dean Mahomed, an Indian traveller, surgeon, and entrepreneur, is credited with introducing the practice of shampoo or "shampooing" to Britain. In 1814, Mahomed, with his Irish wife Jane Daly, opened the first commercial "shampooing" vapour masseur bath in England, in Brighton. He described the treatment in a local paper as "The Indian Medicated Vapour Bath (type of Turkish bath), a cure to many diseases and giving full relief when everything fails; particularly Rheumatic and paralytic, gout, stiff joints, old sprains, lame legs, aches and pains in the joints". His published account of the treatment, featuring testimonies from his patients as well as its details, made him famous. The book acted as a marketing tool for his unique baths in Brighton and capitalised on the early 19th-century trend for seaside spa treatments. During the early stages of shampoo in Europe, English hair stylists boiled shaved soap in water and added herbs to give the hair shine and fragrance. 
Commercially made shampoo was available from the turn of the 20th century. A 1914 advertisement for Canthrox Shampoo in American Magazine showed young women at camp washing their hair with Canthrox in a lake; magazine advertisements in 1914 by Rexall featured Harmony Hair Beautifier and Shampoo (Victoria Sherrow, Encyclopedia of Hair: A Cultural History, 2007, s.v. "Advertising", p. 7). In 1900, German perfumer and hair-stylist Josef Wilhelm Rausch developed the first liquid hair washing soap and named it "Champooing" in Emmishofen, Switzerland. Later, in 1919, J.W. Rausch developed an antiseptic chamomile shampooing with a pH of 8.5. In 1927, liquid shampoo was improved for mass production by German inventor Hans Schwarzkopf in Berlin; his name became a shampoo brand sold in Europe. Originally, soap and shampoo were very similar products, both containing the same naturally derived surfactants, a type of detergent. Modern shampoo as it is known today was first introduced in the 1930s with Drene, the first shampoo using synthetic surfactants instead of soap. Indonesia Early shampoos used in Indonesia were made from the husk and straw (merang) of rice. The husks and straws were burned into ash, and the ashes (which have alkaline properties) were mixed with water to form a lather. The ashes and lather were scrubbed into the hair and rinsed out, leaving the hair clean, but very dry. Afterwards, coconut oil was applied to the hair in order to moisturize it. Philippines Filipinos traditionally used gugo before commercial shampoos were sold in stores. The shampoo is obtained by soaking and rubbing the bark of the vine Gugo (Entada phaseoloides), producing a lather that cleanses the scalp effectively. Gugo is also used as an ingredient in hair tonics. Pre-Columbian North America Certain Native American tribes used extracts from North American plants as hair shampoo; for example, the Costanoans of present-day coastal California used extracts from the coastal woodfern, Dryopteris expansa. Pre-Columbian South America Before quinoa can be eaten, the saponin must be washed out from the grain. Pre-Columbian Andean civilizations used this soapy by-product as a shampoo. Types Shampoos can be classified into four main categories: deep cleansing shampoos, sometimes marketed under descriptions such as volumizing, clarifying, balancing, oil control, or thickening, which have a slightly higher amount of detergent and create a lot of foam; conditioning shampoos, sometimes marketed under descriptions such as moisturizing, 2-in-1, smoothing, anti-frizz, color care, and hydrating, which contain an ingredient like silicone or polyquaternium-10 to smooth the hair; baby shampoos, sometimes marketed as tear-free, which contain less detergent and produce less foam; and anti-dandruff shampoos, which are medicated to reduce dandruff. Composition Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine in water to form a thick, viscous liquid. Other essential ingredients include salt (sodium chloride), which is used to adjust the viscosity, a preservative and fragrance. 
Other ingredients are generally included in shampoo formulations to maximize the following qualities: pleasing foam, ease of rinsing, minimal skin and eye irritation, a thick or creamy feel, pleasant fragrance, low toxicity, good biodegradability, slight acidity (pH less than 7), no damage to hair, and repair of damage already done to hair. Many shampoos are pearlescent. This effect is achieved by the addition of tiny flakes of suitable materials, e.g. glycol distearate, chemically derived from stearic acid, which may have either animal or vegetable origins. Glycol distearate is a wax. Many shampoos also include silicone to provide conditioning benefits. Commonly used ingredients Ammonium chloride Ammonium lauryl sulfate Glycol Sodium laureth sulfate is derived from coconut oils and is used to soften water and create a lather. Hypromellose cellulose ethers are widely used as thickeners, rheology modifiers, emulsifiers and dispersants in shampoo products. Sodium lauroamphoacetate is naturally derived from coconut oils and is used as a cleanser and counter-irritant. This is the ingredient that makes the product tear-free. Polysorbate 20 (abbreviated as PEG(20)) is a mild glycol-based surfactant that is used to solubilize fragrance oils and essential oils, meaning it causes liquid to spread across and penetrate the surface of a solid (i.e. hair). Polysorbate 80 (abbreviated as PEG(80)) is a glycol used to emulsify (or disperse) oils in water so the oils do not float on top. PEG-150 distearate is a simple thickener. Citric acid is produced biochemically and is used as an antioxidant to preserve the oils in the product. While it is a severe eye-irritant, the sodium lauroamphoacetate counteracts that property. Citric acid is used to adjust the pH down to approximately 5.5. It is a fairly weak acid which makes the adjustment easier. Shampoos usually are at pH 5.5 because at slightly acidic pH, the scales on the hair fibre lie flat, making the hair feel smooth and look shiny. It also has a small amount of preservative action. Citric acid, as opposed to any other acid, will prevent bacterial growth. Quaternium-15 is used as a bactericidal and fungicidal preservative. Polyquaternium-10 acts as the conditioning ingredient, providing moisture and fullness to the hair. Di-PPG-2 myreth-10 adipate is a water-dispersible emollient that forms clear solutions with surfactant systems. Chloromethylisothiazolinone, or CMIT, is a powerful biocide and preservative. Benefit claims regarding ingredients In the United States, the Food and Drug Administration (FDA) mandates that shampoo containers accurately list ingredients on the product's container. The government further regulates what shampoo manufacturers can and cannot claim as any associated benefit. Shampoo producers often use these regulations to challenge marketing claims made by competitors, thereby helping to enforce them. While the claims may be substantiated, the testing methods and details of such claims are not as straightforward. For example, many products are purported to protect hair from damage due to ultraviolet radiation. While the ingredient responsible for this protection does block UV, it is not often present in a high enough concentration to be effective. The North American Hair Research Society has a program to certify functional claims based on third-party testing. Shampoos made for treating medical conditions such as dandruff or itchy scalp are regulated as OTC drugs in the US marketplace. 
In the European Union, there is a requirement for the anti-dandruff claim to be substantiated as with any other advertising claim, but dandruff is not considered to be a medical problem. Health risks A number of contact allergens are used as ingredients in shampoos, and contact allergy caused by shampoos is well known. Patch testing can identify ingredients to which patients are allergic, after which a physician can help the patient find a shampoo that is free of the ingredient to which they are allergic. The US bans 11 ingredients from shampoos, Canada bans 587, and the EU bans 1328. Specialized shampoos Dandruff Cosmetic companies have developed shampoos specifically for those who have dandruff. These contain fungicides such as ketoconazole, zinc pyrithione and selenium disulfide, which reduce loose dander by killing fungi like Malassezia furfur. Coal tar and salicylate derivatives are often used as well. Alternatives to medicated shampoos are available for people who wish to avoid synthetic fungicides. Such shampoos often use tea tree oil, essential oils or herbal extracts. Colored hair Many companies have also developed color-protection shampoos suitable for colored hair; some of these shampoos contain gentle cleansers according to their manufacturers. Shampoos for color-treated hair are a type of moisturizing shampoo. Baby Shampoo for infants and young children is formulated so that it is less irritating and usually less prone to produce a stinging or burning sensation if it were to get into the eyes. For example, Johnson's Baby Shampoo advertises under the premise of "No More Tears". This is accomplished by one or more of the following formulation strategies: (1) dilution, in case the product comes in contact with eyes after running off the top of the head with minimal further dilution; (2) adjusting pH to that of non-stress tears, approximately 7, which may be a higher pH than that of shampoos which are pH adjusted for skin or hair effects, and lower than that of shampoo made of soap; (3) use of surfactants which, alone or in combination, are less irritating than those used in other shampoos (e.g. sodium lauroamphoacetate); (4) use of nonionic surfactants of the form of polyethoxylated synthetic glycolipids and polyethoxylated synthetic monoglycerides, which counteract the eye sting of other surfactants without producing the anesthetizing effect of alkyl polyethoxylates or alkylphenol polyethoxylates. The distinction in 4 above does not completely surmount the controversy over the use of shampoo ingredients to mitigate eye sting produced by other ingredients, or the use of the products so formulated. The considerations in 3 and 4 frequently result in a much greater multiplicity of surfactants being used in individual baby shampoos than in other shampoos, and the detergency or foaming of such products may be compromised thereby. The monoanionic sulfonated surfactants and viscosity-increasing or foam stabilizing alkanolamides seen so frequently in other shampoos are much less common in the better baby shampoos. Sulfate-free shampoos Sulfate-free shampoos are composed of natural ingredients and free from both sodium lauryl sulfate and sodium laureth sulfate. These shampoos use alternative surfactants to cleanse the hair. Animal Shampoo intended for animals may contain insecticides or other medications for treatment of skin conditions or parasite infestations such as fleas or mange. These must never be used on humans. 
While some human shampoos may be harmful when used on animals, any human haircare products that contain active ingredients or drugs (such as zinc in anti-dandruff shampoos) are potentially toxic when ingested by animals. Special care must be taken not to use those products on pets. Cats are at particular risk due to their instinctive method of grooming their fur with their tongues. Shampoos that are especially designed to be used on pets, commonly dogs and cats, are normally intended to do more than just clean the pet's coat or skin. Most of these shampoos contain ingredients which act differently and are meant to treat a skin condition or an allergy or to fight against fleas. The main ingredients contained by pet shampoos can be grouped into insecticides, antiseborrheics, antibacterials, antifungals, emollients, emulsifiers and humectants. Whereas some of these ingredients may be effective in treating some conditions, pet owners are advised to use them according to their veterinarian's indications, because many of them cannot be used on cats or can harm the pet if misused. Generally, insecticidal pet shampoos contain pyrethrin, pyrethroids (such as permethrin, which may not be used on cats) and carbaryl. These ingredients are mostly found in shampoos that are meant to fight against parasite infestations. Antifungal shampoos are used on pets with yeast or ringworm infections. These might contain ingredients such as miconazole, chlorhexidine, povidone iodine, ketoconazole or selenium sulfide (which cannot be used on cats). Bacterial infections in pets are sometimes treated with antibacterial shampoos. They commonly contain benzoyl peroxide, chlorhexidine, povidone iodine, triclosan, ethyl lactate, or sulfur. Antipruritic shampoos are intended to provide relief of itching due to conditions such as atopy and other allergies. These usually contain colloidal oatmeal, hydrocortisone, Aloe vera, pramoxine hydrochloride, menthol, diphenhydramine, sulfur or salicylic acid. These ingredients are intended to reduce inflammation, cure the condition and ease the symptoms, while providing comfort to the pet. Antiseborrheic shampoos are those especially designed for pets with scales or those with excessively oily coats. These shampoos are made of sulfur, salicylic acid, refined tar (which cannot be used on cats), selenium sulfide (which cannot be used on cats) and benzoyl peroxide. All these are meant to treat or prevent seborrhea oleosa, which is a condition characterized by excess oils. Dry scales can be prevented and treated with shampoos that contain sulfur or salicylic acid and which can be used on both cats and dogs. Emollient shampoos are efficient in adding oils to the skin and relieving the symptoms of dry, itchy skin. They usually contain oils such as almond, corn, cottonseed, coconut, olive, peanut, Persia, safflower, sesame, lanolin, mineral or paraffin oil. The emollient shampoos are typically used with emulsifiers as they help distribute the emollients. These include ingredients such as cetyl alcohol, laureth-5, lecithin, PEG-4 dilaurate, stearic acid, stearyl alcohol, carboxylic acid, lactic acid, urea, sodium lactate, propylene glycol, glycerin, or polyvinylpyrrolidone. Although some of the pet shampoos are highly effective, others may be less effective for some conditions than others. Yet, although natural pet shampoos exist, it has been noted that some of these might cause irritation to the skin of the pet. 
Natural ingredients that might be potential allergens for some pets include eucalyptus, lemon or orange extracts and tea tree oil. By contrast, oatmeal appears to be one of the most widely skin-tolerated ingredients found in pet shampoos. Most ingredients found in a shampoo meant to be used on animals need to be safe for the pet to ingest, as there is a high likelihood that the pet will lick its coat, especially in the case of cats. Pet shampoos which include fragrances, deodorants or colors may harm the skin of the pet by causing inflammations or irritation. Shampoos that do not contain any unnatural additives are known as hypoallergenic shampoos and are increasing in popularity. Solid shampoo bars Solid shampoos or shampoo bars can either be soap-based or use other plant-based surfactants, such as sodium cocoyl isethionate or sodium coco-sulfate combined with oils and waxes. Soap-based shampoo bars are high in pH (alkaline) compared to human hair and scalps, which are slightly acidic. Alkaline pH increases the friction of the hair fibres, which may damage the hair cuticle, making it feel rough and drying out the scalp. Jelly and gel Stiff, non-pourable clear gels to be squeezed from a tube were once popular forms of shampoo, and can be produced by increasing a shampoo's viscosity. This type of shampoo cannot be spilled, but unlike a solid, it can still be lost down the drain by sliding off wet skin or hair. Paste and cream Shampoos in the form of pastes or creams were formerly marketed in jars or tubes. The contents were wet but not completely dissolved. They would apply faster than solids and dissolve quickly. Antibacterial Antibacterial shampoos are often used in veterinary medicine for various conditions, as well as in humans before some surgical procedures. No Poo Movement Closely associated with environmentalism, the "no poo" movement consists of people rejecting the societal norm of frequent shampoo use. Some adherents of the no poo movement use baking soda or vinegar to wash their hair, while others use diluted honey. Further methods include the use of raw eggs (potentially mixed with salt water), rye flour, or chickpea flour dissolved in water. Other people use nothing or rinse their hair only with conditioner. Theory In the 1970s, ads featuring Farrah Fawcett and Christie Brinkley asserted that it was unhealthy not to shampoo several times a week. This mindset is reinforced by the greasy feeling of the scalp after a day or two of not shampooing. Using shampoo every day removes sebum, the oil produced by the scalp. This causes the sebaceous glands to produce oil at a higher rate, to compensate for what is lost during shampooing. According to Michelle Hanjani, a dermatologist at Columbia University, a gradual reduction in shampoo use will cause the sebum glands to produce at a slower rate, resulting in less grease in the scalp. Although this approach might seem unappealing to some individuals, many people try alternate shampooing techniques like baking soda and vinegar in order to avoid ingredients used in many shampoos that make hair greasy over time. Whereas the use of baking soda for hair cleansing has been associated with hair damage and skin irritation, likely due to its high pH value and exfoliating properties, honey, egg, rye flour, and chickpea flour hair washes seem gentler for long-term use. 
See also Soap Dry shampoo Baby shampoo Hair conditioner Exfoliant No Poo References External links Drug delivery devices Hairdressing Indian inventions Personal hygiene products Toiletry
Shampoo
Chemistry
4,757
4,177,264
https://en.wikipedia.org/wiki/XMK%20%28operating%20system%29
The eXtreme Minimal Kernel (XMK) is a real-time operating system (RTOS) that is designed for minimal RAM/ROM use. It achieves this goal even though it is almost entirely written in the C programming language. As a consequence it can be easily ported to any 8-, 16-, or 32-bit microcontroller. XMK comes as two independent packages: the XMK Scheduler, which contains the core kernel and everything necessary to run a multithreaded embedded application, and the Application Programming Layer (APL), which provides higher-level functions atop the XMK Scheduler API. The XMK distribution contains no standard libraries such as libc; these are expected to be part of the development tools for the target system. External links XMK: eXtreme Minimal Kernel project home page (broken link) Windows Evolution Over Timeline Real-time operating systems Embedded operating systems
XMK (operating system)
Technology
184
47,261,590
https://en.wikipedia.org/wiki/Vivado
Vivado Design Suite is a software suite for synthesis and analysis of hardware description language (HDL) designs, superseding Xilinx ISE with additional features for system on a chip development and high-level synthesis (HLS). Vivado represents a ground-up rewrite and re-thinking of the entire design flow (compared to ISE). Like the later versions of ISE, Vivado includes an in-built logic simulator. Vivado also introduces high-level synthesis, with a toolchain that converts C code into programmable logic. Replacing the 15-year-old ISE with Vivado Design Suite took 1000 man-years and cost US$200 million. Features Vivado was introduced in April 2012, and is an integrated design environment (IDE) with system-to-IC level tools built on a shared scalable data model and a common debug environment. Vivado includes electronic system level (ESL) design tools for synthesizing and verifying C-based algorithmic IP; standards-based packaging of both algorithmic and RTL IP for reuse; standards-based IP stitching and systems integration of all types of system building blocks; and the verification of blocks and systems. A free version, the WebPACK Edition, provides designers with a limited version of the design environment. Components The Vivado High-Level Synthesis compiler enables C, C++ and SystemC programs to be directly targeted into Xilinx devices without the need to manually create RTL. Vivado HLS has been widely reported to increase developer productivity, and supports C++ classes, templates, functions and operator overloading. Vivado 2014.1 introduced support for automatically converting OpenCL kernels to IP for Xilinx devices. OpenCL kernels are programs that execute across various CPU, GPU and FPGA platforms. The Vivado Simulator is a component of the Vivado Design Suite. It is a compiled-language simulator that supports mixed-language simulation, Tcl scripts, encrypted IP and enhanced verification. The Vivado IP Integrator allows engineers to quickly integrate and configure IP from the large Xilinx IP library. The Integrator is also tuned for MathWorks Simulink designs built with Xilinx's System Generator and Vivado High-Level Synthesis. The Vivado Tcl Store is a scripting system for developing add-ons to Vivado, and can be used to add and modify Vivado's capabilities. Tcl is the scripting language on which Vivado itself is based. All of Vivado's underlying functions can be invoked and controlled via Tcl scripts. Device support Vivado supports Xilinx's 7-series and all the newer devices (UltraScale and UltraScale+ series). For development targeting older Xilinx devices and CPLDs, the discontinued Xilinx ISE has to be used. See also Xilinx ISE Intel Quartus Prime ModelSim References Computer-aided design software Electronic design automation software Digital electronics AMD software
Vivado
Engineering
632
7,088,921
https://en.wikipedia.org/wiki/Abel%27s%20test
In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel, who proved it in 1826. There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters. Abel's test in real analysis Suppose the following statements are true: $\sum a_n$ is a convergent series, $\{b_n\}$ is a monotone sequence, and $\{b_n\}$ is bounded. Then $\sum a_n b_n$ is also convergent. It is important to understand that this test is mainly pertinent and useful in the context of non-absolutely convergent series $\sum a_n$. For absolutely convergent series, this theorem, albeit true, is almost self-evident. This theorem can be proved directly using summation by parts. Abel's test in complex analysis A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence of positive real numbers $(a_n)$ is decreasing monotonically (or at least that for all n greater than some natural number m, we have $a_{n+1} \le a_n$) with $\lim_{n \to \infty} a_n = 0$, then the power series $f(z) = \sum_{n=0}^{\infty} a_n z^n$ converges everywhere on the closed unit circle, except when z = 1. Abel's test cannot be applied when z = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence R ≠ 1 by a simple change of variables ζ = z/R. Notice that Abel's test is a generalization of the Leibniz Criterion by taking z = −1. Proof of Abel's test: Suppose that z is a point on the unit circle, z ≠ 1. For each $n \ge 0$, we define $f_n(z) := \sum_{k=0}^{n} a_k z^k$. By multiplying this function by (1 − z), we obtain $(1-z) f_n(z) = a_0 - a_n z^{n+1} + \sum_{k=1}^{n} (a_k - a_{k-1}) z^k$. The first summand is constant, the second converges uniformly to zero (since by assumption the sequence $(a_n)$ converges to zero). It only remains to show that the series $\sum_{k=1}^{\infty} (a_k - a_{k-1}) z^k$ converges. We will show this by showing that it even converges absolutely: $\sum_{k=1}^{\infty} \left| (a_k - a_{k-1}) z^k \right| \le \sum_{k=1}^{\infty} (a_{k-1} - a_k)$, where the last sum is a converging telescoping sum. The absolute value vanished because the sequence $(a_n)$ is decreasing by assumption. Hence, the sequence $(1-z) f_n(z)$ converges (even uniformly) on the closed unit disc. If $z \neq 1$, we may divide by (1 − z) and obtain the result. Another way to obtain the result is to apply Dirichlet's test. Indeed, for $z \neq 1$ with $|z| = 1$, the partial sums satisfy $\left| \sum_{k=0}^{n} z^k \right| = \left| \frac{1 - z^{n+1}}{1 - z} \right| \le \frac{2}{|1-z|}$, hence the assumptions of Dirichlet's test are fulfilled. Abel's uniform convergence test Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions or an improper integration of functions dependent on parameters. It is related to Abel's test for the convergence of an ordinary series of real numbers, and the proof relies on the same technique of summation by parts. The test is as follows. Let {gn} be a uniformly bounded sequence of real-valued continuous functions on a set E such that gn+1(x) ≤ gn(x) for all x ∈ E and positive integers n, and let {fn} be a sequence of real-valued functions such that the series Σfn(x) converges uniformly on E. Then Σfn(x)gn(x) converges uniformly on E. Notes References Gino Moretti, Functions of a Complex Variable, Prentice-Hall, Inc., 1964 External links Proof (for real series) at PlanetMath.org Convergence tests Articles containing proofs
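As a quick numerical illustration (not part of the original article), the complex-analysis version can be checked with the series whose coefficients are a_n = 1/n: they decrease monotonically to zero, so by Abel's test the power series converges at every point of the unit circle except z = 1, and its sum there is −log(1 − z). The Python sketch below compares partial sums against that closed form; the choice of test points and the number of terms are arbitrary.

```python
import numpy as np

# a_n = 1/n decreases monotonically to 0, so Abel's test guarantees that
# sum_{n>=1} z^n / n converges at every point of the unit circle except z = 1.
# Its value there is -log(1 - z), which serves as the reference below.

def partial_sum(z, terms=200_000):
    n = np.arange(1, terms + 1)
    return np.sum(z**n / n)

for theta in (np.pi, np.pi / 2, 0.05):   # test points z = e^{i*theta} on |z| = 1
    z = np.exp(1j * theta)
    err = abs(partial_sum(z) - (-np.log(1 - z)))
    print(f"theta = {theta:.2f}   |partial sum + log(1-z)| = {err:.2e}")
```

The error shrinks as more terms are taken, but it shrinks more slowly the closer the test point gets to the excluded point z = 1, which is consistent with the test saying nothing about convergence at z = 1 itself.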
Abel's test
Mathematics
781
49,669,587
https://en.wikipedia.org/wiki/Capitalism%20Nature%20Socialism
Capitalism Nature Socialism is an academic journal founded by James O'Connor and Barbara Laurence in 1988. It is published by Taylor and Francis. It publishes articles on political ecology, with an ecosocialist perspective. References Further reading Delayed open access journals Eco-socialism Environmental humanities journals Environmental social science journals Academic journals established in 1988 Socialist academic journal Taylor & Francis academic journals Quarterly journals
Capitalism Nature Socialism
Environmental_science
76
53,325
https://en.wikipedia.org/wiki/List%20of%20civil%20engineers
This list of civil engineers is a list of notable people who have been trained in or have practiced civil engineering. References Civil engineers
List of civil engineers
Technology,Engineering
65
63,030,231
https://en.wikipedia.org/wiki/COVID-19
Coronavirus disease 2019 (COVID-19) is a contagious disease caused by the coronavirus SARS-CoV-2. In January 2020 the disease spread worldwide, resulting in the COVID-19 pandemic. The symptoms of COVID‑19 can vary but often include fever, fatigue, cough, breathing difficulties, loss of smell, and loss of taste. Symptoms may begin one to fourteen days after exposure to the virus. At least a third of people who are infected do not develop noticeable symptoms. Of those who develop symptoms noticeable enough to be classified as patients, most (81%) develop mild to moderate symptoms (up to mild pneumonia), while 14% develop severe symptoms (dyspnea, hypoxia, or more than 50% lung involvement on imaging), and 5% develop critical symptoms (respiratory failure, shock, or multiorgan dysfunction). Older people have a higher risk of developing severe symptoms. Some complications result in death. Some people continue to experience a range of effects (long COVID) for months or years after infection, and damage to organs has been observed. Multi-year studies on the long-term effects are ongoing. COVID‑19 transmission occurs when infectious particles are breathed in or come into contact with the eyes, nose, or mouth. The risk is highest when people are in close proximity, but small airborne particles containing the virus can remain suspended in the air and travel over longer distances, particularly indoors. Transmission can also occur when people touch their eyes, nose or mouth after touching surfaces or objects that have been contaminated by the virus. People remain contagious for up to 20 days and can spread the virus even if they do not develop symptoms. Testing methods for COVID-19 to detect the virus's nucleic acid include real-time reverse transcription polymerase chain reaction (RTPCR), transcription-mediated amplification, and reverse transcription loop-mediated isothermal amplification (RTLAMP) from a nasopharyngeal swab. Several COVID-19 vaccines have been approved and distributed in various countries, many of which have initiated mass vaccination campaigns. Other preventive measures include physical or social distancing, quarantining, ventilation of indoor spaces, use of face masks or coverings in public, covering coughs and sneezes, hand washing, and keeping unwashed hands away from the face. While drugs have been developed to inhibit the virus, the primary treatment is still symptomatic, managing the disease through supportive care, isolation, and experimental measures. The first known case was identified in Wuhan, China, in December 2019. Most scientists believe the SARS-CoV-2 virus entered into human populations through natural zoonosis, similar to the SARS-CoV-1 and MERS-CoV outbreaks, and consistent with other pandemics in human history. Social and environmental factors including climate change, natural ecosystem destruction and wildlife trade increased the likelihood of such zoonotic spillover. Nomenclature During the initial outbreak in Wuhan, the virus and disease were commonly referred to as "coronavirus" and "Wuhan coronavirus", with the disease sometimes called "Wuhan pneumonia". In the past, many diseases have been named after geographical locations, such as the Spanish flu, Middle East respiratory syndrome, and Zika virus. 
In January 2020, the World Health Organization (WHO) recommended 2019-nCoV and 2019-nCoV acute respiratory disease as interim names for the virus and disease per 2015 guidance and international guidelines against using geographical locations or groups of people in disease and virus names to prevent social stigma. The official names COVID‑19 and SARS-CoV-2 were issued by the WHO on 11 February 2020 with COVID-19 being shorthand for "coronavirus disease 2019". The WHO additionally uses "the COVID‑19 virus" and "the virus responsible for COVID‑19" in public communications. Symptoms and signs Complications Complications may include pneumonia, acute respiratory distress syndrome (ARDS), multi-organ failure, septic shock, and death. Cardiovascular complications may include heart failure, arrhythmias (including atrial fibrillation), heart inflammation, thrombosis, particularly venous thromboembolism, and endothelial cell injury and dysfunction. Approximately 20–30% of people who present with COVID‑19 have elevated liver enzymes, reflecting liver injury. Neurologic manifestations include seizure, stroke, encephalitis, and Guillain–Barré syndrome (which includes loss of motor functions). Following the infection, children may develop paediatric multisystem inflammatory syndrome, which has symptoms similar to Kawasaki disease, which can be fatal. In very rare cases, acute encephalopathy can occur, and it can be considered in those who have been diagnosed with COVID‑19 and have an altered mental status. According to the US Centers for Disease Control and Prevention, pregnant women are at increased risk of becoming seriously ill from COVID‑19. This is because pregnant women with COVID‑19 appear to be more likely to develop respiratory and obstetric complications that can lead to miscarriage, premature delivery and intrauterine growth restriction. Fungal infections such as aspergillosis, candidiasis, cryptococcosis and mucormycosis have been recorded in people recovering from COVID‑19. Cause COVID‑19 is caused by infection with a strain of coronavirus known as "severe acute respiratory syndrome coronavirus 2" (SARS-CoV-2). Transmission Virology Severe acute respiratory syndrome coronavirus2 (SARS-CoV-2) is a novel severe acute respiratory syndrome coronavirus. It was first isolated from three people with pneumonia connected to the cluster of acute respiratory illness cases in Wuhan. All structural features of the novel SARS-CoV-2 virus particle occur in related coronaviruses in nature, particularly in Rhinolophus sinicus (Chinese horseshoe bats). Outside the human body, the virus is destroyed by household soap which bursts its protective bubble. Hospital disinfectants, alcohols, heat, povidone-iodine, and ultraviolet-C (UV-C) irradiation are also effective disinfection methods for surfaces. SARS-CoV-2 is closely related to the original SARS-CoV. It is thought to have an animal (zoonotic) origin. Genetic analysis has revealed that the coronavirus genetically clusters with the genus Betacoronavirus, in subgenus Sarbecovirus (lineage B) together with two bat-derived strains. It is 96% identical at the whole genome level to other bat coronavirus samples (BatCov RaTG13). The structural proteins of SARS-CoV-2 include membrane glycoprotein (M), envelope protein (E), nucleocapsid protein (N), and the spike protein (S). 
The M protein of SARS-CoV-2 is about 98% similar to the M protein of bat SARS-CoV, maintains around 98% homology with pangolin SARS-CoV, and has 90% homology with the M protein of SARS-CoV, whereas the similarity is only around 38% with the M protein of MERS-CoV. SARS-CoV-2 variants The many thousands of SARS-CoV-2 variants are grouped into either clades or lineages. The WHO, in collaboration with partners, expert networks, national authorities, institutions and researchers, has established nomenclature systems for naming and tracking SARS-CoV-2 genetic lineages by GISAID, Nextstrain and Pango. The expert group convened by the WHO recommended the labelling of variants using letters of the Greek alphabet, for example, Alpha, Beta, Delta, and Gamma, giving the justification that they "will be easier and more practical to be discussed by non-scientific audiences". Nextstrain divides the variants into five clades (19A, 19B, 20A, 20B, and 20C), while GISAID divides them into seven (L, O, V, S, G, GH, and GR). The Pango tool groups variants into lineages, with many circulating lineages being classed under the B.1 lineage. Several notable variants of SARS-CoV-2 emerged throughout 2020. Cluster 5 emerged among minks and mink farmers in Denmark. After strict quarantines and the slaughter of all the country's mink, the cluster was assessed to no longer be circulating among humans in Denmark as of 1 February 2021. There are five dominant variants of SARS-CoV-2 spreading among global populations: the Alpha variant (B.1.1.7, formerly called the UK variant), first found in London and Kent, the Beta variant (B.1.351, formerly called the South Africa variant), the Gamma variant (P.1, formerly called the Brazil variant), the Delta variant (B.1.617.2, formerly called the India variant), and the Omicron variant (B.1.1.529), which had spread to 57 countries as of 7 December. On December 19, 2023, the WHO declared that another distinctive variant, JN.1, had emerged as a "variant of interest". Though the WHO expected an increase in cases globally, particularly for countries entering winter, the overall global health risk was considered low. Pathophysiology The SARS-CoV-2 virus can infect a wide range of cells and systems of the body. COVID‑19 is most known for affecting the upper respiratory tract (sinuses, nose, and throat) and the lower respiratory tract (windpipe and lungs). The lungs are the organs most affected by COVID‑19 because the virus accesses host cells via the receptor for the enzyme angiotensin-converting enzyme 2 (ACE2), which is most abundant on the surface of type II alveolar cells of the lungs. The virus uses a special surface glycoprotein called a "spike" to connect to the ACE2 receptor and enter the host cell. Respiratory tract Following viral entry, COVID‑19 infects the ciliated epithelium of the nasopharynx and upper airways. Autopsies of people who died of COVID‑19 have found diffuse alveolar damage, and lymphocyte-containing inflammatory infiltrates within the lung. From the CT scans of COVID-19-infected lungs, white patches were observed containing fluid known as ground-glass opacity (GGO) or simply ground glass. This tended to correlate with the clear jelly liquid found in lung autopsies of people who died of COVID-19. One possibility addressed in medical research is that hyaluronic acid (HA) could be the leading factor for this observation of the clear jelly liquid found in the lungs, in what could be a hyaluronic storm, in conjunction with a cytokine storm. 
Nervous system One common symptom, loss of smell, results from infection of the support cells of the olfactory epithelium, with subsequent damage to the olfactory neurons. The involvement of both the central and peripheral nervous system in COVID‑19 has been reported in many medical publications. It is clear that many people with COVID-19 exhibit neurological or mental health issues. The virus is not detected in the central nervous system (CNS) of the majority of people with COVID-19 who also have neurological issues. However, SARS-CoV-2 has been detected at low levels in the brains of those who have died from COVID‑19, but these results need to be confirmed. While the virus has been detected in cerebrospinal fluid at autopsy, the exact mechanism by which it invades the CNS remains unclear and may first involve invasion of peripheral nerves given the low levels of ACE2 in the brain. The virus may also enter the bloodstream from the lungs and cross the blood–brain barrier to gain access to the CNS, possibly within an infected white blood cell. Research conducted when Alpha was the dominant variant has suggested COVID-19 may cause brain damage. Later research showed that all variants studied (including Omicron) killed brain cells, but the exact cells killed varied by variant. It is unknown if such damage is temporary or permanent. Observed individuals infected with COVID-19 (most with mild cases) experienced an additional 0.2% to 2% loss of brain tissue in regions of the brain connected to the sense of smell compared with uninfected individuals, and the overall effect on the brain was equivalent on average to at least one extra year of normal ageing; infected individuals also scored lower on several cognitive tests. All effects were more pronounced at older ages. Gastrointestinal tract The virus also affects gastrointestinal organs as ACE2 is abundantly expressed in the glandular cells of gastric, duodenal and rectal epithelium as well as endothelial cells and enterocytes of the small intestine. Cardiovascular system The virus can cause acute myocardial injury and chronic damage to the cardiovascular system. An acute cardiac injury was found in 12% of infected people admitted to the hospital in Wuhan, China, and is more frequent in severe disease. Rates of cardiovascular symptoms are high, owing to the systemic inflammatory response and immune system disorders during disease progression, but acute myocardial injuries may also be related to ACE2 receptors in the heart. ACE2 receptors are highly expressed in the heart and are involved in heart function. A high incidence of thrombosis and venous thromboembolism occurs in people transferred to intensive care units with COVID‑19 infections, and may be related to poor prognosis. Blood vessel dysfunction and clot formation (as suggested by high D-dimer levels caused by blood clots) may have a significant role in mortality, incidents of clots leading to pulmonary embolisms, and ischaemic events (strokes) within the brain found as complications leading to death in people infected with COVID‑19. Infection may initiate a chain of vasoconstrictive responses within the body, including pulmonary vasoconstriction, a possible mechanism by which oxygenation decreases during pneumonia. Furthermore, damage to arterioles and capillaries was found in brain tissue samples of people who died from COVID‑19. COVID‑19 may also cause substantial structural changes to blood cells, sometimes persisting for months after hospital discharge. 
A low level of blood lymphocytes may result from the virus acting through ACE2-related entry into lymphocytes. Kidneys Another common cause of death is complications related to the kidneys. Early reports show that up to 30% of people hospitalised with COVID-19 both in China and in New York have experienced some injury to their kidneys, including some persons with no previous kidney problems. Immunopathology Although SARS-CoV-2 has a tropism for ACE2-expressing epithelial cells of the respiratory tract, people with severe COVID‑19 have symptoms of systemic hyperinflammation. Clinical laboratory findings of elevated IL-2, IL-6, and IL-7, as well as the following, suggest an underlying immunopathology: granulocyte-macrophage colony-stimulating factor (GM-CSF), interferon gamma-induced protein 10 (IP-10), monocyte chemoattractant protein 1 (MCP-1), macrophage inflammatory protein 1-alpha (MIP-1-alpha), and tumour necrosis factor (TNF-α), indicative of cytokine release syndrome (CRS). Interferon alpha plays a complex, Janus-faced role in the pathogenesis of COVID-19. Although it promotes the elimination of virus-infected cells, it also upregulates the expression of ACE-2, thereby facilitating the entry of the SARS-CoV-2 virus into cells and its replication. A competition of negative feedback loops (via protective effects of interferon alpha) and positive feedback loops (via upregulation of ACE-2) is assumed to determine the fate of people with COVID-19. Additionally, people with COVID‑19 and acute respiratory distress syndrome (ARDS) have classical serum biomarkers of CRS, including elevated C-reactive protein (CRP), lactate dehydrogenase (LDH), D-dimer, and ferritin. Systemic inflammation results in vasodilation, allowing inflammatory lymphocytic and monocytic infiltration of the lung and the heart. In particular, pathogenic GM-CSF-secreting T cells were shown to correlate with the recruitment of inflammatory IL-6-secreting monocytes and severe lung pathology in people with COVID‑19. Lymphocytic infiltrates have also been reported at autopsy. Viral and host factors Virus proteins Multiple viral and host factors affect the pathogenesis of the virus. The S-protein, otherwise known as the spike protein, is the viral component that attaches to the host receptor via the ACE2 receptors. It includes two subunits: S1 and S2. S1 determines the virus-host range and cellular tropism via the receptor-binding domain. S2 mediates the membrane fusion of the virus to its potential cell host via the H1 and HR2, which are heptad repeat regions. Studies have shown that the S1 domain induced IgG and IgA antibody levels at a much higher capacity. It is the expression of this spike protein that is the focus of many effective COVID‑19 vaccines. The M protein is the viral protein responsible for the transmembrane transport of nutrients. It is the cause of the bud release and the formation of the viral envelope. The N and E proteins are accessory proteins that interfere with the host's immune response. Host factors Human angiotensin converting enzyme 2 (hACE2) is the host factor that the SARS-CoV-2 virus targets, causing COVID‑19. Theoretically, the usage of angiotensin receptor blockers (ARB) and ACE inhibitors upregulating ACE2 expression might increase morbidity with COVID‑19, though animal data suggest some potential protective effect of ARBs; however, no clinical studies have proven susceptibility or outcomes. Until further data is available, guidelines and recommendations for people with hypertension remain in place. 
The effect of the virus on ACE2 cell surfaces leads to leukocytic infiltration, increased blood vessel and alveolar wall permeability, and decreased secretion of lung surfactants. These effects cause the majority of the respiratory symptoms. However, the aggravation of local inflammation causes a cytokine storm, eventually leading to a systemic inflammatory response syndrome. Among healthy adults not exposed to SARS-CoV-2, about 35% have CD4+ T cells that recognise the SARS-CoV-2 S protein (particularly the S2 subunit) and about 50% react to other proteins of the virus, suggesting cross-reactivity from previous common colds caused by other coronaviruses. It is unknown whether different persons use similar antibody genes in response to COVID‑19. Host cytokine response The severity of the inflammation can be attributed to the severity of what is known as the cytokine storm. Levels of interleukin-1B, interferon-gamma, interferon-inducible protein 10, and monocyte chemoattractant protein 1 were all associated with COVID‑19 disease severity. Treatment has been proposed to combat the cytokine storm as it remains one of the leading causes of morbidity and mortality in COVID‑19 disease. A cytokine storm is due to an acute hyperinflammatory response that is responsible for clinical illness in an array of diseases but in COVID‑19, it is related to worse prognosis and increased fatality. The storm causes acute respiratory distress syndrome, blood clotting events such as strokes, myocardial infarction, encephalitis, acute kidney injury, and vasculitis. The production of IL-1, IL-2, IL-6, TNF-alpha, and interferon-gamma, all crucial components of normal immune responses, inadvertently becomes the cause of a cytokine storm. The cells of the central nervous system, the microglia, neurons, and astrocytes, are also involved in the release of pro-inflammatory cytokines affecting the nervous system, and effects of cytokine storms on the CNS are not uncommon. Pregnancy response There are many unknowns for pregnant women during the COVID-19 pandemic. Given that they are prone to complications and severe disease from infection with other types of coronaviruses, they have been identified as a vulnerable group and advised to take supplementary preventive measures. Physiological responses to pregnancy can include: Immunological: The immunological response to COVID-19, as with other viruses, depends on a working immune system. It adapts during pregnancy to allow the development of the foetus, whose genetic load is only partially shared with its mother, leading to a different immunological reaction to infections during the course of pregnancy. Respiratory: Many factors can make pregnant women more vulnerable to severe respiratory infections. One of them is a reduction in total lung capacity and the inability to clear secretions. Coagulation: During pregnancy, there are higher levels of circulating coagulation factors, and the pathogenesis of SARS-CoV-2 infection can be implicated. Thromboembolic events, with associated mortality, are a risk for pregnant women. However, from the evidence base, it is difficult to conclude whether pregnant women are at increased risk of grave consequences of this virus. In addition to the above, other clinical studies have shown that SARS-CoV-2 can affect the period of pregnancy in different ways. On the one hand, there is little evidence of its impact up to 12 weeks gestation. 
On the other hand, COVID-19 infection may cause increased rates of unfavourable outcomes in the course of the pregnancy. Some examples of these could be foetal growth restriction, preterm birth, and perinatal mortality, which refers to foetal death past 22 or 28 completed weeks of pregnancy, as well as deaths among live-born children up to seven completed days of life. For preterm birth, a 2023 review indicates that there appears to be a correlation with COVID-19. Unvaccinated women in later stages of pregnancy with COVID-19 are more likely than other people to need very intensive care. Babies born to mothers with COVID-19 are more likely to have breathing problems. Pregnant women are strongly encouraged to get vaccinated. Diagnosis COVID‑19 can provisionally be diagnosed on the basis of symptoms and confirmed using reverse transcription polymerase chain reaction (RT-PCR) or other nucleic acid testing of infected secretions. Along with laboratory testing, chest CT scans may be helpful to diagnose COVID‑19 in individuals with a high clinical suspicion of infection. Detection of a past infection is possible with serological tests, which detect antibodies produced by the body in response to the infection. Viral testing The standard methods of testing for presence of SARS-CoV-2 are nucleic acid tests, which detect the presence of viral RNA fragments. As these tests detect RNA but not infectious virus, their "ability to determine duration of infectivity of patients is limited". The test is typically done on respiratory samples obtained by a nasopharyngeal swab; however, a nasal swab or sputum sample may also be used. Results are generally available within hours. The WHO has published several testing protocols for the disease. Several laboratories and companies have developed serological tests, which detect antibodies produced by the body in response to infection. Some have been evaluated by Public Health England and approved for use in the UK. The University of Oxford's CEBM has pointed to mounting evidence that "a good proportion of 'new' mild cases and people re-testing positives after quarantine or discharge from hospital are not infectious, but are simply clearing harmless virus particles which their immune system has efficiently dealt with" and has called for "an international effort to standardize and periodically calibrate testing". In September 2020, the UK government issued "guidance for procedures to be implemented in laboratories to provide assurance of positive SARS-CoV-2 RNA results during periods of low prevalence, when there is a reduction in the predictive value of positive test results". Imaging Chest CT scans may be helpful to diagnose COVID‑19 in individuals with a high clinical suspicion of infection but are not recommended for routine screening. Bilateral multilobar ground-glass opacities with a peripheral, asymmetric, and posterior distribution are common in early infection. Subpleural dominance, crazy paving (lobular septal thickening with variable alveolar filling), and consolidation may appear as the disease progresses. Characteristic imaging features on chest radiographs and computed tomography (CT) of people who are symptomatic include asymmetric peripheral ground-glass opacities without pleural effusions. Many groups have created COVID‑19 imaging datasets; for example, the Italian Radiological Society has compiled an international online database of imaging findings for confirmed cases. 
Due to overlap with other infections such as adenovirus, imaging without confirmation by rRT-PCR is of limited specificity in identifying COVID‑19. A large study in China compared chest CT results to PCR and demonstrated that though imaging is less specific for the infection, it is faster and more sensitive. Coding In late 2019, the WHO assigned emergency ICD-10 disease codes U07.1 for deaths from lab-confirmed SARS-CoV-2 infection and U07.2 for deaths from clinically or epidemiologically diagnosed COVID‑19 without lab-confirmed SARS-CoV-2 infection. Pathology The main pathological findings at autopsy are: Macroscopy: pericarditis, lung consolidation and pulmonary oedema Lung findings: Minor serous exudation, minor fibrin exudation Pulmonary oedema, pneumocyte hyperplasia, large atypical pneumocytes, interstitial inflammation with lymphocytic infiltration and multinucleated giant cell formation Diffuse alveolar damage (DAD) with diffuse alveolar exudates. DAD is the cause of acute respiratory distress syndrome (ARDS) and severe hypoxaemia. Organisation of exudates in alveolar cavities and pulmonary interstitial fibrosis Plasmocytosis in bronchoalveolar lavage (BAL) Blood and vessels: disseminated intravascular coagulation (DIC); leukoerythroblastic reaction, endotheliitis, hemophagocytosis Heart: cardiac muscle cell necrosis Liver: microvesicular steatosis Nose: shedding of olfactory epithelium Brain: infarction Kidneys: acute tubular damage. Spleen: white pulp depletion. Prevention Preventive measures to reduce the chances of infection include getting vaccinated, staying at home, wearing a mask in public, avoiding crowded places, keeping distance from others, ventilating indoor spaces, managing potential exposure durations, washing hands with soap and water often and for at least twenty seconds, practising good respiratory hygiene, and avoiding touching the eyes, nose, or mouth with unwashed hands. Those diagnosed with COVID‑19 or who believe they may be infected are advised by the CDC to stay home except to get medical care, call ahead before visiting a healthcare provider, wear a face mask before entering the healthcare provider's office and when in any room or vehicle with another person, cover coughs and sneezes with a tissue, regularly wash hands with soap and water and avoid sharing personal household items. The first COVID‑19 vaccine was granted regulatory approval on 2 December 2020 by the UK medicines regulator MHRA. It was evaluated for emergency use authorisation (EUA) status by the US FDA, and in several other countries. Initially, the US National Institutes of Health guidelines did not recommend any medication for prevention of COVID‑19, before or after exposure to the SARS-CoV-2 virus, outside the setting of a clinical trial. Without a vaccine, other prophylactic measures, or effective treatments, a key part of managing COVID‑19 is trying to decrease and delay the epidemic peak, known as "flattening the curve"; a toy model of this idea is sketched below. This is done by slowing the infection rate to decrease the risk of health services being overwhelmed, allowing for better treatment of active cases, and delaying additional cases until effective treatments or a vaccine become available. Vaccine Face masks and respiratory hygiene Indoor ventilation and avoiding crowded indoor spaces The CDC states that avoiding crowded indoor spaces reduces the risk of COVID-19 infection. When indoors, increasing the rate of air change, decreasing recirculation of air and increasing the use of outdoor air can reduce transmission. 
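The "flattening the curve" strategy mentioned above can be illustrated with a toy SIR (susceptible–infectious–recovered) model. The Python sketch below is not part of the original article and is not a model used by any health agency; the parameter values are arbitrary and chosen only to show that lowering the transmission rate both lowers and delays the epidemic peak.

```python
import numpy as np

def sir(beta, gamma=0.1, days=300, i0=1e-4):
    """Very simple discrete-time SIR model.

    beta  : daily transmission rate (lowered by distancing, masks, etc.)
    gamma : daily recovery rate
    Returns the daily fraction of the population that is infectious.
    """
    s, i, r = 1.0 - i0, i0, 0.0
    infectious = []
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        infectious.append(i)
    return np.array(infectious)

# A smaller beta produces a lower peak that arrives later ("flattening the curve").
for beta in (0.3, 0.2, 0.15):
    curve = sir(beta)
    print(f"beta = {beta:.2f}: peak of {curve.max():.1%} of the population on day {curve.argmax()}")
```

The point of the illustration is qualitative only: reducing the effective transmission rate spreads the same epidemic over a longer period, which is what gives health services more room to treat active cases.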
The WHO recommends ventilation and air filtration in public spaces to help clear out infectious aerosols. Exhaled respiratory particles can build up within enclosed spaces with inadequate ventilation. The risk of COVID‑19 infection increases especially in spaces where people engage in physical exertion or raise their voice (e.g., exercising, shouting, singing) as this increases exhalation of respiratory droplets. Prolonged exposure to these conditions, typically more than 15 minutes, leads to higher risk of infection. Displacement ventilation with large natural inlets can move stale air directly to the exhaust in laminar flow while significantly reducing the concentration of droplets and particles. Passive ventilation reduces energy consumption and maintenance costs but may lack controllability and heat recovery. Displacement ventilation can also be achieved mechanically, with higher energy and maintenance costs. The use of large ducts and openings helps to prevent mixing in closed environments. Recirculation and mixing should be avoided because recirculation prevents dilution of harmful particles and redistributes possibly contaminated air, and mixing increases the concentration and range of infectious particles and keeps larger particles in the air. Hand-washing and hygiene Thorough hand hygiene after any cough or sneeze is required. The WHO also recommends that individuals wash hands often with soap and water for at least twenty seconds, especially after going to the toilet or when hands are visibly dirty, before eating and after blowing one's nose. When soap and water are not available, the CDC recommends using an alcohol-based hand sanitiser with at least 60% alcohol. For areas where commercial hand sanitisers are not readily available, the WHO provides two formulations for local production. In these formulations, the antimicrobial activity arises from ethanol or isopropanol. Hydrogen peroxide is used to help eliminate bacterial spores in the alcohol; it is "not an active substance for hand antisepsis". Glycerol is added as a humectant. Social distancing Social distancing (also known as physical distancing) includes infection control actions intended to slow the spread of the disease by minimising close contact between individuals. Methods include quarantines; travel restrictions; and the closing of schools, workplaces, stadiums, theatres, or shopping centres. Individuals may apply social distancing methods by staying at home, limiting travel, avoiding crowded areas, using no-contact greetings, and physically distancing themselves from others. In 2020, outbreaks occurred in prisons due to crowding and an inability to enforce adequate social distancing. In the United States, the prisoner population is ageing and many of them are at high risk for poor outcomes from COVID‑19 due to high rates of coexisting heart and lung disease, and poor access to high-quality healthcare. Surface cleaning After being expelled from the body, coronaviruses can survive on surfaces for hours to days. If a person touches a contaminated surface, they may transfer the virus to the eyes, nose, or mouth, where it can enter the body and cause infection. Evidence indicates that contact with contaminated surfaces is not the main driver of COVID‑19, leading to recommendations for optimised disinfection procedures to avoid issues such as the increase of antimicrobial resistance through the use of inappropriate cleaning products and processes.
Deep cleaning and other surface sanitation have been criticised as hygiene theatre, giving a false sense of security against something primarily spread through the air. The amount of time that the virus can survive depends significantly on the type of surface, the temperature, and the humidity. Coronaviruses die very quickly when exposed to the UV light in sunlight. Like other enveloped viruses, SARS-CoV-2 survives longest when the temperature is at room temperature or lower, and when the relative humidity is low (<50%). On many surfaces, including glass, some types of plastic, stainless steel, and skin, the virus can remain infective for several days indoors at room temperature, or even about a week under ideal conditions. On some surfaces, including cotton fabric and copper, the virus usually dies after a few hours. The virus dies faster on porous surfaces than on non-porous surfaces due to capillary action within pores and faster aerosol droplet evaporation. However, of the many surfaces tested, two with the longest survival times are N95 respirator masks and surgical masks, both of which are considered porous surfaces. The CDC says that in most situations, cleaning surfaces with soap or detergent, not disinfecting, is enough to reduce the risk of transmission. The CDC recommends that if a COVID‑19 case is suspected or confirmed at a facility such as an office or day care, all areas such as offices, bathrooms, common areas, and shared electronic equipment like tablets, touch screens, keyboards, remote controls, and ATMs used by the ill persons should be disinfected. Surfaces may be decontaminated with the following:
62–71% ethanol
50–100% isopropanol
0.1% sodium hypochlorite
0.5% hydrogen peroxide
0.2–7.5% povidone-iodine
50–200 ppm hypochlorous acid
Other solutions, such as benzalkonium chloride and chlorhexidine gluconate, are less effective. Ultraviolet germicidal irradiation may also be used, although popular devices require direct exposure and may deteriorate some materials over time. A datasheet listing the authorised substances for disinfection in the food industry (including suspension or surface tested, kind of surface, use dilution, disinfectant and inoculum volumes) can be seen in the supplementary material of a 2021 Foods article. Self-isolation Self-isolation at home has been recommended for those diagnosed with COVID‑19 and those who suspect they have been infected. Health agencies have issued detailed instructions for proper self-isolation. Many governments have mandated or recommended self-quarantine for entire populations. The strongest self-quarantine instructions have been issued to those in high-risk groups. Those who may have been exposed to someone with COVID‑19 and those who have recently travelled to a country or region with widespread transmission have been advised to self-quarantine for 14 days from the time of last possible exposure. International travel-related control measures A 2021 Cochrane rapid review found that, based upon low-certainty evidence, international travel-related control measures such as restricting cross-border travel may help to contain the spread of COVID‑19. Additionally, symptom/exposure-based screening measures at borders may miss many positive cases. While test-based border screening measures may be more effective, they could also miss many positive cases if only conducted upon arrival without follow-up.
The review concluded that a minimum 10-day quarantine may be beneficial in preventing the spread of COVID‑19 and may be more effective if combined with an additional control measure like border screening. Treatment Prognosis and risk factors The severity of COVID‑19 varies. The disease may take a mild course with few or no symptoms, resembling other common upper respiratory diseases such as the common cold. In 3–4% of cases (7.4% for those over age 65), symptoms are severe enough to cause hospitalisation. Mild cases typically recover within two weeks, while those with severe or critical disease may take three to six weeks to recover. Among those who have died, the time from symptom onset to death has ranged from two to eight weeks. The Italian Istituto Superiore di Sanità reported that the median time between the onset of symptoms and death was twelve days, with seven of those days spent in hospital. However, people transferred to an ICU had a median time of ten days between hospitalisation and death. Abnormal sodium levels during hospitalisation with COVID-19 are associated with poor prognoses: high sodium with a greater risk of death, and low sodium with an increased chance of needing ventilator support. Prolonged prothrombin time and elevated C-reactive protein levels on admission to the hospital are associated with a severe course of COVID‑19 and with transfer to an ICU. Some early studies suggest 10% to 20% of people with COVID‑19 will experience symptoms lasting longer than a month. A majority of those who were admitted to hospital with severe disease report long-term problems including fatigue and shortness of breath. On 30 October 2020, WHO chief Tedros Adhanom warned that "to a significant number of people, the COVID virus poses a range of serious long-term effects". He has described the vast spectrum of COVID‑19 symptoms that fluctuate over time as "really concerning". They range from fatigue, a cough and shortness of breath, to inflammation and injury of major organs, including the lungs and heart, and also neurological and psychological effects. Symptoms often overlap and can affect any system in the body. Infected people have reported cyclical bouts of fatigue, headaches, months of complete exhaustion, mood swings, and other symptoms. Tedros therefore concluded that a strategy of achieving herd immunity by infection, rather than vaccination, is "morally unconscionable and unfeasible". In terms of hospital readmissions, about 9% of 106,000 individuals had to return for hospital treatment within two months of discharge. The average time to readmission was eight days after the first hospital visit. Several risk factors have been identified as causes of multiple hospital admissions. Among these are advanced age (above 65 years of age) and presence of a chronic condition such as diabetes, COPD, heart failure or chronic kidney disease. According to scientific reviews, smokers are more likely to require intensive care or die compared to non-smokers. Acting on the same ACE2 pulmonary receptors affected by smoking, air pollution has been correlated with the disease. Short-term and chronic exposure to air pollution seems to enhance morbidity and mortality from COVID‑19. Pre-existing heart and lung diseases, and also obesity, especially in conjunction with fatty liver disease, contribute to an increased health risk of COVID‑19. It is also assumed that those who are immunocompromised are at higher risk of getting severely sick from SARS-CoV-2.
One study of COVID‑19 infections in hospitalised kidney transplant recipients found a mortality rate of 11%. Men with untreated hypogonadism were 2.4 times more likely than men with eugonadism to be hospitalised if they contracted COVID-19; hypogonadal men treated with testosterone were less likely to be hospitalised for COVID-19 than men who were not treated for hypogonadism. Genetic risk factors Genetics plays an important role in the ability to fight off COVID-19. For instance, those who do not produce detectable type I interferons or who produce auto-antibodies against these may get much sicker from COVID‑19. Genetic screening is able to detect interferon effector genes. Some genetic variants are risk factors in specific populations. For instance, an allele of the DOCK2 gene (dedicator of cytokinesis 2 gene) is a common risk factor in Asian populations but much less common in Europe. The mutation leads to lower expression of DOCK2, especially in younger people with severe COVID-19 infections. In fact, many other genes and genetic variants have been found that influence the outcome of SARS-CoV-2 infections. Children While very young children have experienced lower rates of infection, older children have a rate of infection that is similar to the population as a whole. Children are likely to have milder symptoms and are at lower risk of severe disease than adults. The CDC reports that in the US roughly a third of hospitalised children were admitted to the ICU, while a European multinational study of hospitalised children from June 2020 found that about 8% of children admitted to a hospital needed intensive care. Four of the 582 children (0.7%) in the European study died, but the actual mortality rate may be "substantially lower" since milder cases that did not seek medical help were not included in the study. Long-term effects Around 10% to 30% of non-hospitalised people with COVID-19 go on to develop long COVID. For those who do need hospitalisation, the incidence of long-term effects is over 50%. Long COVID is an often severe multisystem disease with a large set of symptoms. There are likely various, possibly coinciding, causes. Organ damage from the acute infection can explain a part of the symptoms, but long COVID is also observed in people where organ damage seems to be absent. By a variety of mechanisms, the lungs are the organs most affected in COVID-19. In people requiring hospital admission, up to 98% of CT scans performed show lung abnormalities after 28 days of illness, even in people who had clinically improved. People of advanced age, with severe disease or prolonged ICU stays, or who smoke are more likely to have long-lasting effects, including pulmonary fibrosis. Overall, approximately one-third of those investigated after four weeks will have findings of pulmonary fibrosis or reduced lung function as measured by DLCO, even in asymptomatic people, but with the suggestion of continuing improvement with the passing of more time. After severe disease, lung function can take anywhere from three months to a year or more to return to previous levels. The risks of cognitive deficit, dementia, psychotic disorders, and epilepsy or seizures persist at an increased level two years after infection. Immunity The immune response by humans to the SARS-CoV-2 virus occurs as a combination of cell-mediated immunity and antibody production, just as with most other infections.
B cells interact with T cells and begin dividing before selection into the plasma cell, partly on the basis of their affinity for antigen. Since SARS-CoV-2 has been in the human population only since December 2019, it remains unknown whether immunity is long-lasting in people who recover from the disease. The presence of neutralising antibodies in blood strongly correlates with protection from infection, but the level of neutralising antibody declines with time. Those with asymptomatic or mild disease had undetectable levels of neutralising antibody two months after infection. In another study, the level of neutralising antibodies fell four-fold one to four months after the onset of symptoms. However, the lack of antibodies in the blood does not mean antibodies will not be rapidly produced upon reexposure to SARS-CoV-2. Memory B cells specific for the spike and nucleocapsid proteins of SARS-CoV-2 last for at least six months after the appearance of symptoms. As of August 2021, reinfection with COVID‑19 was possible but uncommon. The first case of reinfection was documented in August 2020. A systematic review found 17 cases of confirmed reinfection in medical literature as of May 2021. With the Omicron variant, as of 2022, reinfections have become common, although it is unclear how common. COVID-19 reinfections are thought likely to be less severe than primary infections, especially if one was previously infected by the same variant. Mortality Several measures are commonly used to quantify mortality. These numbers vary by region and over time and are influenced by the volume of testing, healthcare system quality, treatment options, time since the initial outbreak, and population characteristics such as age, sex, and overall health. The mortality rate reflects the number of deaths within a specific demographic group divided by the population of that demographic group. Consequently, the mortality rate reflects the prevalence as well as the severity of the disease within a given population. Mortality rates are highly correlated to age, with relatively low rates for young people and relatively high rates among the elderly. In fact, one relevant factor in mortality rates is the age structure of the countries' populations. For example, the case fatality rate for COVID‑19 is lower in India than in the US since young people make up a larger percentage of India's population than of the US population. Case fatality rate The case fatality rate (CFR) reflects the number of deaths divided by the number of diagnosed cases within a given time interval. Based on Johns Hopkins University statistics, the global death-to-case ratio is (/) as of . The number varies by region. Infection fatality rate A key metric in gauging the severity of COVID‑19 is the infection fatality rate (IFR), also referred to as the infection fatality ratio or infection fatality risk. This metric is calculated by dividing the total number of deaths from the disease by the total number of infected individuals; hence, in contrast to the CFR, the IFR incorporates asymptomatic and undiagnosed infections as well as reported cases. Estimates A December 2020 systematic review and meta-analysis estimated that population IFR during the first wave of the pandemic was about 0.5% to 1% in many locations (including France, Netherlands, New Zealand, and Portugal), 1% to 2% in other locations (Australia, England, Lithuania, and Spain), and exceeded 2% in Italy.
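As an illustrative sketch of the two metrics defined in this section (the numbers below are hypothetical and are not drawn from the studies cited here), the case fatality rate and infection fatality rate can be written as

\mathrm{CFR} = \frac{\text{deaths attributed to the disease}}{\text{diagnosed cases}}, \qquad \mathrm{IFR} = \frac{\text{deaths attributed to the disease}}{\text{all infections, including undiagnosed ones}}

For example, 50 deaths among 1,000 diagnosed cases give a CFR of 50/1,000 = 5%, while if serological surveys suggested 10,000 total infections in the same population, the IFR would be 50/10,000 = 0.5%.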
The December 2020 meta-analysis also found that most of these differences in IFR reflected corresponding differences in the age composition of the population and age-specific infection rates; in particular, the metaregression estimate of IFR is very low for children and younger adults (e.g., 0.002% at age 10 and 0.01% at age 25) but increases progressively to 0.4% at age 55, 1.4% at age 65, 4.6% at age 75, and 15% at age 85. These results were also highlighted in a December 2020 report issued by the WHO. An analysis of those IFR rates indicates that COVID-19 is hazardous not only for the elderly but also for middle-aged adults, for whom the infection fatality rate of COVID-19 is two orders of magnitude greater than the annualised risk of a fatal automobile accident and far more dangerous than seasonal influenza. Earlier estimates of IFR At an early stage of the pandemic, the World Health Organization reported estimates of IFR between 0.3% and 1%. On 2 July, the WHO's chief scientist reported that the average IFR estimate presented at a two-day WHO expert forum was about 0.6%. In August, the WHO found that studies incorporating data from broad serology testing in Europe showed IFR estimates converging at approximately 0.5–1%. Firm lower limits of IFRs have been established in a number of locations such as New York City and Bergamo in Italy since the IFR cannot be less than the population fatality rate. (After sufficient time, however, people can get reinfected.) As of 10 July, in New York City, with a population of 8.4 million, 23,377 individuals (18,758 confirmed and 4,619 probable) had died with COVID‑19 (0.3% of the population). Antibody testing in New York City suggested IFR estimates of ≈0.9% and ≈1.4%. In Bergamo province, 0.6% of the population had died. In September 2020, the U.S. Centers for Disease Control and Prevention (CDC) reported preliminary estimates of age-specific IFRs for public health planning purposes. Sex differences COVID‑19 case fatality rates are higher among men than women in most countries. However, in a few countries like India, Nepal, Vietnam, and Slovenia, case fatality rates are higher among women than men. Globally, men are more likely to be admitted to the ICU and more likely to die. One meta-analysis found that globally, men were more likely to get COVID‑19 than women; there were approximately 55 men and 45 women per 100 infections (CI: 51.43–56.58). The Chinese Center for Disease Control and Prevention reported the death rate was 2.8% for men and 1.7% for women. Later reviews in June 2020 indicated that there is no significant difference in susceptibility or in CFR between genders. One review acknowledges the different mortality rates in Chinese men, suggesting that it may be attributable to lifestyle choices such as smoking and drinking alcohol rather than genetic factors. Smoking, which in some countries like China is mainly a male activity, contributes significantly to the higher case fatality rates among men. Sex-based immunological differences, the lower prevalence of smoking among women, and the tendency of men to develop co-morbid conditions such as hypertension at a younger age than women could have contributed to the higher mortality in men. In Europe as of February 2020, 57% of the infected people were men and 72% of those who died with COVID‑19 were men. As of April 2020, the US government was not tracking sex-related data of COVID‑19 infections. Research has shown that viral illnesses like Ebola, HIV, influenza and SARS affect men and women differently.
Ethnic differences In the US, a greater proportion of deaths due to COVID‑19 have occurred among African Americans and other minority groups. Structural factors that prevent them from practising social distancing include their concentration in crowded substandard housing and in "essential" occupations such as retail grocery workers, public transit employees, health-care workers and custodial staff. Greater prevalence of a lack of health insurance and of care for underlying conditions such as diabetes, hypertension, and heart disease also increases their risk of death. Similar issues affect Native American and Latino communities. The Dominican Republic offers a clear example of both gender and ethnic inequality. In this Latin American country, great inequality and precariousness especially affect Dominican women, particularly those of Haitian descent. According to a US health policy non-profit, 34% of American Indian and Alaska Native People (AIAN) non-elderly adults are at risk of serious illness compared to 21% of white non-elderly adults. The source attributes it to disproportionately high rates of many health conditions that may put them at higher risk, as well as living conditions like lack of access to clean water. Leaders have called for efforts to research and address the disparities. In the UK, a greater proportion of deaths due to COVID‑19 have occurred in those of a Black, Asian, and other ethnic minority background. More severe impacts upon patients, including a higher likelihood of requiring hospitalisation and greater vulnerability to the disease, have been associated via DNA analysis with genetic variants at chromosomal region 3, features that are associated with European Neanderthal heritage. Carrying these variants imposes a greater risk that those affected will develop a more severe form of the disease. The findings are from Professor Svante Pääbo and researchers he leads at the Max Planck Institute for Evolutionary Anthropology and the Karolinska Institutet. This admixture of modern human and Neanderthal genes is estimated to have occurred roughly between 50,000 and 60,000 years ago in Southern Europe. Comorbidities Biological factors (such as the immune response) and general behaviour (habits) can strongly influence the consequences of COVID‑19. Most of those who die of COVID‑19 have pre-existing (underlying) conditions, including hypertension, diabetes mellitus, and cardiovascular disease. According to March data from the United States, 89% of those hospitalised had preexisting conditions. The Italian Istituto Superiore di Sanità reported that, among the 8.8% of deaths for which medical charts were available, 96.1% of people had at least one comorbidity, with the average person having 3.4 diseases. According to this report, the most common comorbidities are hypertension (66% of deaths), type 2 diabetes (29.8% of deaths), ischaemic heart disease (27.6% of deaths), atrial fibrillation (23.1% of deaths) and chronic renal failure (20.2% of deaths). The most critical respiratory comorbidities, according to the US Centers for Disease Control and Prevention (CDC), are: moderate or severe asthma, pre-existing COPD, pulmonary fibrosis, and cystic fibrosis. Evidence stemming from meta-analysis of several smaller research papers also suggests that smoking can be associated with worse outcomes. When someone with existing respiratory problems is infected with COVID‑19, they might be at greater risk for severe symptoms.
COVID‑19 also poses a greater risk to people who misuse opioids and amphetamines, insofar as their drug use may have caused lung damage. In August 2020, the CDC issued a caution that tuberculosis (TB) infections could increase the risk of severe illness or death. The WHO recommended that people with respiratory symptoms be screened for both diseases, as testing positive for COVID‑19 could not rule out co-infections. Some projections have estimated that reduced TB detection due to the pandemic could result in 6.3 million additional TB cases and 1.4 million TB-related deaths by 2025. History The virus is thought to be of natural animal origin, most likely through spillover infection. A joint study conducted in early 2021 by the People's Republic of China and the World Health Organization indicated that the virus descended from a coronavirus that infects wild bats, and likely spread to humans through an intermediary wildlife host. There are several theories about where the index case originated, and investigations into the origin of the pandemic are ongoing. According to articles published in July 2022 in Science, virus transmission into humans occurred through two spillover events in November 2019 and was likely due to live wildlife trade on the Huanan wet market in the city of Wuhan (Hubei, China). Doubts about the conclusions have mostly centered on the precise site of spillover. Earlier phylogenetic analyses estimated that SARS-CoV-2 arose in October or November 2019. A phylogenetic algorithm analysis suggested that the virus may have been circulating in Guangdong before Wuhan. Most scientists believe the virus spilled into human populations through natural zoonosis, similar to the SARS-CoV-1 and MERS-CoV outbreaks, and consistent with other pandemics in human history. According to the Intergovernmental Panel on Climate Change, several social and environmental factors, including climate change, natural ecosystem destruction and wildlife trade, increased the likelihood of such zoonotic spillover. One study, conducted with the support of the European Union, found that climate change increased the likelihood of the pandemic by influencing the distribution of bat species. Available evidence suggests that the SARS-CoV-2 virus was originally harboured by bats, and spread to humans multiple times from infected wild animals at the Huanan Seafood Market in Wuhan in December 2019. A minority of scientists and some members of the U.S. intelligence community believe the virus may have been unintentionally leaked from a laboratory such as the Wuhan Institute of Virology. The US intelligence community has mixed views on the issue, but overall agrees with the scientific consensus that the virus was not developed as a biological weapon and is unlikely to have been genetically engineered. There is no evidence SARS-CoV-2 existed in any laboratory prior to the pandemic. The first confirmed human infections were in Wuhan. A study of the first 41 cases of confirmed COVID‑19, published in January 2020 in The Lancet, reported the earliest date of onset of symptoms as 1 December 2019. Official publications from the WHO reported the earliest onset of symptoms as 8 December 2019. Human-to-human transmission was confirmed by the WHO and Chinese authorities by 20 January 2020. According to official Chinese sources, these early cases were mostly linked to the Huanan Seafood Wholesale Market, which also sold live animals.
In May 2020, George Gao, the director of the Chinese Center for Disease Control and Prevention, said animal samples collected from the seafood market had tested negative for the virus, indicating that the market was the site of an early superspreading event, but that it was not the site of the initial outbreak. Traces of the virus have been found in wastewater samples that were collected in Milan and Turin, Italy, on 18 December 2019. By December 2019, the spread of infection was almost entirely driven by human-to-human transmission. The number of COVID-19 cases in Hubei gradually increased, reaching sixty by 20 December, and at least 266 by 31 December. On 24 December, Wuhan Central Hospital sent a bronchoalveolar lavage fluid (BAL) sample from an unresolved clinical case to sequencing company Vision Medicals. On 27 and 28 December, Vision Medicals informed the Wuhan Central Hospital and the Chinese CDC of the results of the test, showing a new coronavirus. A pneumonia cluster of unknown cause was observed on 26 December and treated by the doctor Zhang Jixian in Hubei Provincial Hospital, who informed the Wuhan Jianghan CDC on 27 December. On 30 December, a test report addressed to Wuhan Central Hospital, from company CapitalBio Medlab, stated an erroneous positive result for SARS, causing a group of doctors at Wuhan Central Hospital to alert their colleagues and relevant hospital authorities of the result. The Wuhan Municipal Health Commission issued a notice to various medical institutions on "the treatment of pneumonia of unknown cause" that same evening. Eight of these doctors, including Li Wenliang (punished on 3 January), were later admonished by the police for spreading false rumours, and another, Ai Fen, was reprimanded by her superiors for raising the alarm. The Wuhan Municipal Health Commission made the first public announcement of a pneumonia outbreak of unknown cause on 31 December, confirming 27 cases, enough to trigger an investigation. During the early stages of the outbreak, the number of cases doubled approximately every seven and a half days. In early and mid-January 2020, the virus spread to other Chinese provinces, helped by the Chinese New Year migration and Wuhan being a transport hub and major rail interchange. On 20 January, China reported nearly 140 new cases in one day, including two people in Beijing and one in Shenzhen. Later official data shows that 6,174 people had already developed symptoms by then, and more may have been infected. A report in The Lancet on 24 January indicated human transmission, strongly recommended personal protective equipment for health workers, and said testing for the virus was essential due to its "pandemic potential". On 30 January, the WHO declared COVID-19 a Public Health Emergency of International Concern. By this time, the outbreak had grown by a factor of 100 to 200. Italy had its first confirmed cases on 31 January 2020, two tourists from China. Italy overtook China as the country with the most deaths on 19 March 2020. By 26 March, the United States had overtaken China and Italy with the highest number of confirmed cases in the world. Research on coronavirus genomes indicates that the majority of COVID-19 cases in New York came from European travellers, rather than directly from China or any other Asian country. Retesting of prior samples found a person in France who had the virus on 27 December 2019, and a person in the United States who died from the disease on 6 February 2020.
RT-PCR testing of untreated wastewater samples from Brazil and Italy has suggested detection of SARS-CoV-2 as early as November and December 2019, respectively, but the methods of such sewage studies have not been optimised, many have not been peer-reviewed, details are often missing, and there is a risk of false positives due to contamination or if only one gene target is detected. A September 2020 review journal article said, "The possibility that the COVID‑19 infection had already spread to Europe at the end of last year is now indicated by abundant, even if partially circumstantial, evidence", including pneumonia case numbers and radiology in France and Italy in November and December. Reuters reported that it had estimated the worldwide total number of deaths due to COVID‑19 to have exceeded five million. The Public Health Emergency of International Concern for COVID-19 ended on 5 May 2023. By this time, everyday life in most countries had returned to how it was before the pandemic. Misinformation After the initial outbreak of COVID-19, misinformation and disinformation regarding the origin, scale, prevention, treatment, and other aspects of the disease rapidly spread online. In September 2020, the US Centers for Disease Control and Prevention (CDC) published preliminary estimates of the risk of death by age groups in the United States, but those estimates were widely misreported and misunderstood. Other species Humans appear to be capable of spreading the virus to some other animals, a type of disease transmission referred to as zooanthroponosis. Some pets, especially cats and ferrets, can catch this virus from infected humans. Cats can show both respiratory symptoms (such as a cough) and digestive symptoms. Cats can spread the virus to other cats, and may be able to spread the virus to humans, but cat-to-human transmission of SARS-CoV-2 has not been proven. Compared to cats, dogs are less susceptible to this infection. Behaviours which increase the risk of transmission include kissing, licking, and petting the animal. The virus does not appear to be able to infect pigs, ducks, or chickens at all. Mice, rats, and rabbits, if they can be infected at all, are unlikely to be involved in spreading the virus. Tigers and lions in zoos have become infected as a result of contact with infected humans. As expected, monkeys and great ape species such as orangutans can also be infected with the COVID‑19 virus. Minks, which are in the same family as ferrets, have been infected. Minks may be asymptomatic, and can also spread the virus to humans. Multiple countries have identified infected animals in mink farms. Denmark, a major producer of mink pelts, ordered the slaughter of all minks over fears of viral mutations, following an outbreak referred to as Cluster 5. A vaccine for mink and other animals is being researched. Research International research on vaccines and medicines for COVID-19 is underway by government organisations, academic groups, and industry researchers. The CDC has classified work with the virus as requiring a BSL3-grade laboratory. There has been a great deal of COVID‑19 research, involving accelerated research processes and publishing shortcuts to meet the global demand. Hundreds of clinical trials have been undertaken, with research happening on every continent except Antarctica. More than 200 possible treatments have been studied in humans.
Transmission and prevention research Modelling research has been conducted with several objectives, including predictions of the dynamics of transmission, diagnosis and prognosis of infection, estimation of the impact of interventions, and allocation of resources. Modelling studies are mostly based on compartmental models in epidemiology, estimating the number of infected people over time under given conditions. Several other types of models have been developed and used during the COVID-19 pandemic, including computational fluid dynamics models to study the flow physics of COVID-19, retrofits of crowd movement models to study occupant exposure, mobility-data-based models to investigate transmission, and the use of macroeconomic models to assess the economic impact of the pandemic. Treatment-related research Repurposed antiviral drugs make up most of the research into COVID‑19 treatments. Other candidates in trials include vasodilators, corticosteroids, immune therapies, lipoic acid, bevacizumab, and recombinant angiotensin-converting enzyme 2. In March 2020, the World Health Organization (WHO) initiated the Solidarity trial to assess the treatment effects of some promising drugs:
An experimental drug called remdesivir
The anti-malarial drugs chloroquine and hydroxychloroquine
Two anti-HIV drugs, lopinavir/ritonavir, and interferon-beta
More than 300 active clinical trials were underway as of April 2020. Research on the antimalarial drugs hydroxychloroquine and chloroquine showed that they were ineffective at best, and that they may reduce the antiviral activity of remdesivir. France, Italy, and Belgium had banned the use of hydroxychloroquine as a COVID‑19 treatment. In June, initial results from the randomised RECOVERY Trial in the United Kingdom showed that dexamethasone reduced mortality by one third for people who were critically ill on ventilators and by one fifth for those receiving supplemental oxygen. Because this is a well-tested and widely available treatment, it was welcomed by the WHO, which was in the process of updating treatment guidelines to include dexamethasone and other steroids. Based on those preliminary results, dexamethasone treatment has been recommended by the NIH for people with COVID‑19 who are mechanically ventilated or who require supplemental oxygen, but not for people with COVID‑19 who do not require supplemental oxygen. In September 2020, the WHO released updated guidance on using corticosteroids for COVID‑19. The WHO recommends systemic corticosteroids rather than no systemic corticosteroids for the treatment of people with severe and critical COVID‑19 (strong recommendation, based on moderate certainty evidence). The WHO suggests not using corticosteroids in the treatment of people with non-severe COVID‑19 (conditional recommendation, based on low certainty evidence). The updated guidance was based on a meta-analysis of clinical trials of people critically ill with COVID‑19. In September 2020, the European Medicines Agency (EMA) endorsed the use of dexamethasone in adults and adolescents from twelve years of age and weighing at least who require supplemental oxygen therapy. Dexamethasone can be taken by mouth or given as an injection or infusion (drip) into a vein. In November 2020, the US Food and Drug Administration (FDA) issued an emergency use authorisation for the investigational monoclonal antibody therapy bamlanivimab for the treatment of mild-to-moderate COVID‑19.
Bamlanivimab is authorised for people with positive results of direct SARS-CoV-2 viral testing who are twelve years of age and older weighing at least , and who are at high risk for progressing to severe COVID‑19 or hospitalisation. This includes those who are 65 years of age or older, or who have chronic medical conditions. In February 2021, the FDA issued an emergency use authorisation (EUA) for bamlanivimab and etesevimab administered together for the treatment of mild to moderate COVID‑19 in people twelve years of age or older weighing at least who test positive for SARS‑CoV‑2 and who are at high risk for progressing to severe COVID‑19. The authorised use includes treatment for those who are 65 years of age or older or who have certain chronic medical conditions. In April 2021, the FDA revoked the emergency use authorisation (EUA) that allowed for the investigational monoclonal antibody therapy bamlanivimab, when administered alone, to be used for the treatment of mild-to-moderate COVID‑19 in adults and certain paediatric patients. Cytokine storm A cytokine storm can be a complication in the later stages of severe COVID‑19. A cytokine storm is a potentially deadly immune reaction in which large amounts of pro-inflammatory cytokines and chemokines are released too quickly. A cytokine storm can lead to ARDS and multiple organ failure. Data collected from Jin Yin-tan Hospital in Wuhan, China, indicates that people who had more severe responses to COVID‑19 had greater amounts of pro-inflammatory cytokines and chemokines in their system than people who had milder responses. These high levels of pro-inflammatory cytokines and chemokines indicate the presence of a cytokine storm. Tocilizumab has been included in treatment guidelines by China's National Health Commission after a small study was completed. It is undergoing a Phase II non-randomised trial at the national level in Italy after showing positive results in people with severe disease. Combined with a serum ferritin blood test to identify a cytokine storm (also called cytokine storm syndrome, not to be confused with cytokine release syndrome), it is meant to counter such developments, which are thought to be the cause of death in some affected people. The interleukin-6 receptor (IL-6R) antagonist was approved by the FDA to undergo a Phase III clinical trial assessing its effectiveness against COVID‑19, based on retrospective case studies for the treatment of steroid-refractory cytokine release syndrome induced by a different cause, CAR T cell therapy, in 2017. There is no randomised, controlled evidence that tocilizumab is an efficacious treatment for CRS. Prophylactic tocilizumab has been shown to increase serum IL-6 levels by saturating the IL-6R, driving IL-6 across the blood–brain barrier, and exacerbating neurotoxicity while having no effect on the incidence of CRS. Lenzilumab, an anti-GM-CSF monoclonal antibody, is protective in murine models for CAR T cell-induced CRS and neurotoxicity and is a viable therapeutic option due to the observed increase of pathogenic GM-CSF-secreting T cells in hospitalised patients with COVID‑19. Passive antibodies Transferring purified and concentrated antibodies produced by the immune systems of those who have recovered from COVID‑19 to people who need them is being investigated as a non-vaccine method of passive immunisation. Viral neutralisation is the anticipated mechanism of action by which passive antibody therapy can mediate defence against SARS-CoV-2.
The spike protein of SARS-CoV-2 is the primary target for neutralising antibodies. As of 8 August 2020, eight neutralising antibodies targeting the spike protein of SARS-CoV-2 have entered clinical studies. It has been proposed that selection of broad-neutralising antibodies against SARS-CoV-2 and SARS-CoV might be useful for treating not only COVID‑19 but also future SARS-related CoV infections. Other mechanisms, however, such as antibody-dependent cellular cytotoxicity or phagocytosis, may be possible. Other forms of passive antibody therapy, for example, using manufactured monoclonal antibodies, are in development. The use of passive antibodies to treat people with active COVID-19 is also being studied. This involves the production of convalescent serum, which consists of the liquid portion of the blood from people who recovered from the infection and contains antibodies specific to this virus, which is then administered to patients with active infection. This strategy was tried for SARS with inconclusive results. An updated Cochrane review in May 2023 found high certainty evidence that, for the treatment of people with moderate to severe COVID‑19, convalescent plasma did not reduce mortality or bring about symptom improvement. There continues to be uncertainty about the safety of convalescent plasma administration to people with COVID‑19, and the differing outcomes measured in different studies limit their use in determining efficacy. Bioethics Since the outbreak of the COVID‑19 pandemic, scholars have explored the bioethics, normative economics, and political theories of healthcare policies related to the public health crisis. Academics have pointed to the moral distress of healthcare workers, the ethics of distributing scarce healthcare resources such as ventilators, and the global justice of vaccine diplomacy. The socio-economic inequalities between genders, races, groups with disabilities, communities, regions, countries, and continents have also drawn attention in academia and among the general public. See also Coronavirus diseases, a group of closely related syndromes Disease X, a WHO term References Further reading Scholia Q104287299. External links Health agencies Coronavirus disease (COVID‑19) Facts by the World Health Organization (WHO) Coronavirus (COVID‑19) by the UK National Health Service (NHS) Coronavirus 2019 (COVID-19) by the US Centers for Disease Control and Prevention (CDC) Directories Coronavirus Resource Center at the Center for Inquiry COVID‑19 Information on FireMountain.net COVID‑19 Resource Directory on OpenMD Medical journals BMJ's Coronavirus (covid‑19) Hub by the BMJ Coronavirus (Covid‑19) by The New England Journal of Medicine Coronavirus (COVID‑19) Research Highlights by Springer Nature Coronavirus Disease 2019 (COVID‑19) by JAMA COVID‑19 Resource Centre by The Lancet Covid‑19: Novel Coronavirus by Wiley Publishing Novel Coronavirus Information Center by Elsevier Treatment guidelines Occupational safety and health Vaccine-preventable diseases Viral respiratory tract infections Zoonoses Public health Coronavirus-associated diseases
COVID-19
Biology
15,449
62,565,191
https://en.wikipedia.org/wiki/NGC%20622
NGC 622 is a barred spiral galaxy located in the constellation Cetus about 234 million light-years from the Milky Way. It was discovered by British astronomer William Herschel in 1785. References External links 622 Barred spiral galaxies Cetus 01143 Markarian galaxies 005939
NGC 622
Astronomy
58
40,967,465
https://en.wikipedia.org/wiki/Expressive%20suppression
Expressive suppression is defined as the intentional reduction of the facial expression of an emotion. It is a component of emotion regulation. Expressive suppression is a concept "based on individuals' emotion knowledge, which includes knowledge about the causes of emotion, about their bodily sensations and expressive behavior, and about the possible means of modifying them". In other words, expressive suppression signifies the act of masking facial giveaways (see facial expression) to hide an underlying emotional state (see affect). Simply suppressing the facial expressions that accompany certain emotions can affect "the individual's experience of emotion". According to a 1974 study done by Kopel and Arkowitz, repressing the facial expressions associated with pain decreased the experience of pain in participants. However, "there is little evidence that the suppression of spontaneous emotional expression leads to a decrease in emotional experience and physiological arousal apart from the manipulation of the pain expressions". According to Gross and Levenson's 1993 study, in which subjects watched a disgusting film while suppressing or not suppressing their expressions, suppression produced increased blinking. However, suppression also produced a decreased heart rate in participants, and self-reports did not reflect that suppression affected the disgust experience. While it is unclear from Gross and Levenson's study whether suppression successfully diminishes the experience of emotions, it can be concluded that expressive suppression does not completely inhibit all facial movements and expressions (e.g. blinking of the eyes). Niedenthal argues that expressive suppression works to decrease the experience of positive emotions whereas it does not successfully decrease the experience of negative emotions. It may be that expressive suppression serves more of a social purpose than it serves a purpose for the individual. In a study done by Kleck and colleagues in 1976, participants were told to suppress facial expressions of pain during the reception of electric shocks. Specifically, "in one study the subjects were induced to exaggerate or minimize their facial expressions to fool a supposed audience". This idea of covering up an internal experience in front of observers could be the true reason that expressive suppression is utilized in social situations. "In everyday life, suppression may serve to conform individuals' outward appearance to emotional norms in a given situation, and to facilitate social interaction". In this way, hiding negative emotions may lead to more successful social relationships by preventing conflict, stifling the spread of negative emotions, and protecting an individual from negative judgments made by others. Component Expressive suppression is a response-focused emotion regulation strategy. This strategy involves an individual voluntarily suppressing their outward emotional expressions. Expressive suppression has a direct relationship to our emotional experiences and is significant in communication studies. Individuals who suppress their emotions seek to control their actions and maintain a positive social image. Expressive suppression involves reducing facial expression and controlling positive and negative feelings of emotion. This type of emotion regulation strategy can have negative emotional and psychological effects on individuals. Emotional suppression reduces expressive behavior significantly.
As many researchers have concluded, though emotional suppression decreases outward emotional expression, it does not decrease our negative feelings and emotional arousal. Different forms of emotional regulation affect the trajectory of our emotional responses. We target situations for regulation by the process of selecting the situations we are exposed to or by modifying the situation we are in. Emotion suppression relates to the behavioral component of emotion. Expressive suppression has physiological influences such as decreasing heart rate, increasing blood pressure, and increasing sympathetic activation. Expressive suppression requires self-control. We use self-control when handling our emotion-based expressions in public. It is believed that the use of expressive suppression is negatively associated with a person's well-being. Expressive suppression has been found to occur late after the peripheral physiological response or emotion process is triggered. Kühn et al. (2011) compare this strategy to vetoing actions. This type of emotion regulation strategy is considered a method that strongly resists various urges and voluntarily inhibits actions. Kühn et al. (2011) also posited the notion that expressive suppression may be internally controlled and that emotional responses are targeted by suppression efforts. One of the characteristics of expressive suppression, a response-based strategy, is that it occurs after an activated response. Larsen et al. (2013) claim expressive suppression to be one of the less effective emotion regulation strategies. These researchers label expressive suppression as an inhibition of the behavioral display of emotion. Externalizers vs. Internalizers Regarding emotion regulation, specifically expressive suppression, two groups can be characterized by their different response patterns. These two groups are labeled externalizers and internalizers. Internalizers generally "show more skin conductance deflections and greater heart rate acceleration than do externalizers" when attempting to suppress facial expressions during a potentially emotional event. This signifies that internalizers can successfully employ expressive suppression while experiencing physiological arousal. However, when asked to describe their feelings, internalizers do not usually speak about themselves or specific feelings, which could be a sign of alexithymia. Alexithymia is defined as the inability to verbally explain an emotional experience or a feeling. Peter Sifneos first used this word in the realm of psychiatry in 1972, and it means "having no words for emotions". Those who can consistently suppress their facial expressions (e.g. internalizers) may be experiencing symptoms of alexithymia. On the other hand, externalizers employ less expressive suppression in response to emotional experiences or other external stimuli and do not usually struggle with alexithymia. Gender differences Men and women do not equally utilize expressive suppression. Typically, men show less facial expression and employ more expressive suppression than women do. This gender-based difference in behavior can be traced back to social norms that are taught to children at a young age. At a young age, boys are implicitly taught that having emotional reactions makes them weak, which is a lesson that encourages the suppression of emotional behavior in masculine individuals. This suppression is a result of "the punishment and consequent conditioned inhibition of all expression of a given emotion".
If a masculine individual expresses an undesirable emotion and society responds by punishing that behavior, that masculine individual will learn to suppress the socially unacceptable behavior. On the other hand, feminine individuals do not experience societal pressure to suppress their emotional expressions to the same extent. Because feminine individuals are not as pressured to keep their emotions concealed, most do not feel the need to suppress them. However, there are exceptions. Vs. display rules Complete expressive suppression means that no facial expressions are visible to exemplify a given emotion. However, display rules are examples of a controlled form of expression management and "involve the learned manipulation of facial expression to agree with cultural conventions and interpersonal expectations in the pursuit of tactical and/or strategic social ends". The utilization of display rules differs from expressive suppression because when display rules are enacted, the action to manage expression is voluntary, controlled, and incorporates certain types of expressive behavior. Conversely, expressive suppression is involuntary and is the result of social pressures that shape subconscious behaviors. It is not a controlled action, nor does expressive suppression involve the manipulation of voluntary expressions; it is manifested only in the absence of expression. There are three ways in which facial expression displays may be influenced: modulation, qualification, and falsification. Modulation refers to the act of showing a different amount of expression than one feels. Qualification requires the addition of an extra (unfelt) emotional expression to the expression of felt emotion. Lastly, falsification has three separate components. Falsification incorporates expressing an unfelt emotion (simulation), expressing no emotion when an emotion is felt (neutralization), or concealing a felt emotion by expressing an unfelt emotion (masking). A response-focused strategy Expressive suppression is an emotion management strategy that works to decrease positive emotional experiences. However, it has not been proven to reduce the experience of negative emotions. This strategy is a response-focused form of emotion regulation, which "refers to things we do once an emotion is underway and response tendencies have already been generated". Response-focused strategies are generally not as successful as antecedent-focused regulation strategies, which refer to "things we do, either consciously or automatically, before emotion-response tendencies have become fully activated". Srivastava and colleagues performed a study in 2009 in which the effectiveness of students' use of expressive suppression was analyzed in the transition period between high school and college. This study concluded that "suppression is an antecedent of poor social functioning" in the domains of social support, closeness, and social satisfaction. Psychological consequences Suppressing the expression of emotion is one of the most frequent emotion-regulation strategies utilized by human beings. Clinical traditions state that a person's psychological health is based upon how affective impulses are regulated; the consequences of affective regulation have become the main focus of psychological researchers. The psychological consequences directly related to expressive suppression are frequently disputed.
Some early 20th-century researchers argued that suppressing a physical emotional response while emotionally aroused will increase the emotional experience due to concentration on suppressing that emotion. These researchers argued that common sense tells us emotions become more severe the longer they are bottled up. Other researchers dispute this theory, saying that emotional expression is so significant to the overall emotional response that when suppression occurs, all other responses (e.g. physiological) are weakened. These researchers solidify this argument with the tradition that people are taught to count to ten when emotionally aroused to calm themselves down. If suppressing emotions were to increase the emotional experience, this counting exercise would only intensify a person's reactions. However, it has been deemed to do the opposite. Unfortunately, few studies have been carried out to test these hypotheses. The question of whether it is better to bottle up emotions by counting to ten before acting or speaking, or to release emotions because bottling them up is bad for mental health, is of constant interest to researchers in the field of emotion. These differing views on such commonplace human behavior suggest that expressive suppression is one of the more complicated emotion-regulation techniques. As a solution to these opposing ideas, it has been suggested (and mentioned in the Externalizers vs. Internalizers section above) that people tend to be either emotionally expressive (externalizers) or inexpressive (internalizers). The habitual use of one expressive technique over the other leads to different psychological and physiological consequences over time. Expressive behavior is directly related to emotional suppression, as it is assumed that internalizers consciously choose not to express themselves. However, this assumption has gone primarily untested except for a 1979 study by Notarius and Levenson, whose research found that internalizers are more physiologically reactive to emotional stimuli than externalizers. One explanation for these findings was that when a behavioral emotional response is suppressed, it must be released in other ways, in this case through physiological reactions. These findings lend themselves to the suggestion by Cannon (1927) and Jones (1935) that emotional suppression intensifies other reactions. It has also been suggested that illness and disease are increased by continued emotional suppression, especially the suppression of intensely aggressive emotions such as anger and hostility, which can lead to hypertension and coronary heart disease. As well as physical illness, expressive suppression is said to be the cause of mental illnesses such as depression. Many psychotherapists will try to relieve their patients' illness/strain by teaching them expressive techniques in a controlled environment or within the particular relationship in which their suppressed emotions are causing problems. A counter-argument to this idea suggests that expressive suppression is an important part of emotional regulation that needs to be learned because of its beneficial use in adulthood. Adults must learn to successfully suppress certain emotional responses (e.g. responses to anger that could have destructive social consequences). The question then is whether to suppress all anger-related responses or to release the less volatile ones in order to reduce the risk of developing physical and mental illnesses.
The clinical theory implies that there is an optimum level between total suppression and total expression which, during adulthood, a person must find in order to protect their physical and psychological well-being. While expressive suppression may be socially acceptable in certain situations, it cannot be considered a healthy practice at all times. Concealing and suppressing expressions can cause stress-related physiological reactions. Stress occurs because "the social disapproval and punishment of overt emotional expression that causes suppression are itself intimidating and stressful". There are several occupations that require the suppression of positive or negative emotions, such as estate agents masking their happiness when an offer is placed on a house in order to maintain their professionalism, or elementary-school teachers suppressing their anger so as not to upset their young students when teaching them right from wrong. Only in recent studies have researchers begun looking into the effects that continual suppression of emotion in the workplace has on people. Continual suppression causes strain on those utilizing it, especially on those who may be natural externalizers. Strain elicited by such suppression can cause an elevated heart rate, increased anxiety, low commitment, and other effects which can be detrimental to an employee. The common conception is that expressive suppression in the workplace is beneficial for the organization and dangerous for the employee over long periods. However, in a 2005 study, Cote found that factors contributing to the social dynamics of emotions determine when emotion regulation increases, decreases, or does not affect strain at all. The suppression of unpleasant emotions such as anger contributes to high levels of strain. Expressive Suppression in Adolescents While the way a child is parented is thought to be crucial in determining whether they will develop tendencies to internalize their problems, "...mechanisms linking parental practices to adolescents' internalizing problems remain poorly understood", and "a potential pathway connecting parental behaviors to internalizing problems could be through adolescent expressive suppression". As children grow into adolescents, their brain structure changes in the regions and systems of the brain "that are considered pivotal for the regulation of behavior and emotion, and for the perception and evaluation of risk and reward". During adolescence, especially in mid-to-late adolescence, internalizing behavior increases. These significant changes can trigger vulnerability in adolescents, leading to mental health problems. Link with depression Expressive suppression, as an emotion regulation strategy, serves different purposes such as supporting goal pursuits and satisfying hedonic needs. Though expressive suppression is considered a weak influence on the experience of emotion, it has other functions. Expressive suppression is a goal-oriented strategy that is guided by people's beliefs and potentially by abstract theories about emotion regulation. In a 2012 study, Larsen and colleagues looked at the positive association between expressive suppression and depressive symptoms among adults and adolescents, which is influenced by parental support and peer victimization. They found a reciprocal relationship between parental support and depressive symptoms. The same was not true for the relationship between peer victimization and depressive symptoms.
Depressive symptoms followed decreased perception of parental support one year later. They found that initial suppression occurred after increases in depressive symptoms one year later, yet depression did not occur after suppression. However, in a continuation of their original study, Larsen and colleagues found that this relationship between suppression and depression was reversed. Depressive symptoms occurred after the use of suppression, and suppression did not occur after future depressive symptoms. The authors of this study support that expressive suppression has physiological, social, and cognitive costs. Some evidence says that "depressed people judge their negative emotions as less socially acceptable" than non-depressed people. "Appraising one's emotions as unacceptable mediates the relationship between negative emotion intensity and use of suppression". Negative social consequences An appropriate level of expressive suppression is vital for physiological and psychological health. However, excessive use of expressive suppression can negatively affect social interactions. While expressive suppression may seem like an easier way of coping with emotions in society or of becoming more likable in a social environment, it alters behavior in a way that is visible and undesirable to others. The act of suppressing facial expressions prohibits others in the social world from gaining information about a suppressor's emotional state. This can prevent a suppressor from receiving social-emotional benefits such as sympathy or sharing in collective positive and negative emotions that "facilitate social bonding". Secondly, expressive suppression is not always fully successful. If a suppressor accidentally shows signs of concealed feelings, others may perceive that the suppressor is covering up true emotions and may assume that the suppressor is insincere and uninterested in forming legitimate social relationships. Lastly, expressive suppression is hard work and therefore requires more cognitive processing than freely communicating emotions. If a suppressor is unable to devote full attention to social interactions because they are using cognitive power to suppress, the suppressor will not be able to remain engaged nor put in the work to maintain relationships. See also References Emotion
Expressive suppression
Biology
3,396
1,491,909
https://en.wikipedia.org/wiki/Canadian%20Medical%20Hall%20of%20Fame
The Canadian Medical Hall of Fame is a Canadian charitable organization, founded in 1994, that honours Canadians who have contributed to the understanding of disease and improving the health of people. It has an exhibit hall in London, Ontario, an annual induction ceremony, career exploration programs for youth and a virtual hall of fame. Laureates References External links Official site 1994 establishments in Ontario Health charities in Canada Medical Organizations based in London, Ontario Museums established in 1994 Companies based in London, Ontario Museums in London, Ontario Medical museums in Canada Science and technology halls of fame
Canadian Medical Hall of Fame
Technology
110
31,397,529
https://en.wikipedia.org/wiki/Log%20trigger
In relational databases, the log trigger or history trigger is a mechanism for automatically recording information about changes (inserted, updated and/or deleted rows) in a database table. It is a particular technique for change data capture, and is used in data warehousing for dealing with slowly changing dimensions. Definition Suppose there is a table which we want to audit. This table contains the following columns: Column1, Column2, ..., Columnn The column Column1 is assumed to be the primary key. These columns are defined to have the following types: Type1, Type2, ..., Typen The log trigger works by writing the changes (INSERT, UPDATE and DELETE operations) on the table to another table, the history table, defined as follows: CREATE TABLE HistoryTable ( Column1 Type1, Column2 Type2, : : Columnn Typen, StartDate DATETIME, EndDate DATETIME ) As shown above, this new table contains the same columns as the original table, plus two new columns of type DATETIME: StartDate and EndDate. This is known as tuple versioning. These two additional columns define the period of "validity" of the data associated with a given entity (the entity identified by the primary key); in other words, they record what the data looked like during the period between the StartDate (included) and the EndDate (not included). For each entity (distinct primary key) in the original table, the following structure is created in the history table. Data are shown as an example. Notice that, when the rows are shown chronologically, the EndDate of any row is exactly the StartDate of its successor (if any). This does not mean that both rows are valid at that point in time, since, by definition, the value of EndDate is not included. There are two variants of the log trigger, depending on how the old values (DELETE, UPDATE) and new values (INSERT, UPDATE) are exposed to the trigger (this is RDBMS dependent): Old and new values as fields of a record data structure CREATE TRIGGER HistoryTable ON OriginalTable FOR INSERT, DELETE, UPDATE AS DECLARE @Now DATETIME SET @Now = GETDATE() /* deleting section */ UPDATE HistoryTable SET EndDate = @Now WHERE EndDate IS NULL AND Column1 = OLD.Column1 /* inserting section */ INSERT INTO HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) VALUES (NEW.Column1, NEW.Column2, ..., NEW.Columnn, @Now, NULL) Old and new values as rows of virtual tables CREATE TRIGGER HistoryTable ON OriginalTable FOR INSERT, DELETE, UPDATE AS DECLARE @Now DATETIME SET @Now = GETDATE() /* deleting section */ UPDATE HistoryTable SET EndDate = @Now FROM HistoryTable, DELETED WHERE HistoryTable.Column1 = DELETED.Column1 AND HistoryTable.EndDate IS NULL /* inserting section */ INSERT INTO HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) SELECT Column1, Column2, ..., Columnn, @Now, NULL FROM INSERTED Compatibility notes The function GetDate() is used to obtain the system date and time; a specific RDBMS could use another function name or obtain this information in another way. Several RDBMSs (Db2, MySQL) do not allow the same trigger to be attached to more than one operation (INSERT, DELETE, UPDATE).
In such a case a trigger must be created for each operation; For an INSERT operation only the inserting section must be specified, for a DELETE operation only the deleting section must be specified, and for an UPDATE operation both sections must be present, just as it is shown above (the deleting section first, then the inserting section), because an UPDATE operation is logically represented as a DELETE operation followed by an INSERT operation. In the code shown, the record data structure containing the old and new values are called OLD and NEW. On a specific RDBMS they could have different names. In the code shown, the virtual tables are called DELETED and INSERTED. On a specific RDBMS they could have different names. Another RDBMS (Db2) even let the name of these logical tables be specified. In the code shown, comments are in C/C++ style, they could not be supported by a specific RDBMS, or a different syntax should be used. Several RDBMS require that the body of the trigger is enclosed between BEGIN and END keywords. Data warehousing According with the slowly changing dimension management methodologies, The log trigger falls into the following: Type 2 (tuple versioning variant) Type 4 (use of history tables) Implementation in common RDBMS IBM Db2 Source: A trigger cannot be attached to more than one operation (INSERT, DELETE, UPDATE), so a trigger must be created for each operation. The old and new values are exposed as fields of a record data structures. The names of these records can be defined, in this example they are named as O for old values and N for new values. -- Trigger for INSERT CREATE TRIGGER Database.TableInsert AFTER INSERT ON Database.OriginalTable REFERENCING NEW AS N FOR EACH ROW MODE DB2SQL BEGIN DECLARE Now TIMESTAMP; SET NOW = CURRENT TIMESTAMP; INSERT INTO Database.HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) VALUES (N.Column1, N.Column2, ..., N.Columnn, Now, NULL); END; -- Trigger for DELETE CREATE TRIGGER Database.TableDelete AFTER DELETE ON Database.OriginalTable REFERENCING OLD AS O FOR EACH ROW MODE DB2SQL BEGIN DECLARE Now TIMESTAMP; SET NOW = CURRENT TIMESTAMP; UPDATE Database.HistoryTable SET EndDate = Now WHERE Column1 = O.Column1 AND EndDate IS NULL; END; -- Trigger for UPDATE CREATE TRIGGER Database.TableUpdate AFTER UPDATE ON Database.OriginalTable REFERENCING NEW AS N OLD AS O FOR EACH ROW MODE DB2SQL BEGIN DECLARE Now TIMESTAMP; SET NOW = CURRENT TIMESTAMP; UPDATE Database.HistoryTable SET EndDate = Now WHERE Column1 = O.Column1 AND EndDate IS NULL; INSERT INTO Database.HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) VALUES (N.Column1, N.Column2, ..., N.Columnn, Now, NULL); END; Microsoft SQL Server Source: The same trigger can be attached to all the INSERT, DELETE, and UPDATE operations. Old and new values as rows of virtual tables named DELETED and INSERTED. CREATE TRIGGER TableTrigger ON OriginalTable FOR DELETE, INSERT, UPDATE AS DECLARE @NOW DATETIME SET @NOW = CURRENT_TIMESTAMP UPDATE HistoryTable SET EndDate = @now FROM HistoryTable, DELETED WHERE HistoryTable.ColumnID = DELETED.ColumnID AND HistoryTable.EndDate IS NULL INSERT INTO HistoryTable (ColumnID, Column2, ..., Columnn, StartDate, EndDate) SELECT ColumnID, Column2, ..., Columnn, @NOW, NULL FROM INSERTED MySQL A trigger cannot be attached to more than one operation (INSERT, DELETE, UPDATE), so a trigger must be created for each operation. The old and new values are exposed as fields of a record data structures called Old and New. 
DELIMITER $$ /* Trigger for INSERT */ CREATE TRIGGER HistoryTableInsert AFTER INSERT ON OriginalTable FOR EACH ROW BEGIN DECLARE N DATETIME; SET N = now(); INSERT INTO HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) VALUES (New.Column1, New.Column2, ..., New.Columnn, N, NULL); END; /* Trigger for DELETE */ CREATE TRIGGER HistoryTableDelete AFTER DELETE ON OriginalTable FOR EACH ROW BEGIN DECLARE N DATETIME; SET N = now(); UPDATE HistoryTable SET EndDate = N WHERE Column1 = OLD.Column1 AND EndDate IS NULL; END; /* Trigger for UPDATE */ CREATE TRIGGER HistoryTableUpdate AFTER UPDATE ON OriginalTable FOR EACH ROW BEGIN DECLARE N DATETIME; SET N = now(); UPDATE HistoryTable SET EndDate = N WHERE Column1 = OLD.Column1 AND EndDate IS NULL; INSERT INTO HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) VALUES (New.Column1, New.Column2, ..., New.Columnn, N, NULL); END; Oracle The same trigger can be attached to all the INSERT, DELETE, and UPDATE operations. The old and new values are exposed as fields of a record data structures called :OLD and :NEW. It is necessary to test the nullity of the fields of the :NEW record that define the primary key (when a DELETE operation is performed), in order to avoid the insertion of a new row with null values in all columns. CREATE OR REPLACE TRIGGER TableTrigger AFTER INSERT OR UPDATE OR DELETE ON OriginalTable FOR EACH ROW DECLARE Now TIMESTAMP; BEGIN SELECT CURRENT_TIMESTAMP INTO Now FROM Dual; UPDATE HistoryTable SET EndDate = Now WHERE EndDate IS NULL AND Column1 = :OLD.Column1; IF :NEW.Column1 IS NOT NULL THEN INSERT INTO HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate) VALUES (:NEW.Column1, :NEW.Column2, ..., :NEW.Columnn, Now, NULL); END IF; END; Historic information Typically, database backups are used to store and retrieve historic information. A database backup is a security mechanism, more than an effective way to retrieve ready-to-use historic information. A (full) database backup is only a snapshot of the data in specific points of time, so we could know the information of each snapshot, but we can know nothing between them. Information in database backups is discrete in time. Using the log trigger the information we can know is not discrete but continuous, we can know the exact state of the information in any point of time, only limited to the granularity of time provided with the DATETIME data type of the RDBMS used. Advantages It is simple. It is not a commercial product, it works with available features in common RDBMS. It is automatic, once it is created, it works with no further human intervention. It is not required to have good knowledge about the tables of the database, or the data model. Changes in current programming are not required. Changes in the current tables are not required, because log data of any table is stored in a different one. It works for both programmed and ad hoc statements. Only changes (INSERT, UPDATE and DELETE operations) are registered, so the growing rate of the history tables are proportional to the changes. It is not necessary to apply the trigger to all the tables on database, it can be applied to certain tables, or certain columns of a table. Disadvantages It does not automatically store information about the user producing the changes (information system user, not database user). This information might be provided explicitly. It could be enforced in information systems, but not in ad hoc queries. 
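One possible remedy for the disadvantage noted above (the history table does not record who made each change) is to add an extra column to the history table and have the trigger fill it in. The sketch below is only an illustration, written in the same SQL Server-flavored syntax as the earlier example and using the article's Column1, Column2, ..., Columnn placeholder notation; the column name ChangedBy is hypothetical, and SYSTEM_USER captures the database login of the session rather than the application-level user, which would still have to be supplied explicitly (for instance through SESSION_CONTEXT in SQL Server 2016 and later).

ALTER TABLE HistoryTable ADD ChangedBy NVARCHAR(128) NULL

CREATE TRIGGER TableTriggerWithUser ON OriginalTable
FOR DELETE, INSERT, UPDATE
AS
    DECLARE @Now DATETIME
    SET @Now = CURRENT_TIMESTAMP
    /* close the current version, exactly as in the plain log trigger */
    UPDATE HistoryTable
    SET EndDate = @Now
    FROM HistoryTable, DELETED
    WHERE HistoryTable.Column1 = DELETED.Column1
      AND HistoryTable.EndDate IS NULL
    /* open the new version, stamping the login that performed the change */
    INSERT INTO HistoryTable (Column1, Column2, ..., Columnn, StartDate, EndDate, ChangedBy)
    SELECT Column1, Column2, ..., Columnn, @Now, NULL, SYSTEM_USER
    FROM INSERTED

The extra column only changes what is recorded; the versioning itself works exactly as in the examples above.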
Examples of use Getting the current version of a table SELECT Column1, Column2, ..., Columnn FROM HistoryTable WHERE EndDate IS NULL It should return the same result set as the whole original table. Getting the version of a table at a certain point in time Suppose the @Date variable contains the point in time of interest. SELECT Column1, Column2, ..., Columnn FROM HistoryTable WHERE @Date >= StartDate AND (@Date < EndDate OR EndDate IS NULL) Getting the information of an entity at a certain point in time Suppose the @Date variable contains the point in time of interest, and the @Key variable contains the primary key of the entity of interest. SELECT Column1, Column2, ..., Columnn FROM HistoryTable WHERE Column1 = @Key AND @Date >= StartDate AND (@Date < EndDate OR EndDate IS NULL) Getting the history of an entity Suppose the @Key variable contains the primary key of the entity of interest. SELECT Column1, Column2, ..., Columnn, StartDate, EndDate FROM HistoryTable WHERE Column1 = @Key ORDER BY StartDate Getting when and how an entity was created Suppose the @Key variable contains the primary key of the entity of interest. The query looks for the version that has no predecessor, that is, the row whose StartDate matches no other row's EndDate. SELECT H2.Column1, H2.Column2, ..., H2.Columnn, H2.StartDate FROM HistoryTable AS H2 LEFT OUTER JOIN HistoryTable AS H1 ON H2.Column1 = H1.Column1 AND H2.Column1 = @Key AND H2.StartDate = H1.EndDate WHERE H1.EndDate IS NULL Immutability of primary keys Since the trigger requires the primary key to remain the same over time, it is desirable to ensure or at least maximize its immutability; if a primary key changed its value, the entity it represents would break its own history. There are several options for achieving or maximizing primary key immutability: Use of a surrogate key as a primary key. Since there is no reason to change a value with no meaning other than identity and uniqueness, it would never change. Use of an immutable natural key as a primary key. In a good database design, a natural key which can change should not be considered a "real" primary key. Use of a mutable natural key as a primary key (widely discouraged), where changes are propagated to every place where it is a foreign key. In such a case, the history table must also be updated. Alternatives Sometimes the slowly changing dimension technique is used as an alternative method. See also Relational database Primary key Natural key Surrogate key Change data capture Slowly changing dimension Tuple versioning Notes The log trigger was written by Laurence R. Ugalde to automatically generate the history of transactional databases. External links References Computer data Data management Data modeling Data warehousing
Log trigger
Technology,Engineering
3,022
105,012
https://en.wikipedia.org/wiki/Solvable%20group
In mathematics, more specifically in the field of group theory, a solvable group or soluble group is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup. Motivation Historically, the word "solvable" arose from Galois theory and the proof of the general unsolvability of quintic equations. Specifically, a polynomial equation is solvable in radicals if and only if the corresponding Galois group is solvable (note this theorem holds only in characteristic 0). This means associated to a polynomial there is a tower of field extensionssuch that where , so is a solution to the equation where contains a splitting field for Example The smallest Galois field extension of containing the elementgives a solvable group. The associated field extensionsgive a solvable group of Galois extensions containing the following composition factors (where is the identity permutation). with group action , and minimal polynomial with group action , and minimal polynomial with group action , and minimal polynomial containing the 5th roots of unity excluding with group action , and minimal polynomial Each of the defining group actions (for example, ) changes a single extension while keeping all of the other extensions fixed. The 80 group actions are the set . This group is not abelian. For example, , whilst , and in fact, . It is isomorphic to , where , defined using the semidirect product and direct product of the cyclic groups. is not a normal subgroup. Definition A group G is called solvable if it has a subnormal series whose factor groups (quotient groups) are all abelian, that is, if there are subgroups meaning that Gj−1 is normal in Gj, such that Gj /Gj−1 is an abelian group, for j = 1, 2, ..., k. Or equivalently, if its derived series, the descending normal series where every subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup of G. These two definitions are equivalent, since for every group H and every normal subgroup N of H, the quotient H/N is abelian if and only if N includes the commutator subgroup of H. The least n such that G(n) = 1 is called the derived length of the solvable group G. For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and every simple abelian group is cyclic of prime order. The Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups correspond to nth roots (radicals) over some field. The equivalence does not necessarily hold for infinite groups: for example, since every nontrivial subgroup of the group Z of integers under addition is isomorphic to Z itself, it has no composition series, but the normal series {0, Z}, with its only factor group isomorphic to Z, proves that it is in fact solvable. Examples Abelian groups The basic example of solvable groups are abelian groups. They are trivially solvable since a subnormal series is formed by just the group itself and the trivial group. But non-abelian groups may or may not be solvable. Nilpotent groups More generally, all nilpotent groups are solvable. In particular, finite p-groups are solvable, as all finite p-groups are nilpotent. 
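To make the subnormal series definition above concrete, here is a small standard worked example, stated in LaTeX notation; it is textbook material rather than something taken from this article. The symmetric group S_4 is solvable because it admits the chain

S_4 \;\triangleright\; A_4 \;\triangleright\; V_4 \;\triangleright\; \{e\},
\qquad
S_4/A_4 \cong \mathbb{Z}/2, \quad
A_4/V_4 \cong \mathbb{Z}/3, \quad
V_4/\{e\} \cong \mathbb{Z}/2 \times \mathbb{Z}/2,

where V_4 is the Klein four-group of double transpositions. Every quotient is abelian, so S_4 is solvable; since this chain is in fact the derived series of S_4, its derived length is 3.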
Quaternion groups In particular, the quaternion group is a solvable group given by the group extensionwhere the kernel is the subgroup generated by . Group extensions Group extensions form the prototypical examples of solvable groups. That is, if and are solvable groups, then any extensiondefines a solvable group . In fact, all solvable groups can be formed from such group extensions. Non-abelian group which is non-nilpotent A small example of a solvable, non-nilpotent group is the symmetric group S3. In fact, as the smallest simple non-abelian group is A5, (the alternating group of degree 5) it follows that every group with order less than 60 is solvable. Finite groups of odd order The Feit–Thompson theorem states that every finite group of odd order is solvable. In particular this implies that if a finite group is simple, it is either a prime cyclic or of even order. Non-example The group S5 is not solvable — it has a composition series {E, A5, S5} (and the Jordan–Hölder theorem states that every other composition series is equivalent to that one), giving factor groups isomorphic to A5 and C2; and A5 is not abelian. Generalizing this argument, coupled with the fact that An is a normal, maximal, non-abelian simple subgroup of Sn for n > 4, we see that Sn is not solvable for n > 4. This is a key step in the proof that for every n > 4 there are polynomials of degree n which are not solvable by radicals (Abel–Ruffini theorem). This property is also used in complexity theory in the proof of Barrington's theorem. Subgroups of GL2 Consider the subgroups of for some field . Then, the group quotient can be found by taking arbitrary elements in , multiplying them together, and figuring out what structure this gives. SoNote the determinant condition on implies , hence is a subgroup (which are the matrices where ). For fixed , the linear equation implies , which is an arbitrary element in since . Since we can take any matrix in and multiply it by the matrixwith , we can get a diagonal matrix in . This shows the quotient group . Remark Notice that this description gives the decomposition of as where acts on by . This implies . Also, a matrix of the formcorresponds to the element in the group. Borel subgroups For a linear algebraic group , a Borel subgroup is defined as a subgroup which is closed, connected, and solvable in , and is a maximal possible subgroup with these properties (note the first two are topological properties). For example, in and the groups of upper-triangular, or lower-triangular matrices are two of the Borel subgroups. The example given above, the subgroup in , is a Borel subgroup. Borel subgroup in GL3 In there are the subgroupsNotice , hence the Borel group has the form Borel subgroup in product of simple linear algebraic groups In the product group the Borel subgroup can be represented by matrices of the formwhere is an upper triangular matrix and is a upper triangular matrix. Z-groups Any finite group whose p-Sylow subgroups are cyclic is a semidirect product of two cyclic groups, in particular solvable. Such groups are called Z-groups. OEIS values Numbers of solvable groups with order n are (start with n = 0) 0, 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, 2, 2, 1, 15, 2, 2, 5, 4, 1, 4, 1, 51, 1, 2, 1, 14, 1, 2, 2, 14, 1, 6, 1, 4, 2, 2, 1, 52, 2, 5, 1, 5, 1, 15, 2, 13, 2, 2, 1, 12, 1, 2, 4, 267, 1, 4, 1, 5, 1, 4, 1, 50, ... 
Orders of non-solvable groups are 60, 120, 168, 180, 240, 300, 336, 360, 420, 480, 504, 540, 600, 660, 672, 720, 780, 840, 900, 960, 1008, 1020, 1080, 1092, 1140, 1176, 1200, 1260, 1320, 1344, 1380, 1440, 1500, ... Properties Solvability is closed under a number of operations. If G is solvable, and H is a subgroup of G, then H is solvable. If G is solvable, and there is a homomorphism from G onto H, then H is solvable; equivalently (by the first isomorphism theorem), if G is solvable, and N is a normal subgroup of G, then G/N is solvable. The previous properties can be expanded into the following "three for the price of two" property: G is solvable if and only if both N and G/N are solvable. In particular, if G and H are solvable, the direct product G × H is solvable. Solvability is closed under group extension: If H and G/H are solvable, then so is G; in particular, if N and H are solvable, their semidirect product is also solvable. It is also closed under wreath product: If G and H are solvable, and X is a G-set, then the wreath product of G and H with respect to X is also solvable. For any positive integer N, the solvable groups of derived length at most N form a subvariety of the variety of groups, as they are closed under the taking of homomorphic images, subalgebras, and (direct) products. The direct product of a sequence of solvable groups with unbounded derived length is not solvable, so the class of all solvable groups is not a variety. Burnside's theorem Burnside's theorem states that if G is a finite group of order paqb where p and q are prime numbers, and a and b are non-negative integers, then G is solvable. Related concepts Supersolvable groups As a strengthening of solvability, a group G is called supersolvable (or supersoluble) if it has an invariant normal series whose factors are all cyclic. Since a normal series has finite length by definition, uncountable groups are not supersolvable. In fact, all supersolvable groups are finitely generated, and an abelian group is supersolvable if and only if it is finitely generated. The alternating group A4 is an example of a finite solvable group that is not supersolvable. If we restrict ourselves to finitely generated groups, we can consider the following arrangement of classes of groups: cyclic < abelian < nilpotent < supersolvable < polycyclic < solvable < finitely generated group. Virtually solvable groups A group G is called virtually solvable if it has a solvable subgroup of finite index. This is similar to virtually abelian. Clearly all solvable groups are virtually solvable, since one can just choose the group itself, which has index 1. Hypoabelian A solvable group is one whose derived series reaches the trivial subgroup at a finite stage. For an infinite group, the finite derived series may not stabilize, but the transfinite derived series always stabilizes. A group whose transfinite derived series reaches the trivial group is called a hypoabelian group, and every solvable group is a hypoabelian group. The first ordinal α such that G(α) = G(α+1) is called the (transfinite) derived length of the group G, and it has been shown that every ordinal is the derived length of some group . p-solvable A finite group is p-solvable for some prime p if every factor in the composition series is a p-group or has order prime to p. A finite group is solvable iff it is p-solvable for every p. See also Prosolvable group Notes References External links Solvable groups as iterated extensions Properties of groups
Solvable group
Mathematics
2,521
72,640,914
https://en.wikipedia.org/wiki/Oliver%20Gilbert%20%28lichenologist%29
Oliver Gilbert (7 September 1936 – 15 May 2005) was an urban ecologist and lichenologist. He was a reader in landscape ecology at Sheffield University. He was one of the early users of lichens as indicators of air pollution, and also studied the ecology and diversity of wildlife in urban areas. Early life and education Oliver Lathe Gilbert and his twin brother Christopher were born in Lancaster. His parents were Ruth (nee Ainsworth) who wrote books for children, and Frank Gilbert, managing director of Durham Chemicals. One of his uncles was the mycologist Geoffrey Clough Ainsworth. The family soon moved to London and he attended the private co-educational boarding school St George's School, Harpenden. As a child he became interested in plants, and rock climbing. He studied botany at University of Exeter and was especially interested in mosses and liverworts. He then studied fungal diseases of plants at Imperial College, London, and took up a post as deputy warden at Malham Tarn Field Studies Centre. Here, he was inspired by Arthur Edward Wade to study lichens. While employed at University of Newcastle upon Tyne, he started research for a PhD degree on the subject of Biological Indicators of Air Pollution which was awarded in 1970. Career In 1963 he was employed by University of Newcastle upon Tyne as a demonstrator. He carried out research into the distribution and effects of air pollution on lichens and mosses and showed that their diversity reduced in moving from countryside to industrial urban areas. He moved to University of Sheffield as a lecturer in landscape ecology in 1968 and was promoted to reader in 1986. He retired in 1993 but continued as a part-time tutor until 2000. He learnt how to identify the lichen flora of the British Isles in the 1960s and went on many field visits to record more unusual species, and their locations in more remote parts of the country. In 1970 he began a systematic survey of lichen in the Cheviots that lasted for several decades. He also led surveys of the lichen flora of several Scottish islands and mountains. He collaborated with Brian John Coppins, Alan Fryday and Vince Giavarini. Gilbert wrote a book about the efforts to find lichens in the British Isles. He also studied the urban ecology of Sheffield, identifying that many fig trees grew on the banks of the river Don as it passed through Sheffield, supported by the warm microclimate caused by industrial cooling water. He undertook research into ways to repair urban brownfield land to become a biodiverse habitat and was co-author of the book Habitat Creation and Repair that was considered important for its philosophy and ethics as well as practical information. Awards and honours He was president of the British Lichen Society from 1976 until 1978 and editor of its bi-annual bulletin from 1980 until 1989. In 1997, he was made an honorary member of the society and in 2004 was awarded its Ursula Duncan Award in recognition of his outstanding contribution to the study of lichens in Britain. The Caledonian lichen Catillaria gilbertii was named in his honour by colleagues Alan Fryday and Brian John Coppins in 1996. They noted that the naming of this species, which produces twice the usual number of ascospores in its asci, was "particularly appropriate given the pre-disposition of the Gilbert family for producing twice the usual number of offspring at a time; Dr Gilbert himself is a twin and he also has twin daughters". 
However, inheritance of a tendency to have non-identical twins is a maternal characteristic; there is no inheritance of a tendency to have identical twins. Personal life He married Daphne Broughton in 1969 and they had three children together, before the marriage was dissolved. Publications Gilbert was the author or co-author of over 150 scientific publications and several books. These included: Papers and book chapters Gilbert, O. L. (2000) Aquatic lichens. In: Lichen Atlas of the British Isles. Fascicle 5. Aquatic Lichens and Cladonia (Part 2) (M. R. D. Seaward, ed.) London, British Lichen Society. Gilbert, O. L. (1996) Retaining trees on construction sites. Arboricultural Journal 20 39–45. Gilbert, O. L., Fryday, A. J., Giavarini, V. J. & Coppins, B. J. (1992) The lichen vegetation of the Ben Nevis range. The Lichenologist 24 43–56. Gilbert, O. L., Fox, B. W. & Purvis, O. W. (1982) The lichen flora of a high-level limestone-epidiorite outcrop in the Ben Alder Range, Scotland. The Lichenologist 14 165–174. Wathern, P. & Gilbert, O. L. (1979) The production of grassland on subsoil. The Journal of Environmental Management 8 269–275. Gilbert, O. L., Earland-Bennett, P. & Coppins, B. J.(1978) Lichens of the sugar limestone refugium in Upper Teesdale. New Phytologist 80 403–408 Gilbert, O. L. (1975) Effects of air pollution on landscape and land use around Norwegian aluminium smelters. Environmental Pollution 8 113–121. Gilbert, O. L. (1974) Lichens and air pollution. In: The Lichens (V. Ahmadjian & M. E. Hale, eds): 443–472. New York and London: Academic Press. Gilbert, O. L. (1970) A biological scale for the estimation of sulphur dioxide pollution. New Phytologist 69 629–634. Gilbert, O. L. (1968) Bryophytes as indicators of air pollution in the Tyne Valley. New Phytologist 67 15–30 Books The Lichen Hunters (2004) Lichens (2000) in the Collins New Naturalist series number 86 Habitat Creation and Repair (1998) co-authored with Penny Anderson The Ecology of Urban Habitats (1989) A Lichen Flora of Northumberland (1988) See also :Category:Taxa named by Oliver Gilbert (lichenologist) References 1936 births 2005 deaths Alumni of Newcastle University Academics of the University of Sheffield British lichenologists Ecologists
Oliver Gilbert (lichenologist)
Environmental_science
1,291
50,738,337
https://en.wikipedia.org/wiki/Brighton%20and%20Lewes%20Downs%20Biosphere%20Reserve
The Brighton and Lewes Downs Biosphere Reserve (established 2014) is a UNESCO Biosphere Reserve located in Sussex on the southeast coast of England near the city of Brighton and Hove. Forming a central unit of the hills of the South Downs National Park, it is centred on the Brighton chalk block that lies between the River Adur in the west and the River Ouse in the east. Chalk downland makes up the principal terrestrial landscape of the area, bounded at each end by the two river valleys. The coastline is dominated by high chalk cliffs in the east and urbanized plains in the west, running to the estuary of the River Adur at Shoreham-by-Sea. Area The reserve's surface area is . The core area is , surrounded by buffer zone(s) of and transition area(s) of . Ecological characteristics Brighton and Lewes Downs Biosphere Reserve is found within the temperate broadleaf forests biome of the Palearctic realm's British Island province and includes the following habitats: coastal chalk cliffs, sub-tidal chalk reef, freshwater wetland, shingle beaches, deciduous woodland, river estuaries and chalk grassland. Three distinct but interrelated environments make up the biosphere reserve area: rural, coastal and marine, and urban. The rural environment contains lowland chalk grassland, which is one of the richest wildlife habitats in the country and particularly important for its high botanical species diversity, with up to 40-50 vascular plant species per square metre. It also supports invertebrate communities, notably butterflies, with 20 species having a substantial proportion of their breeding populations within this habitat. Characteristic species include Phyteuma orbiculare, the wart-biter (Decticus verrucivorus), and the Adonis blue butterfly (Lysandra bellargus). The coastal and marine environments are made up of a moderately exposed coast and inshore area of the English Channel, with cliffs providing nesting niches for birds such as the northern fulmar (Fulmarus glacialis). The discontinuous chalk ledge to the west of Brighton gives rise to a unique series of low underwater north-facing chalk cliffs, with biological records of over 300 marine species in the area. The area is home to 211 species that have been recorded on international conservation lists, such as the European eel (Anguilla anguilla), in addition to 1,052 locally rare species including the hedgehog (Erinaceus europaeus) and yellowhammer (Emberiza citrinella). Important local genetic varieties of species include the unique elm trees (Ulmus spp.) as well as wild apple tree varieties. The domesticated breeds of Southdown sheep and rare Sussex cattle are also distinctive to the area. Socio-economic characteristics The biosphere area is home to around 371,500 people, the great majority of whom are urban-dwellers in the transition area (population around 358,500) in the main settlements of the city of Brighton and Hove and the towns of Lewes, Newhaven, Peacehaven, Shoreham and Southwick. The rural buffer zone of the South Downs National Park is additionally home to a population of around 13,500 people, whilst no inhabitants occupy the 14 protected areas that make up the core areas. Due to its proximity to London (55 miles) it is a popular destination for tourists, receiving around 12 million visitors per year. They are attracted to the natural environment, contemporary culture and heritage, which includes a range of archaeological sites dating back to the Neolithic period as well as a legacy of more recent urban architecture.
Evolving from a seaside resort to a service sector economy, Brighton and Hove has a total population of 273,000. However, socio-economic challenges exist with urban areas constrained in their size and future growth by their geography (between the sea and the national park). Economic activities include harvesting and extraction of primary resources, mainly through farming and commercial sea fishing. Coastal fishing sees the majority of the catch sent to local and regional markets and restaurants with the remainder going to mainland Europe where there is high demand. Sources References External links Official web site Detailed map of the Brighton and Lewes Downs Biosphere Reserve Sussex Biomes Biosphere reserves of England Biodiversity National parks in England Parks and open spaces in West Sussex Parks and open spaces in East Sussex Environment of East Sussex Environment of West Sussex
Brighton and Lewes Downs Biosphere Reserve
Biology
873
35,538,934
https://en.wikipedia.org/wiki/Saturation%20dome
A saturation dome is a graphical representation, used in thermodynamics, of the combination of liquid and vapor states of a substance. It can be used to find either the pressure or the specific volume as long as the other property is already known. Description A saturation dome uses the projection of a P–v–T diagram (pressure, specific volume, and temperature) onto the P–v plane. The points that create the left-hand side of the dome represent the saturated liquid states, while the points on the right-hand side represent the saturated vapor states (commonly referred to as the "dry" region). On the left-hand side of the dome there is compressed liquid and on the right-hand side there is superheated vapor. Within the dome itself, there is a liquid–vapor mixture. This two-phase region is commonly referred to as the "wet" region. The percentage of liquid and vapor can be calculated using vapor quality. The triple state line is where the three phases (solid, liquid, and vapor) exist in equilibrium. Critical point The point at the very top of the dome is called the critical point. This point is where the saturated liquid and saturated vapor lines meet. Past this point, it is impossible for a liquid–vapor transformation to occur. It is also the point at which the critical temperature and critical pressure are reached. Beyond this point, it is also impossible to distinguish between the liquid and vapor phases. States A saturation state is the point where a phase change begins or ends. For example, the saturated liquid line represents the point where any further addition of energy will cause a small portion of the liquid to convert to vapor. Likewise, along the saturated vapor line, any removal of energy will cause some of the vapor to condense back into a liquid, producing a mixture. When a substance reaches the saturated liquid line it is commonly said to be at its boiling point. The temperature will remain constant while it is at constant pressure underneath the saturation dome (boiling water at atmospheric pressure stays at a constant 212 °F) until it reaches the saturated vapor line. This line is where the mixture has converted completely to vapor. Further heating of the saturated vapor will result in a superheated vapor state. This is because the vapor will be at a temperature higher than the saturation temperature (212 °F for water at atmospheric pressure) for a given pressure. Vapor quality Vapor quality refers to the vapor–liquid mixture that is contained underneath the dome. This quality is defined as the fraction of the total mixture which is vapor, based on mass. A fully saturated vapor has a quality of 100% while a saturated liquid has a quality of 0%. Quality can be estimated graphically, as it is related to the specific volume, or how far horizontally across the dome the state point lies. At the saturated liquid state, the specific volume is denoted as vf, while at the saturated vapor state it is denoted as vg. Quality can be calculated by the equation x = (v − vf)/(vg − vf), where v is the specific volume of the liquid–vapor mixture. References Phase transitions
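A short worked illustration of the quality relation above, using rounded steam-table values that are not taken from this article: for saturated water at 100 °C one has vf ≈ 0.001043 m³/kg and vg ≈ 1.672 m³/kg, so a mixture whose measured specific volume is v = 0.5 m³/kg has quality

x = \frac{v - v_f}{v_g - v_f} = \frac{0.5 - 0.001043}{1.672 - 0.001043} \approx 0.30,

meaning roughly 30% of the mass is vapor and 70% is liquid.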
Saturation dome
Physics,Chemistry
601
1,952,635
https://en.wikipedia.org/wiki/Ion%20trap
An ion trap is a combination of electric and/or magnetic fields used to capture charged particles — known as ions — often in a system isolated from an external environment. Atomic and molecular ion traps have a number of applications in physics and chemistry such as precision mass spectrometry, improved atomic frequency standards, and quantum computing. In comparison to neutral atom traps, ion traps have deeper trapping potentials (up to several electronvolts) that do not depend on the internal electronic structure of a trapped ion. This makes ion traps more suitable for the study of light interactions with single atomic systems. The two most popular types of ion traps are the Penning trap, which forms a potential via a combination of static electric and magnetic fields, and the Paul trap which forms a potential via a combination of static and oscillating electric fields. Penning traps can be used for precise magnetic measurements in spectroscopy. Studies of quantum state manipulation most often use the Paul trap. This may lead to a trapped ion quantum computer and has already been used to create the world's most accurate atomic clocks. Electron guns (a device emitting high-speed electrons, used in CRTs) can use an ion trap to prevent degradation of the cathode by positive ions. History The physical principles of ion traps were first explored by F. M. Penning (1894–1953), who observed that electrons released by the cathode of an ionization vacuum gauge follow a long cycloidal path to the anode in the presence of a sufficiently strong magnetic field. A scheme for confining charged particles in three dimensions without the use of magnetic fields was developed by W. Paul based on his work with quadrupole mass spectrometers. Ion traps were used in television receivers prior to the introduction of aluminized CRT faces around 1958, to protect the phosphor screen from ions. The ion trap must be delicately adjusted for maximum brightness. Theory Any charged particle, such as an ion, feels a force from an electric or magnetic field. Ion traps work by using this force to confine ions in a small, isolated volume of space so that they can be studied or manipulated. Although any static (constant in time) electromagnetic field produces a force on an ion, it is not possible to confine an ion using only a static electric field. This is a consequence of Earnshaw's theorem. However, physicists have various ways of working around this theorem by using combinations of static magnetic and electric fields (as in a Penning trap) or by an oscillating electric field and a static electric field(Paul trap). Ion motion and confinement in the trap is generally divided into axial and radial components, which are typically addressed separately by different fields. In both Paul and Penning traps, axial ion motion is confined by a static electric field. Paul traps use an oscillating electric field to confine the ion radially and Penning traps generate radial confinement with a static magnetic field. Paul Trap A Paul trap that uses an oscillating quadrupole field to trap ions radially and a static potential to confine ions axially. The quadrupole field is realized by four parallel electrodes laying in the -axis positioned at the corners of a square in the -plane. Electrodes diagonally opposite each other are connected and an a.c. voltage is applied. Using Maxwell's equations, the electric field produced by this potential is electric field . Applying Newton's second law to an ion of charge and mass in this a.c. 
electric field, we can find the force on the ion using . We wind up with . Assuming that the ion has zero initial velocity, two successive integrations give the velocity and displacement as , , where is a constant of integration. Thus, the ion oscillates with angular frequency and amplitude proportional to the electric field strength and is confined radially. Working specifically with a linear Paul trap, we can write more specific equations of motion. Along the -axis, an analysis of the radial symmetry yields a potential . The constants and are determined by boundary conditions on the electrodes and satisfies Laplace's equation . Assuming the length of the electrodes is much greater than their separation , it can be shown that . Since the electric field is given by the gradient of the potential, we get that . Defining , the equations of motion in the -plane are a simplified form of the Mathieu equation, . Penning Trap A standard configuration for a Penning trap consists of a ring electrode and two end caps. A static voltage differential between the ring and end caps confines ions along the axial direction (between end caps). However, as expected from Earnshaw's theorem, the static electric potential is not sufficient to trap an ion in all three dimensions. To provide the radial confinement, a strong axial magnetic field is applied. For a uniform electric field , the force accelerates a positively charged ion along the -axis. For a uniform magnetic field , the Lorentz force causes the ion to move in circular motion with cyclotron frequency . Assuming an ion with zero initial velocity placed in a region with and , the equations of motion are , , . The resulting motion is a combination of oscillatory motion around the -axis with frequency and a drift velocity in the -direction. The drift velocity is perpendicular to the direction of the electric field. For the radial electric field produced by the electrodes in a Penning trap, the drift velocity will precess around the axial direction with some frequency , called the magnetron frequency. An ion will also have a third characteristic frequency between the two end cap electrodes. The frequencies usually have widely different values with . Ion trap mass spectrometers An ion trap mass spectrometer may incorporate a Penning trap (Fourier-transform ion cyclotron resonance), Paul trap or the Kingdon trap. The Orbitrap, introduced in 2005, is based on the Kingdon trap. Other types of mass spectrometers may also use a linear quadrupole ion trap as a selective mass filter. Penning ion trap A Penning trap stores charged particles using a strong homogeneous axial magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially. Penning traps are well suited for measurements of the properties of ions and stable charged subatomic particles. Precision studies of the electron magnetic moment by Dehmelt and others are an important topic in modern physics. Penning traps can be used in quantum computation and quantum information processing and are used at CERN to store antimatter. Penning traps form the basis of Fourier-transform ion cyclotron resonance mass spectrometry for determining the mass-to-charge ratio of ions. The Penning Trap was invented by Frans Michel Penning and Hans Georg Dehmelt, who built the first trap in the 1950s. Paul ion trap A Paul trap is a type of quadrupole ion trap that uses static direct current (DC) and radio frequency (RF) oscillating electric fields to trap ions. 
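Because the mathematical expressions of the Theory section above have not survived in this copy, the following sketch restates the standard Paul-trap stability problem in one common textbook convention; the symbols and numerical factors are assumptions of this sketch and need not match those of the original text. For a linear trap with quadrupole potential

\phi(x, y, t) = \left(U + V\cos\Omega t\right)\,\frac{x^{2} - y^{2}}{2\,r_{0}^{2}},

where U is the DC voltage, V the RF amplitude, \Omega the drive frequency and r_0 the characteristic trap radius, Newton's second law for an ion of charge Q and mass m reduces to the Mathieu equation

\frac{d^{2}u}{d\xi^{2}} + \left(a_{u} - 2 q_{u}\cos 2\xi\right) u = 0,
\qquad \xi = \frac{\Omega t}{2}, \quad u = x, y,

with

a_{x} = -a_{y} = \frac{4 Q U}{m r_{0}^{2}\Omega^{2}},
\qquad
q_{x} = -q_{y} = -\frac{2 Q V}{m r_{0}^{2}\Omega^{2}}.

For a_u = 0 the lowest stability region extends up to about |q_u| ≈ 0.908, and for small a_u and q_u the slow secular motion oscillates at roughly \omega \approx \tfrac{\Omega}{2}\sqrt{a_u + q_u^{2}/2}, which is the radial confinement described qualitatively above.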
Paul traps are commonly used as components of a mass spectrometer. The invention of the 3D quadrupole ion trap itself is attributed to Wolfgang Paul who shared the Nobel Prize in Physics in 1989 for this work. The trap consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. Ions are trapped in the space between these three electrodes by the oscillating and static electric fields. Kingdon trap and orbitrap A Kingdon trap consists of a thin central wire, an outer cylindrical electrode and isolated end cap electrodes at both ends. A static applied voltage results in a radial logarithmic potential between the electrodes. In a Kingdon trap there is no potential minimum to store the ions; however, they are stored with a finite angular momentum about the central wire and the applied electric field in the device allows for the stability of the ion trajectories. In 1981, Knight introduced a modified outer electrode that included an axial quadrupole term that confines the ions on the trap axis. The dynamic Kingdon trap has an additional AC voltage that uses strong defocusing to permanently store charged particles. The dynamic Kingdon trap does not require the trapped ions to have angular momentum with respect to the filament. An Orbitrap is a modified Kingdon trap that is used for mass spectrometry. Though the idea has been suggested and computer simulations performed neither the Kingdon nor the Knight configurations were reported to produce mass spectra, as the simulations indicated mass resolving power would be problematic. Trapped ion quantum computer Some experimental work towards developing quantum computers use trapped ions. Units of quantum information called qubits are stored in stable electronic states of each ion, and quantum information can be processed and transferred through the collective quantized motion of the ions, interacting by the Coulomb force. Lasers are applied to induce coupling between the qubit states (for single qubit operations) or between the internal qubit states and external motional states (for entanglement between qubits). See also Laser cooling Mass spectrometry Quantum jump References External links VIAS Science Cartoons A cranky view of an ion trap... Paul trap Mass spectrometry Ions
Ion trap
Physics,Chemistry
1,909
14,837,315
https://en.wikipedia.org/wiki/Police%20Reform%20Act%202002
The Police Reform Act 2002 (c. 30) is an Act of the Parliament of the United Kingdom. Amongst the provisions of the Act are the creation of the role of Police Community Support Officers, who have some police powers whilst not being 'sworn' constables, and the ability for chief constables to confer a more limited range of police powers on other (non-sworn) individuals as part of Community Safety Accreditation Schemes. The Act also replaced the Police Complaints Authority with the Independent Police Complaints Commission (later replaced by the Independent Office for Police Conduct). Section 59 Section 59 of the Act is a common tool now used by police constables and police community support officers (PCSOs) to seize vehicles being used in an anti-social manner. Vehicles can be seized if the police officer / PCSO reasonably believes that a mechanically propelled vehicle is being used in a manner: causing, or likely to cause alarm, distress or annoyance to the public, and: contravening section 3 (careless/inconsiderate driving), or contravening section 34 (prohibition of off-road driving/driving other than a road) of the Road Traffic Act 1988. Vehicles should be issued with a warning first, unless this is impracticable. An example of it being impractical would be the offenders leaving the vehicle/making off or the vehicle being unregistered and unable to be traced - therefore a warning unable to be placed. If an officer also reasonably believes a warning has been given within the past 12 months - whether or not recorded on the Police National Computer or similar system, they can seize the vehicle immediately. References Law enforcement in the United Kingdom Anti-social behaviour United Kingdom Acts of Parliament 2002 Police legislation in the United Kingdom
Police Reform Act 2002
Biology
353
13,207,951
https://en.wikipedia.org/wiki/Bridgeport%20Covered%20Bridge
The Bridgeport Covered Bridge is located in Bridgeport, Nevada County, California, southwest of French Corral and north of Lake Wildwood. It is used as a pedestrian crossing over the South Yuba River. The bridge was built in 1862 by David John Wood. Its lumber came from Plum Valley in Sierra County, California. The bridge was closed to vehicular traffic in 1972 and pedestrian traffic in 2011 due to deferred maintenance and "structural problems". On June 20, 2014, California Gov. Jerry Brown signed budget legislation that included $1.3 million for the bridge's restoration. The work was slated to be done in two phases—near-term stabilization followed by restoration. The bridge reopened to pedestrians in November 2021 following completion of the restoration work. The Bridgeport Covered Bridge has the longest clear single span of any surviving wooden covered bridge in the world. Historic landmark The bridge is California Registered Historical Landmark No. 390, was designated as a National Historic Civil Engineering Landmark in 1970, and was listed in the National Register of Historic Places in 1971. There are four plaques at the site. The State Historical Landmark plaque was placed in 1964. The landmark was rededicated in 2014. The inscription on the current plaque reads: "Built in 1862 by David J. Wood with lumber from his mill in Sierra County. The covered bridge was part of the Virginia Turnpike Company toll road that served the northern mines and the Nevada Comstock Lode. The associated ranch and resources for rest and repair provided a necessary lifeline across the Sierra Nevada. Utilizing a unique combination truss and arch construction, Bridgeport Covered Bridge is one of the oldest housed spans in the western United States and the longest single span wooden covered bridge in the world." The bridge was an important link in a freight-hauling route that stretched from the San Francisco Bay to Virginia City, Nevada and points beyond after the discovery of the Comstock Lode in 1859 sparked a mining boom in Nevada. Steamboats carried freight from the San Francisco Bay up the Sacramento River to Marysville, where it was loaded onto wagons for the trip across the Sierra Nevada via the Virginia Turnpike, and Henness Pass Road. The route across the bridge was ultimately eclipsed by the completion of the First transcontinental railroad as far as Reno in 1868 via Donner Pass, but it continued to serve nearby communities in the foothills until improved roads and bridges on other routes drew away most of the traffic. Longest span A report by the U.S. Department of the Interior states that the Bridgeport Covered Bridge ( No. CA-41) has clear spans of on one side and on the other, while Old Blenheim Bridge ( No. NY-331) had a documented clear span of in the middle (1936 drawings). With the 2011 destruction of the Old Blenheim Bridge, the Bridgeport Covered Bridge is the undisputed longest-span wooden covered bridge still surviving. Historically, the longest single-span covered bridge on record was Pennsylvania's McCall's Ferry Bridge with a claimed clear span of (built 1814–15, destroyed by ice jam 1817). See also California Historical Landmarks in Nevada County List of bridges documented by the Historic American Engineering Record in California List of covered bridges in California National Register of Historic Places listings in Nevada County, California External links Bridgeport Covered Bridge, at Nevada County, California website Pictures of the Bridgeport Covered Bridge, at California Dept. 
of Transportation South Yuba River State Park Bridgeport Covered Bridge at the Covered Spans of Yesteryear website South Yuba River Park Adventures, pictures, events, maps, wildflowers References Wooden bridges in California Pedestrian bridges in California Bridges in Nevada County, California Bridges completed in 1862 California Historical Landmarks Historic Civil Engineering Landmarks Covered bridges on the National Register of Historic Places in California National Register of Historic Places in Nevada County, California Former road bridges in the United States Historic American Buildings Survey in California Historic American Engineering Record in California Tourist attractions in Nevada County, California 1862 establishments in California Road bridges on the National Register of Historic Places in California Burr Truss bridges in the United States
Bridgeport Covered Bridge
Engineering
815
56,550,400
https://en.wikipedia.org/wiki/NGC%20525
NGC 525, also occasionally referred to as PGC 5232 or UGC 972, is a lenticular galaxy located approximately 95.6 million light-years from the Solar System in the constellation Pisces. It was discovered on 25 September 1862 by astronomer Heinrich d'Arrest. Observation history D'Arrest discovered NGC 525 using his 11-inch refractor telescope at Copenhagen. He located the galaxy's position with a total of two observations. As he also noted the mag 11-12 star just 2' northwest, his position is fairly accurate. The galaxy was later catalogued by John Louis Emil Dreyer in the New General Catalogue, where it was described as "very faint, very small, 11th or 12th magnitude star 5 seconds of time to west". Description The galaxy appears very dim in the sky as it only has an apparent visual magnitude of 13.3 and thus can only be observed with telescopes. It can be classified as type S0 using the Hubble Sequence. The object's distance of roughly 95.6 million light-years from the Solar System can be estimated using its redshift and Hubble's law. See also List of NGC objects (1–1000) References External links SEDS Lenticular galaxies Pisces (constellation) 0525 5232 Astronomical objects discovered in 1862 Discoveries by Heinrich Louis d'Arrest
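As a rough illustration of the Hubble's-law estimate mentioned in the Description above (not taken from the article's sources), the quoted distance of about 95.6 million light-years corresponds, for an assumed Hubble constant of roughly 70 km/s/Mpc, to a recession velocity of about 2,000 km/s:

    # Illustrative sketch: inverting Hubble's law (v = H0 * d) for the distance quoted above.
    # The Hubble constant value is an assumption (~70 km/s/Mpc); the article does not state one.
    H0 = 70.0                      # km/s per megaparsec (assumed)
    LY_PER_MPC = 3.2616e6          # light-years in one megaparsec
    d_mly = 95.6                   # distance quoted in the article, in millions of light-years
    d_mpc = d_mly * 1e6 / LY_PER_MPC
    v = H0 * d_mpc                 # implied recession velocity in km/s
    print(f"{d_mpc:.1f} Mpc -> v ~ {v:.0f} km/s")   # roughly 29.3 Mpc -> ~2050 km/s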
NGC 525
Astronomy
275
18,969,176
https://en.wikipedia.org/wiki/List%20of%20symbolic%20stars
This is a list of symbolic uses of "star" ideograms. Star (classification), a scoring system for hotels, restaurants and movies Star (football badge), representing trophies won by a football team Barnstar, a decorative painted object or image often used to adorn a barn Brunswick star, an eight- or sixteen-pointed star surrounding the British Royal Cypher, used on police badges Hex sign, a form of Pennsylvania Dutch folk art Mullet (heraldry), unconventional shapes of stars on coats-of-arms Nautical star, a popular tattoo design Red star, a political symbol of communism and socialism Star of Life, representing emergency medical services units and personnel Geometry Star polygon, a star drawn with a number of lines equal to the number of points Pentagram, a five-pointed star polygon Five-pointed star, a pentagram with internal line segments removed Lute of Pythagoras, a pentagram-based fractal pattern Hexagram, a six-pointed star polygon Heptagram, a seven-pointed star polygon Octagram, an eight-pointed star polygon Enneagram, a nine-pointed star polygon Decagram, a ten-pointed star polygon Hendecagram, an eleven-pointed star polygon Dodecagram, a twelve-pointed star polygon Magic star, a star polygon in which numbers can be placed at each of the vertices and intersections, such that the four numbers on each line sum to the same "magic" constant Typography Star (glyph), any of a number of star-shaped glyphs in typography Asterisk, a typographical symbol (*) Arabic star, a typographical symbol developed to be distinct from the asterisk Medals and awards 1-, 2-, 3-, 4-, or 5-star rank, officer ranks used in many armed services, as well as the rare 6-star rank. Africa Star, awarded by the British Commonwealth for service in World War II. Award star, issued by the United States military for meritorious action in combat. Bronze Star Medal, a United States Armed Forces individual military decoration. Gold star, the highest state decoration in the Soviet Union and several post-Soviet states. Service star, an attachment to a military decoration which denotes participation in military campaigns or multiple bestowals of the same award. Silver Star, a military decoration which can be awarded to a member of any branch of the United States Armed Forces. Order of the White Star, an Estonian civilian public service award. Star Scout, a rank in the Boy Scouts of America. Religious and supernatural uses Star of David, or Jewish Star, a hexagram symbolizing Israel, Judeans, and/or Jews; properly speaking, this "star" is called the "Shield of David," (Magen David), while the pentagram is the "Star of David." Note that this is a cultural, rather than religious symbol. Star of Lakshmi, a Hindu symbol associated with the goddess Lakshmi Star of Ishtar, an ancient symbol associated with the Mesopotamian goddess Ishtar Star and crescent, an Islamic symbol The Star (Tarot card), one of the Major Arcana Druze star, a symbol of the Druze religion Marian star, a six-pointed star used as a Roman Catholic symbol of celestial objects Rub el Hizb, a common Islamic symbol alQuds Star, a star representing 'alQuds' (Jerusalem) Haykal, a five-pointed star that represents the Bahá'í Faith Nine-pointed star, a common symbol of the Bahá'í Faith that represents unity and Bahá’.
List of symbolic stars
Mathematics
758
9,918,051
https://en.wikipedia.org/wiki/Chip%20timing
Transponder timing (also called chip timing or RFID timing) is a technique for measuring performance in sport events. A transponder working on a radio-frequency identification (RFID) basis is attached to the athlete and emits a unique code that is detected by radio receivers located at the strategic points in an event. Prior to the use of this technology, races were either timed by hand (with operators pressing a stopwatch) or using video camera systems. Transponder systems Generally, there are two types of transponder timing systems; active and passive. An active transponder consists of a battery-powered transceiver, connected to the athlete, that emits its unique code when it is interrogated. A passive transponder does not contain a power source inside the transponder. Instead, the transponder captures electromagnetic energy produced by a nearby exciter and utilizes that energy to emit a unique code. In both systems, an antenna is placed at the start, finish, and in some cases, intermediate time points and is connected to a decoder. This decoder identifies the unique transponder code and calculates the exact time when the transponder passes a timing point. Some implementations of timing systems require the use of a mat on the ground at the timing points while other systems implement the timing points with vertically oriented portals. History RFID was first used in the late 1980s primarily for motor racing and became more widely adopted in athletic events in the mid-1990s upon the release of low cost 134 kHz transponders and readers from Texas Instruments. This technology formed the basis of electronic sports timing for the world's largest running events as well as for cycling, triathlon and skiing. Some manufacturers made improvements to the technology to handle larger numbers of transponders in the read field or improve the tolerance of their systems to low-frequency noise. These low-frequency systems are still used a lot today. Other manufacturers developed their own proprietary RFID systems usually as an offshoot to more industrial applications. These latter systems attempted to get around the problem of reading large numbers of transponders in a read field by using the High Frequency 13.56 MHz RFID methodology that allowed transponders to use anti-collision algorithms to avoid tags interfering with each other's signal during the down-link between transponder and reader. Active transponder systems continued to mature and despite their much higher cost they retained market share in the high speed sports like motor racing, cycling and ice skating. Active systems are also used at high-profile events such as the Olympics due to their very high read rates and time-stamping precision. By 2005 a newer RFID technology was becoming available, mostly for industrial applications. The first and second generation (UHF) transponders and readers that were being developed followed a strict protocol to ensure that multiple transponders and readers could be used between manufacturers. Much like the HF tags, the UHF tags were much cheaper to produce in volume and formed the basis in the next revolution in sports timing. Currently, many of the largest athletic events are timed using disposable transponders either placed on the back of a race number or on the runner's shoe. The low cost meant that transponders were now fully disposable and did not need to be returned to the organizers after the event. 
Usage Very large running events (more than 10,000 participants) and triathlons were the first events to be transponder (or chip) timed because it is nearly impossible to manually time them. Also for large runs there are delays in participants reaching the start line, which penalize their performance. Some races place antennas or timing mats at both the start line and the finish line, which allow the exact net time to be calculated. Awards in a race are generally based on the "gun time" (which ignores any delay at the start) as per IAAF and USA Track and Field rules. However, some races use "net time" for presenting age group awards. In the past the transponder was almost always worn on the athlete's running shoe, or on an ankle band. This enabled the transponder to be read best on antenna mats because the distance between the transponder and the reader's antenna is minimized, offering the best capture rate. Transponders may be threaded onto the shoe laces for running. For triathlon a soft elastic ankle band holds the transponder to the leg and care is taken to ensure the transponder is in the correct orientation or polarity for maximum read performance. Transponders have also been placed on the race bib. In the past 5 years the newer UHF systems use transponders placed on the shoe lace, or stuck to the race number bib. In both cases, care must be taken to ensure the UHF tag does not directly touch a large part of the skin as this affects read performance. Despite this, UHF systems have read performances as good as (if not better than) the conventional low and high frequency systems. Because these UHF tags are made in huge volumes for industrial applications, their price is much lower than that of conventional re-usable transponders, and races generally do not collect them afterwards. As of 2015, many UHF timers use a combination of ground antennas with panel antenna(s) mounted on a tripod at the side of the race course. All RFID timing systems incorporate a box housing the reader(s) with peripherals like a microprocessor, serial or Ethernet communications and power source (battery). The readers are attached to one or more antennas that are designed for the particular operating frequency. In the case of low or medium frequencies these consist of wire loops incorporated into mats that cover the entire width of the timing point. For UHF systems the antennas consist of patch antennas that are protected in a matting system. The patch antennas may also be placed on stands or a finish gantry pointing towards the oncoming athlete. In most cases the distance between reader and antennas is restricted. Also more equipment is needed for events that require multiple timing points. Wider timing points require more readers and antennas. For active systems a simple wire loop is all that is needed since the transponder has its own power source and the loop serves as a trigger to turn on the transponder, then receive the relatively strong signal from the transponder. Therefore, active systems need fewer readers (or decoders) per timing point width. All systems utilize specialized software to calculate results and splits. This software usually resides on a separate PC that is connected to the readers via serial or Ethernet communications. The software relates the raw transponder code and timestamp data to each entrant in a database and calculates gun and net times of runners, or the splits of a triathlete.
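As a minimal illustration of the timing calculation described above (not any particular vendor's software), the sketch below collapses repeated reads of a tag at a timing point into a single crossing time and then computes gun and net times; the tag codes, timestamps and field names are invented for the example:

    # Minimal sketch of gun/net time calculation from raw chip reads (illustrative only).
    GUN_TIME = 0.0  # race clock at the starting gun, in seconds

    # Raw (tag_code, timing_point, race_clock_seconds) reads; a tag is usually read
    # several times while it is over a mat, so we keep the earliest read per point.
    raw_reads = [
        ("TAG123", "start", 42.1), ("TAG123", "start", 42.3),
        ("TAG123", "finish", 2101.6), ("TAG123", "finish", 2101.8),
    ]

    crossings = {}
    for tag, point, t in raw_reads:
        key = (tag, point)
        crossings[key] = min(t, crossings.get(key, float("inf")))

    entrants = {"TAG123": "Bib 57"}  # tag-to-entrant mapping from the race database
    for tag, bib in entrants.items():
        start = crossings[(tag, "start")]
        finish = crossings[(tag, "finish")]
        gun_time = finish - GUN_TIME     # includes any delay reaching the start line
        net_time = finish - start        # start-mat to finish-mat ("chip") time
        print(bib, f"gun {gun_time:.1f}s", f"net {net_time:.1f}s")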
In advanced systems, these results are instantly calculated and published to the internet so that athletes and spectators have access to results via any web-enabled device. References Sports equipment Running Radio-frequency identification
Chip timing
Engineering
1,421
47,238,886
https://en.wikipedia.org/wiki/Enterprise%20legal%20management
Enterprise legal management (ELM) is a practice management strategy of corporate legal departments, insurance claims departments, and government legal and contract management departments. ELM developed during the 1990s in response to increased corporate demands for accountability, transparency, and predictability. It employs software to manage internal legal documents and workflows, electronic billing and invoicing, and to guide decision-making through reporting and analytics. Definitions Still an evolving term, ELM is a recognized management discipline and a strategic objective of general counsel. Some have argued that ELM falls within the broader category of corporate governance, risk, and compliance (GRC); others maintain that ELM and GRC are separate entities along a continuum. Separate but related technologies include information governance, electronic discovery, legal hold, contract management, corporate secretary, and board of directors’ communications. ELM software may integrate some or all of these components. Historical development Early practice management Law practice management refers to the business aspect of operating a law firm or in-house legal team. Components include economics, workplace communication and management, ethics, and client service. Historically, corporate legal spend was considered a “black box” with limited predictability and transparency, making it difficult for corporate legal teams to parse differences of efficiency and cost among outside firms, or to benchmark firm performance against previously hired counsel. Transition and early adoption Several factors led to a shift away from traditional, low-technology solutions and toward ELM, most notably the expansion of the Internet during the 1990s and subsequent development of Software as a service (SaaS) platforms. Within legal departments, factors included greater regulatory compliance risk, smaller budgets, and board member demands for greater accountability, predictability, and transparency. Over time, the demand for budgetary information, including metrics such as the ratio of legal spend to total enterprise revenue, extended beyond board members to include other stakeholders. Corporate legal departments were positioned as the next frontier of corporate efficiency and risk management, and encouraged to operate as a true business partner. This created pressure to reduce costs and, when possible, generate revenue for the larger enterprise. Departments’ varied sizes and responsibilities created a range of practice management needs. A legal department with a small number of in-house attorneys might oversee thousands of cases managed by outside counsel, while a large internal legal department could handle most cases in-house. Smaller departments focused on management of workflow, collaboration, and spend management; larger departments had greater needs for internal matter management, attorney utilization, and document management. The first electronic transitions were to generic matter management applications, which replaced paper files as the system of record. Specialization and expansion Initial enterprise resource planning systems did not meet the specific needs of legal departments. External legal costs presented unique management challenges because of the billable-hour model and unpredictable labor requirements. Software specialization integrated matter management and preexisting, internal billing software.
This integration provided the opportunity to meet management demands for increased communication and information from within corporate legal departments. Ultimately, it extended further to combine legal information with finance, compliance, and risk-management departments. These developments also presented risks to enterprise-wide information management. Specialized software generated the potential for conflicts between company‐wide enterprise solutions and the ELM systems of in-house legal departments. Additionally, maintaining data security was and remains a preeminent concern, especially for SaaS-based ELM platforms. One in four chief legal officers (CLOs) reported a data breach during 2013–15, with the health-care industry especially vulnerable. ELM software The expansion of ELM as a practice management strategy was fostered by the growth of ELM software. That growth, in turn, was made possible through the expansion of the Internet and adoption of SaaS platforms across a wide range of industries. Total revenue of all SaaS providers accelerated into the 2010s, with the International Data Corporation (IDC) forecasting revenue to grow from $22.6 billion to $50.8 billion during 2014–18. ELM software primarily supports matter management and electronic billing, with derived analytics and reporting guiding legal department business processes. As of 2013, the maturity level of the industry was characterized as early mainstream, with market penetration of less than 20%. A Blue Hill Research report stated that economic motivations have encouraged adoption, with an average return on investment of 766%; median spend contraction of 4.5% from automated processing and rejection of nonconforming invoices; and median recurring annual spend reduction of 4% from use of analytics to support data-driven spend management. Variables affecting the purchase of ELM software include considerations of license type, usage scope, maintenance and support, installation location, and license fee calculation. Vendors employ a range of licensing practices, with no model inherently advantaged or disadvantaged. Components of ELM software Matter management Matter management includes the storage and retrieval of all data related to matters handled by a legal department, including the creation, revision, approval, and consumption of legal documents. Matter management is used to facilitate document collaboration internally and with outside counsel. In complex legal matters such as mass tort litigation, ELM software provides matter management capabilities such as batch uploading of invoices to expedite review and approval. Electronic billing Electronic billing provides a centralized repository for legal bills and invoices, and a method to deliver those bills securely for review and payment. ELM software integrates with internal electronic billing software through the Legal Electronic Data Exchange Standard (LEDES) format, which has standardized the transfer of legal data. Development of LEDES in the late 1990s was supplemented by the American Bar Association’s creation of Uniform Task-Based Management System (UTBMS) to establish consistent coding of services by outside counsel. Electronic billing automates review for compliance errors, allocation to cost centers, and routing for approval. Independent research suggests that it reduces costs by decreasing manual labor and paper costs. 
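As an illustration of the kind of automated invoice review described above, the following is a simplified sketch, not the actual LEDES specification or any vendor's rules engine; the field names, rate caps and task codes are assumptions for the example:

    # Simplified sketch of automated e-billing compliance checks (illustrative only).
    # Real systems ingest LEDES-format invoices; the dictionaries below stand in for that.
    AGREED_RATE_CAPS = {"partner": 650.0, "associate": 400.0}   # assumed billing guidelines
    VALID_TASK_CODES = {"L110", "L120", "L210"}                  # assumed UTBMS-style codes

    def review_line(line):
        """Return a list of compliance errors for one invoice line item."""
        errors = []
        if line["task_code"] not in VALID_TASK_CODES:
            errors.append("unknown task code")
        cap = AGREED_RATE_CAPS.get(line["timekeeper_level"])
        if cap is not None and line["rate"] > cap:
            errors.append(f"rate {line['rate']} exceeds agreed cap {cap}")
        if line["hours"] > 24:
            errors.append("more than 24 hours billed in one day")
        return errors

    line = {"task_code": "L120", "timekeeper_level": "associate", "rate": 450.0, "hours": 6.0}
    problems = review_line(line)
    print("route for approval" if not problems else f"reject: {problems}")

Invoices whose lines pass all checks would be allocated to cost centers and routed for approval; nonconforming lines would be rejected back to the firm, which is the mechanism behind the spend reductions cited below.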
Analytics and business process management Matter management and electronic billing data collected by ELM software is used to generate reports and provide analytics that influence business process management within legal departments. According to a Gartner survey, CLOs increasingly focus on regulatory compliance, customer and stakeholder satisfaction, and risk management. Efforts to reduce legal spend center on the reduction of outside counsel costs, achieved through the negotiation of alternative fee arrangements, increased reliance on internal counsel, and convergence of outside counsel. Use of flat fees for entire matters grew from 12 to 20% during 2013–15, with larger legal departments—those serving companies with at least $4 billion in annual revenue—more than twice as likely to use a flat fee structure compared to companies with less than $100 million in annual revenue. Legal departments use analytics to inform these budgeting and forecasting decisions, with the selection of outside law firms based on tradeoffs between cost and attorney performance. Internal historical billing data and industry benchmarks identify trends and differences among providers, and average fees associated with matter types. Some ELM software vendors offer comparative metrics harvested from subscribers. Recent developments within ELM software include the utilization of machine learning and artificial intelligence in order to predict claims costs. These types of predictions are intended to reduce insurer's combined ratio by driving early settlements for claims likely to carry a greater than average cost. ELM vendors that offer these predictive claims include LSG, Jalubro and Thomson Reuters. Early data regarding return on investment of predictive data analytics suggests average legal spend reductions of 6-11% See also Corporate lawyers General counsel Legal governance, risk management, and compliance References Business management Practice of law Software development
Enterprise legal management
Technology,Engineering
1,503
58,876,827
https://en.wikipedia.org/wiki/Many-body%20localization
Many-body localization (MBL) is a dynamical phenomenon occurring in isolated many-body quantum systems. It is characterized by the system failing to reach thermal equilibrium, and retaining a memory of its initial condition in local observables for infinite times. Thermalization and localization Textbook quantum statistical mechanics assumes that systems go to thermal equilibrium (thermalization). The process of thermalization erases local memory of the initial conditions. In textbooks, thermalization is ensured by coupling the system to an external environment or "reservoir," with which the system can exchange energy. What happens if the system is isolated from the environment, and evolves according to its own Schrödinger equation? Does the system still thermalize? Quantum mechanical time evolution is unitary and formally preserves all information about the initial condition in the quantum state at all times. However, a quantum system generically contains a macroscopic number of degrees of freedom, but can only be probed through few-body measurements which are local in real space. The meaningful question then becomes whether accessible local measurements display thermalization. This question can be formalized by considering the quantum mechanical density matrix ρ of the system. If the system is divided into a subregion A (the region being probed) and its complement B (everything else), then all information that can be extracted by measurements made on A alone is encoded in the reduced density matrix ρ_A = Tr_B ρ. If, in the long time limit, ρ_A approaches a thermal density matrix at a temperature set by the energy density in the state, then the system has "thermalized," and no local information about the initial condition can be extracted from local measurements. This process of "quantum thermalization" may be understood in terms of B acting as a reservoir for A. In this perspective, the entanglement entropy of a thermalizing system in a pure state plays the role of thermal entropy. Thermalizing systems therefore generically have extensive or "volume law" entanglement entropy at any non-zero temperature. They also generically obey the eigenstate thermalization hypothesis (ETH). In contrast, if ρ_A fails to approach a thermal density matrix even in the long time limit, and remains instead close to its initial condition ρ_A(0), then the system retains forever a memory of its initial condition in local observables. This latter possibility is referred to as "many body localization," and involves B failing to act as a reservoir for A. A system in a many body localized phase exhibits MBL, and continues to exhibit MBL even when subject to arbitrary local perturbations. Eigenstates of systems exhibiting MBL do not obey the ETH, and generically follow an "area law" for entanglement entropy (i.e. the entanglement entropy scales with the surface area of subregion A). A brief list of properties differentiating thermalizing and MBL systems is provided below. In thermalizing systems, a memory of initial conditions is not accessible in local observables at long times. In MBL systems, memory of initial conditions remains accessible in local observables at long times. In thermalizing systems, energy eigenstates obey ETH. In MBL systems, energy eigenstates do not obey ETH. In thermalizing systems, energy eigenstates have volume law entanglement entropy. In MBL systems, energy eigenstates have area law entanglement entropy. Thermalizing systems generically have non-zero thermal conductivity. MBL systems have zero thermal conductivity.
Thermalizing systems have continuous local spectra. MBL systems have discrete local spectra. In thermalizing systems, entanglement entropy grows as a power law in time starting from low entanglement initial conditions. In MBL systems, entanglement entropy grows logarithmically in time starting from low entanglement initial conditions. In thermalizing systems, the dynamics of out-of-time-ordered correlators forms a linear light cone which reflects the ballistic propagation of information. In MBL systems, the light cone is logarithmic. History MBL was first proposed by P.W. Anderson in 1958 as a possibility that could arise in strongly disordered quantum systems. The basic idea was that if particles all live in a random energy landscape, then any rearrangement of particles would change the energy of the system. Since energy is a conserved quantity in quantum mechanics, such a process can only be virtual and cannot lead to any transport of particle number or energy. While localization for single particle systems was demonstrated already in Anderson's original paper (coming to be known as Anderson localization), the existence of the phenomenon for many particle systems remained a conjecture for decades. In 1980 Fleishman and Anderson demonstrated the phenomenon survived the addition of interactions to lowest order in perturbation theory. In a 1998 study, the analysis was extended to all orders in perturbation theory, in a zero-dimensional system, and the MBL phenomenon was shown to survive. In 2005 and 2006, this was extended to high orders in perturbation theory in high dimensional systems. MBL was argued to survive at least at low energy density. A series of numerical works provided further evidence for the phenomenon in one dimensional systems, at all energy densities (“infinite temperature”). Finally, in 2014 Imbrie presented a proof of MBL for certain one dimensional spin chains with strong disorder, with the localization being stable to arbitrary local perturbations – i.e. the systems were shown to be in a many body localized phase. It is now believed that MBL can arise also in periodically driven "Floquet" systems where energy is conserved only modulo the drive frequency. Emergent integrability Many body localized systems exhibit a phenomenon known as emergent integrability. In a non-interacting Anderson insulator, the occupation number of each localized single particle orbital is separately a local integral of motion. It was conjectured (and proven by Imbrie) that a similar extensive set of local integrals of motion should also exist in the MBL phase. Consider for specificity a one dimensional spin-1/2 chain with Hamiltonian H = Σ_i h_i σ_i^z + J Σ_i (σ_i^x σ_{i+1}^x + σ_i^y σ_{i+1}^y + σ_i^z σ_{i+1}^z), where σ^x, σ^y and σ^z are Pauli operators, and the fields h_i are random variables drawn from a distribution of some width W. When the disorder is strong enough (W ≫ J) that all eigenstates are localized, then there exists a local unitary transformation to new variables τ_i^z such that H = Σ_i ε_i τ_i^z + Σ_{i,j} J_{ij} τ_i^z τ_j^z + ..., where the τ_i^z are Pauli operators that are related to the physical Pauli operators by a local unitary transformation, the ... indicates additional terms which only involve τ^z operators, and the coefficients J_{ij} (and those of the higher terms) fall off exponentially with distance. This Hamiltonian manifestly contains an extensive number of localized integrals of motion or "l-bits" (the operators τ_i^z, which all commute with the Hamiltonian). If the original Hamiltonian is perturbed, the l-bits get redefined, but the integrable structure survives.
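For readers who want to see the model above concretely, the following is a small illustrative numerical sketch (not from the article or its references): it builds a disordered spin-1/2 chain Hamiltonian for a few sites, diagonalizes it exactly, and computes the half-chain entanglement entropy of an eigenstate in the middle of the spectrum. The chain length, coupling and disorder strength are arbitrary choices for the example.

    # Illustrative exact-diagonalization sketch of the random-field spin-1/2 chain (assumed parameters).
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def site_op(op, i, L):
        """Embed a single-site operator at site i of an L-site chain via Kronecker products."""
        mats = [I2] * L
        mats[i] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    L, J, W = 8, 1.0, 8.0                      # chain length, coupling, disorder width (assumed)
    rng = np.random.default_rng(0)
    h = rng.uniform(-W, W, size=L)             # random fields h_i

    H = sum(h[i] * site_op(sz, i, L) for i in range(L))
    for i in range(L - 1):                     # nearest-neighbour coupling terms
        for op in (sx, sy, sz):
            H = H + J * site_op(op, i, L) @ site_op(op, i + 1, L)

    vals, vecs = np.linalg.eigh(H)
    psi = vecs[:, len(vals) // 2]              # an eigenstate in the middle of the spectrum

    # Half-chain entanglement entropy from the Schmidt decomposition of the eigenstate.
    M = psi.reshape(2 ** (L // 2), 2 ** (L - L // 2))
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    S = -np.sum(p * np.log(p))
    print(f"half-chain entanglement entropy: {S:.3f}")

At strong disorder the entropy of such mid-spectrum eigenstates stays small (area law), while at weak disorder it grows with system size (volume law), in line with the distinction drawn above.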
Exotic orders MBL enables the formation of exotic forms of quantum order that could not arise in thermal equilibrium, through the phenomenon of localization-protected quantum order. A form of localization-protected quantum order, arising only in periodically driven systems, is the Floquet time crystal. Experimental realizations A number of experiments have been reported observing the MBL phenomenon. Most of these experiments involve synthetic quantum systems, such as assemblies of ultracold atoms or trapped ions. Experimental explorations of the phenomenon in solid state systems are still in their infancy. See also Quantum scar Thermalization Time crystal References Quantum mechanics Quantum chaos theory
Many-body localization
Physics
1,532
7,913,274
https://en.wikipedia.org/wiki/Athenian%20sacred%20ships
Athenian sacred ships were ancient Athenian ships, often triremes, which had special religious functions such as serving in sacred processions (theoria) or embassies or racing in boat races during religious festivals. The two most famous such ships were the Paralus and the Salaminia, which also served as the messenger ships of the Athenian government in the 5th and 4th centuries BC. Other notable ships included one possibly named the Delia (Δηλία), a triakonter (thirty-oared galley) believed to be the ship in which Theseus had sailed to Crete, and which was involved in several traditional theoria to Delos; the vessel was constantly repaired by replacing individual planks to keep it seaworthy while maintaining its identity as the same ship. (For the philosophical question of the ship's identity, see Ship of Theseus.) After the reforms of Cleisthenes, a ship was named for each of the ten tribes that political leader had created; these ships may also have been sacred ships. Another known sacred ship was the Theoris (θεωρίς), a trireme kept for sacred embassies. Most probably the names of the ships derive from the following: Delia: so called because it was used (probably exclusively) for Delian theoriae Salaminia: so called because it was originally manned by natives of Salamis Paralus: so called because it was manned by sailors from the Paralia Theoris: from the term theori (θεωροί), who were sacred ambassadors or delegates and were dispatched on special missions (θεωρίαι) to carry out a religious task for the state, speak with an oracle, or represent the state at a religious celebration in another country. The Paralus and the Salaminia, and possibly some other sacred ships, served in the Athenian combat fleet. Those two vessels, being particularly swift, were used as scout and messenger ships, but also fought in the line of battle. The Paralus and Salaminia, meanwhile, also performed various tasks for the government; the Paralus appears to have carried most diplomatic missions, and the Salaminia carried official state messages; most famously, it was sent to arrest Alcibiades while that politician was commanding the Sicilian Expedition. These two triremes also had dedicated treasurers, or tamiai. References Sources Jordan, Borimir, The Athenian Navy in the Classical Period (Berkeley: University of California Press, 1975). Lewis, David M. "Book Review: The Athenian Navy in the Classical Period by Borimir Jordan". Classical Philology, Vol. 73, No. 1 (1978), pp. 70–72. Xenophon, A History of my Times Ancient Greek religion Ships of ancient Greece Sacred ships Religious objects
Athenian sacred ships
Physics
585
14,879,835
https://en.wikipedia.org/wiki/OLIG1
Oligodendrocyte transcription factor 1 is a protein that in humans is encoded by the OLIG1 gene. See also Oligodendrocyte Transcription factor OLIG2 References Further reading External links Transcription factors
OLIG1
Chemistry,Biology
46
1,347,818
https://en.wikipedia.org/wiki/Door%20closer
A door closer is a mechanical device that regulates the speed and action of a door’s swing. Manual closers store the force used to open the door in some type of spring and reuse it to close the door. Automatic types use electricity to regulate door swing behavior. Door closers can be linked to a building's fire and security alarm systems. History Early days One of the first references concerning a device to close a door can be found in the writings of Hero of Alexandria, who describes his "automata" which controlled the doors of temples, both opening and closing them automatically. Weights and levers have also been used to close doors. Another device for smaller domestic doors used a loop of rope or skein fixed to the door frame, which was twisted, with a piece of wood placed in between the twists to push the door. Opening the door twists the skein further; when the door is released, the rope's torsional force pushes the wooden arm back against the door, thereby closing it. In more modern times, the clock manufacturers Thwaites and Reed claimed in 1850 to be the original inventors of the spiral door spring. The earliest English patent for a door closing device consisting of weights and pulleys was issued in 1786 to Francis Moore. The first English patent that mentions a spring came a few years later, issued to Henry Downer, an ironmonger of Fleet Street, London, recognised for the invention of a "spring to shut a door" (1790). There were even earlier devices invented to close a door, for instance, Mr Delevitz's model of a door with spiral spring hinges (1768). Earlier still is a reference, by way of a letter between Sir Edward Filmer (3rd Bart.) and his brother, Beversham Filmer, dated 1748, in which they discuss a door spring. Whilst not a door closer, there was a mechanical statue, reported in the Stamford Gazette and displayed by a Monsieur Delanois at the White Swan in Stamford, December 21, 1736, that opened and closed its own door. Closer development The first door closers consisted of a spring mechanism only; as time went on, the rate at which the door closed was arrested, or checked, by adding an additional checking device. Door closers at this time were known as a door spring and check. Later these two devices were combined into one unit that both closed the door and slowed the speed at which this was done. These early "door closers" used a pneumatic piston to check the speed; later models used a hydraulic or oil-filled device for the same effect. The first patent for a pneumatic device to prevent the sudden slamming of a door was given to William Bullock and James Boaz, on May 13, 1813 (Patent Number 3695). An improved hydraulic device to prevent the "clapping" (slamming) of doors was patented by William Overden Snr and William Overden Jnr in 1864. Door closers that utilize the properties of vulcanised Indian rubber have also been patented and used. The use of door closers expanded during the Victorian era. Companies such as William Tonks and Son, James Cartland and Sons, and William Newman and Son were all based in and around Birmingham, with William Newman and Son receiving in 1974 an award for its one millionth door closer produced. In 1907 the Briton B was first placed on the market. In the United States, Lewis C. Norton started his business in 1877, entering the door closer market in 1880 with a door check for the Boston Trinity Church.
Eugene Blount, Francis Richards and Joseph Bardsley also played important parts in the development, improvement and commercialization of door closers along with other companies, including Yale, Norton, Rixson and The Shelby Spring Hinge Company. Types Manual A spring, hydraulic pressure, or a combination of both, is found in manual door closers. The energy used in opening the door is stored in a spring (compression, torsion, tension, volute or leaf), and released to close the door. Spring tension is typically adjustable, altering both opening and closing force. Most door closers use oil-filled hydraulic dampers to limit closing speed, and allow for soft closing. Other types use a friction-based mechanical speed control mechanism. “Controlled closers” use adjustable hydraulic valves to variously regulate the door’s opening and closing speed, latch speed, and delayed action return. These allow setting of the “sweep speed” (the rate at which the door travels along the majority of its closure, the sole closing setting on basic closers); the “latch speed” (the rate in the last 10 to 15 degrees of closing arc, allowing it to be set faster than the "sweep" to ensure proper latch closure); “delayed action” (which slows sweep speed dramatically for roughly the first half of its range, allowing more time for passage), and opening speed (preventing a door from being opened too fast, a useful feature in crowded environments, and those where the young, old, or infirm may be present). It is also particularly well-suited to exterior doors where there is a danger of wind catching and blowing them open, potentially harming the door, nearby objects, people, or pets. Automatic An automatic door closer, more often called a "door opener", opens the door itself, typically under the control of a push button, motion detector or other device, and then uses a motion sensor or proximity detector to determine when it is safe to close it. Automatic mechanisms are also used for security purposes, being controlled by a keypad, swipe card, or biometrically controlled electromagnetic device. The latter include retina scanners, fingerprint readers, and voice recognition technologies. Electric door closers may also be hooked to a building's fire alarm system. A triggered alarm cuts power to any electromagnetic hold-open device, allowing the doors to close. Configurations There are seven configurations of interior door-closer: surface-mounted; concealed in frame (jamb); concealed in header (transom); concealed in floor; concealed in door; concealed in shoe; and integral to hinge (as a spring hinge or self-closing hinge). Overhead or surface-mounted door closers come in four variations: slide-track arm, regular arm surface mounted, parallel arm surface mounted, and top jamb mounted; most are surface mounted, although some manufacturers offer concealed models too. Another type of surface mounted door closer is attached to the door frame behind the door (where the hinges are) next to the middle hinge. The "arm" (tail) rests against the door, and a spring that is twisted by the user opening the door closes the door by returning to its pre-twisted shape. This type of door closer is referred to as a "tail" spring and is one of the simpler mechanisms, having no damping control. There is also the storm door and screen door variation of the door closer: As the name implies, these piston type closers are used on storm doors, security doors, and screen doors, which give the home an extra line of defense against weather, intruders, and insects.
Whereas interior closers typically use hydraulics, storm door closers are more typically pneumatic, using air and springs to close the door. Storm door closers often have a small metal square washer on the rod that is used to lock the closer in the open position if required; more recent models have a button to actuate the hold open feature to make this process easier. Concealed, jamb-mounted type door closers, mounted in morticed recesses in the door and door frame, are concealed when the door is closed. These are available in controlled and uncontrolled versions, selected according to the application for which they are intended. Such concealed closers, when mounted inside a pocket in the door frame (door jamb), are commonly known as "perco's" or perkomatic closers. When door closers are mounted in the header they are known as transom closers. These can be HO (hold open) or NHO (non-hold open). Door closers that are mounted in the floor directly under the pivot point beneath a decor plate are referred to as floor springs and come in two variations: single action for doors opening one way (right and left hand) and double action for doors that open inward and outward. Both types can be either non-hold open (NHO) or hold open (HO). They consist of a pivot which protrudes from the top of the device and mates to a shoe (or strap) that the door is connected to, some kind of spring, and a damping device to control the rate at which the door closes (very early ones had no damping); these damping devices are either pneumatic (known as an air spring or air check) or hydraulic in nature. When a floor spring is used to control a door, it can be used in conjunction with hinges, but the door generally has a single pivot point at the top; this pivot point is known as a top centre. Floor springs are usually the most expensive and most hard-wearing of all the door closing devices in use. The shoe door closer, known as a heel spring, is housed entirely in the 'heel' of the door, inside a shoe. This shoe looks very much like the shoe used in conjunction with floor springs. A spring hinge uses a spring mounted in a hinge and is integral to its design. The spring can be either visible or hidden within a tube and can be found more commonly on interior doors. When used on doors that open both ways, they are known as double action spring hinges. A self-closing hinge combines door closers and spring hinges with an optional hold open feature into one component. These closer hinges eliminate the visual and physical clutter of using additional devices, as well as reduce maintenance problems associated with overhead and in-floor door closers. They are especially useful where other types of closers are difficult to use. The most durable self-closing hinge can handle doors up to 440 lb. Usage Door closers are widely used in both residential and non-residential settings. At home, they are most commonly found on screen and storm doors. They are also used in numerous applications in commercial, industrial, institutional, and public facilities from libraries and schools to museums and airports. Fire safety In non-residential settings, door closers are most commonly installed on bathroom doors, fire doors, and exit doors. During a fire, any door that penetrates a firewall must be fire-rated. Fire doors need to be closed in case of fire to help prevent the spread of fire and smoke. Any fire doors which are normally held open must automatically close and lock when a fire is present in the building.
The function of an emergency exit rim device (crash bar or panic bar) will permit escape through a fire door; however, it must re-latch once released. (A fire door must not be "dogged" to disable its latch.) In most countries, fire door performance is governed by national standards. Temperature control Door closers also play a role in maintaining desired interior temperatures, reducing air movement in and out of conditioned space. Security Door closers also play a role in security at building entrance doors, closing doors once somebody has passed through and re-latching the door lock. Noise control In buildings that require noise control (such as studios), door closers play an important part in the suppression of unwanted noise both into and out of rooms and the buildings themselves. Privacy Door closers are often used to ensure privacy in toilets and washrooms. Hygiene Door closers can also play a part in keeping buildings and rooms free from dirt, debris and pests. See also Door loop, a method for providing electric cabling to a door References External links Door automation Ironmongery Door furniture
Door closer
Engineering
2,420
61,024,315
https://en.wikipedia.org/wiki/C21H23NO
The molecular formula C21H23NO (molar mass: 305.41 g/mol, exact mass: 305.1780 u) may refer to: Dapoxetine Indapyrophenidone JWH-167 (1-pentyl-3-(phenylacetyl)indole) Molecular formulas
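The quoted molar mass can be checked from standard atomic weights; a quick arithmetic sketch using rounded IUPAC values:

    # Recompute the molar mass of C21H23NO from rounded standard atomic weights.
    atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
    formula = {"C": 21, "H": 23, "N": 1, "O": 1}
    molar_mass = sum(n * atomic_weight[el] for el, n in formula.items())
    print(f"{molar_mass:.2f} g/mol")  # ~305.42, in agreement with the quoted 305.41 g/mol to rounding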
C21H23NO
Physics,Chemistry
87
18,597,882
https://en.wikipedia.org/wiki/Pusey%20and%20Jones
The Pusey and Jones Corporation was a major shipbuilder and industrial-equipment manufacturer. Based in Wilmington, Delaware, it operated from 1848 to 1959. Shipbuilding was its primary focus from 1853 until the end of World War II, when the company converted the shipyard to produce machinery for paper manufacturing. The yard built more than 500 ships, from large cargo vessels to small warships and yachts, including Volunteer, the winner of the 1887 America's Cup. History The company began in 1848, when Joshua L. Pusey and John Jones formed a partnership in Wilmington, Delaware, to run a machine shop in space rented from a whaling company. The shipyard sat between the Christina River and the main line of the Pennsylvania Railroad. In 1851, Edward Betts and Joshua Seal, who were operating an iron foundry in Wilmington, purchased an interest in the business. The name of the company became Betts, Pusey, Jones & Seal. In 1854, Pusey and Jones built the first U.S. iron-hulled sailing vessel: a schooner named Mahlon Betts after Edward's father, who had built the foundry. At the beginning of the Civil War the company began building vessels for the U.S. military. The first was a sloop of war, which required immediate expansion of the workforce. The company also built engines and boilers for other shipbuilding firms. In 1887, the company built the first steel-hulled yacht to win the America's Cup, "Volunteer". During World War I, the firm grew to more than 2,000 employees. It established the Pennsylvania Shipbuilding Corporation shipyard in Gloucester City, New Jersey, with four ways capable of launching ships up to 12,500 tons and two ways of up to 7,000 tons. Shortly thereafter, the New Jersey Shipbuilding Corporation was formed and their shipyard, which was virtually an addition to the Pennsylvania S.B. yard, was planned to have six slipways for building 5,000-ton cargo steam ships. The keel of the first 7,000dwt tanker was laid on 9 September 1916. These two yards delivered 20 ships to the United States Shipping Board, all requisitions: 6 tankers of 7,000dwt 11 cargo ships of 12,500dwt Yard#7, War Serpent, launched as Indianapolis 3 cargo ships of 5,000dwt The Wilmington yard delivered 14 vessels, all requisitions, and two minesweepers for the United States Navy: 6 cargo, 2,600t 8 cargo, 3,000t 2 of 49 s , After the business slump of the early 1920s, the company reorganized in 1927 under businessman Clement C. Smith, becoming Pusey and Jones Corporation. The company focused on building large luxury steam and motor yachts for wealthy patrons. As World War II approached, military orders increased. The highest employment was reached during World War II, when more than 3,600 employees worked in the shipyards, plants and offices of the company. Pusey and Jones built 19 Type C1 ships for the U.S. Maritime Commission. Other craft such as minesweepers were built, along with specialty and smaller vessels. Many commercial and private vessels originally built by the company were also converted to military use. On Liberty Fleet Day — September 27, 1941 — the yard launched the SS Adabelle Lykes. After World War II, Pusey and Jones converted the shipyard's facilities to manufacture papermaking machinery. The company closed in 1959. 
Notable vessels See also :Category:Ships built by Pusey and Jones Harlan and Hollingsworth: Nearby shipyard in Wilmington, Delaware Jackson and Sharp Company: Nearby shipyard in Wilmington, Delaware References External links Pusey and Jones paper industry website List of ships built at the Wilmington shipyard shipbuildinghistory.com List of ships built at the Gloucester City shipyard shipbuildinghistory.com Wilmington Industrial History by Patrick Harshbarger Delaware River Shipyards yorkship.com Shipyards and Suppliers for U. S. Maritime Commission During World War II usmm.org Ship builders and Owners (list) wrecksite.eu Wilmington Strike Ends; Workers Return Today to Pusey & Jones Shipyards New York Times, December 5, 1941 Volunteer Americascup.com Outboard Profiles of Maritime Commission Vessels, The C1 Cargo Ship, Conversions and Subdesigns WWI Standard Built Ships, Shipbuilding Yards Photos of Pusey and Jones ships and facilities Building the Lydonia II Digital exhibit about a ship built at Pusey and Jones Defunct shipbuilding companies of the United States Maritime history of Delaware Wilmington Riverfront Companies based in Wilmington, Delaware American companies established in 1848 Manufacturing companies established in 1848 Manufacturing companies disestablished in 1959 1959 disestablishments in Delaware America's Cup yacht builders 1848 establishments in Delaware Papermaking in the United States Industrial machine manufacturers American companies disestablished in 1959 Defunct manufacturing companies based in Delaware
Pusey and Jones
Engineering
983
29,349,515
https://en.wikipedia.org/wiki/Human%20resource%20metrics
Human resource metrics are measurements used to determine the value and effectiveness of human resources (HR) initiatives, typically including such areas as turnover, training, return on human capital, costs of labor, and expenses per employee. Efficiency Human resources departments are often required to show the organizational value of money and time spent on human resources management training and activities. The reporting and analysis of HR performance in various areas aims to improve the organization's function and internal climate. HR's challenge is to provide business leaders with actionable information that helps them make decisions about investments, marketing strategies, and new products. HR metrics are a vital way to quantify the cost and impact of employee programs and HR processes and measure the success (or failure) of HR initiatives. They enable a company to track year-to-year trends and changes in these critical variables. They are how organizations measure the value of the time and money spent on HR activities in their organization. The following are some examples of metrics for the efficiency of HR functions: Cost per hire: the cost associated with a new hire. It is important to know not only how much it costs to hire, but also whether the money spent is used to hire the right people. (Boudreau; Lawler & Levenson, 2004) Time to fill: the total number of days taken to fill a job opening. The shorter the time, the more efficient the HR department is in finding a replacement for the job HR expense factor: the ratio of HR expense to total company expense. It shows whether spending on HR practices is excessive relative to overall company expenses. Effectiveness Effectiveness metrics show whether HR practices have a positive effect on the employees or the applicant pool. This is very important for HR because it is regarded as the leader in acquiring, developing and helping to deploy talent. (Boudreau; Lawler & Levenson, 2004) The following are some examples of metrics for the effectiveness of HR functions: (Kavanagh & Thite, 2009) Training ROI: the total financial gain an organization obtains from a particular training programme. It shows the effectiveness of the training and how much it benefits the company afterwards. Absence rate: it indicates whether the company has an employee absenteeism problem. It also reflects the effectiveness of HR policies as well as the company's own policies. It is closely linked with employee satisfaction. Employee retention and Employee turnover Developing core competency Metrics help develop core competency by demonstrating the connection between HR practices and the tangible effects on an organization's ability to gain and sustain competitive advantage. This approach often treats employees as human capital rather than as an expense. (Boudreau; Lawler & Levenson, 2004) The following are some examples: 1. Revenue factor: it indicates the effectiveness of company operations in using employees as human capital. 2. Defects rate: it indicates the number of defective products in the operation. The lower the defect rate, the more effective the HR practices are in developing the company's core competency in terms of reducing cost.
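As a simple numerical illustration of several of the metrics defined above (all figures are invented for the example, and the formulas are the commonly used ones rather than anything taken from the cited authors):

    # Illustrative calculations for common HR metrics (all input figures are invented).
    recruiting_costs = 120_000.0      # total internal + external hiring costs for the period
    hires = 40
    cost_per_hire = recruiting_costs / hires

    total_company_expense = 25_000_000.0
    hr_expense = 1_100_000.0
    hr_expense_factor = hr_expense / total_company_expense

    workdays_lost = 310               # days lost to absence in the period
    workdays_available = 220 * 50     # scheduled workdays x headcount
    absence_rate = workdays_lost / workdays_available

    training_benefit = 90_000.0       # estimated financial gain attributed to a training programme
    training_cost = 60_000.0
    training_roi = (training_benefit - training_cost) / training_cost

    print(f"cost per hire: ${cost_per_hire:,.0f}")
    print(f"HR expense factor: {hr_expense_factor:.1%}")
    print(f"absence rate: {absence_rate:.1%}")
    print(f"training ROI: {training_roi:.0%}")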
HR metric & human capital Some HR groups no longer only assess their effectiveness and efficiency and the contribution to the company, but also how those practices can positively affect the human capital (employees) in the organization. "Based on corporate culture, organizational values and strategic business goals and objectives, human capital measures indicate the health of the organization."(Lockwood, 2006) Key Performance Indicators (KPIs) are used to measure human capital outcomes, such as talent management, employee engagement and high performance, illustrates the firm's business, financial and strategic goals, and promotes partnership with senior management for organizational success. Nowadays, HR people integrated the traditional metrics to KPI which aligned with corporate objectives. The best KPIs should be able to reflect the human capital performance, such as financial outcomes, performance drivers. At the same time, when determining strategic KPIs, it is essential to consider who designs human capital measures and how they are created. Nancy Lockwood suggests the following 5 assists that can help HR to create a better KPI. It includes involving HR in overall business strategy; Enlisting leaders outside of HR to help develop the KPIs; Collaborating with business managers to ensure KPIs link to business unit strategic goals; Focusing more attention on links between people measures and intermediate performance drivers (e.g., customer satisfaction, engagement etc.); Increasing manager acceptance through training programs and concrete action plans; Working with HR to simplify metric and automate data collection. Human resources & metrics Human capital is important to organization because they are the people who are actually working for the organization. They build the company's core competencies and competitive advantages to the organization. With effective management of the human capital, a company can achieve the maximum outputs from its own human capital and be superior to other competitors. Some organizations are unaware even of how many people they have in their organization. The problem with HR is that they have been held unaccountable in the initiatives and programs they promote across the organization. Typically, nobody in the organization, let alone top business leaders of the organization are aware of the impact of these programs whether, positive or negative. This is because HR leaders have not been delivering metrics that show the value of their programs or investments. HR metrics is important because it allows organizations to make the connection between the value of what HR is doing and the outcomes of the business. If HR professionals don't measure their function's effectiveness and providing decision-making leaders the data they need, HR will continue to be undermined and eventually sidelined when it comes to having a seat at the table. Therefore, many experts urge HR professionals to use the data they have in front of them and understanding how metrics and analysis could give HR an advantage as an overall better strategic partner. This will allow them to help business leaders solve the people problems that matter to the organization. Before HR metrics, many of the HR activities and processes were difficult to quantify, making it hard to fully understand the real employee costs associated with each HR functions. 
For example, “a decade ago, if someone looked for turnover rate by performance category, it could be a two-week project.” With HR metrics, more specifically retention metrics, HR leaders are able to quantify variables such as turnover rate, average tenure, the rate of veteran workers, or the financial impact of employee turnover. These results can indicate how much separating employees is costing the company and help the company to create proactive plans to prevent future loss of top talent. More importantly, metrics enable leaders and decision makers in organizations to deliver HR services more efficiently and effectively. HR metrics and data Executives tend to make consistently better decisions when they use facts gathered from their organizations in objective ways. Many of the important decisions made by executives affect the business and the bottom line; therefore, in order to convince executive leaders that organizations are benefiting from their people or, on the contrary, losing money and wasting resources, HR will need to provide palpable evidence. This evidence can be found in HR metrics. The key to finding the right metrics for an organization's needs is to identify the overall business needs, as organizations may differ in terms of the metrics they use. Metrics used by the organization need to show how effective the human capital strategy is and whether the organization is acquiring, developing and deploying the proper talent. Organizations that have trouble deciding what metrics to use can always enlist the help of a specialist or consultant to do a company-wide assessment of their organization. Measuring key data with HR metrics As long as an organization has employees, it will have turnover, both voluntary and involuntary, and any turnover experienced by the organization is money and resources being lost. Most companies have no idea of the impact turnover has on the organization, but when the cost of turnover is 15%, 25% or 35% of an organization's profits, it has a large impact on the organization as a whole. By using metrics, organizations will often be surprised by how much their HR functions can save on hiring, staffing, and separation costs. Organizations interested in tracking talent through metrics should consider the following: Percentage of performance goals met or exceeded, showing if the organization is meeting the performance goal aligned with its mission Percentage of employees rated at the top performance appraisal level who are paid above the average salary Percentage of top-performing employees who resign for compensation-related reasons Turnover percentages of low-performing managers Percentage of employees in performance management programs that show improvement within a year Percentage and rate of involuntary turnover in key positions Having HR metrics is a first and critical part, and obtaining the data is another, but being able to make meaning of the data and provide a compelling story about what it means in relation to the business strategy is just as crucial.
However, organizations have to ensure that the data they hold have integrity and are of good quality. While HR systems are one way of obtaining metrics, many organizations, because of a lack of resources or time, or simply because they do not know where to begin, can enlist the help of a retention specialist or purchase systems designed solely for HR metrics. HRIS systems (Workday, SuccessFactors, Oracle HR, etc.) often provide strong reporting tools within the systems to reflect the cost of people, while talent acquisition systems (such as Taleo) provide insight into recruitment costs. If the HR department wants to create data around organizational insight, engagement, culture and, in general, the opinions of employees, software such as FieldRate (business intelligence) or Beekeeper (more focused on communications) can be used, especially if there is a need to reach employees who do not have a corporate email address. References Further reading External links Shrm hrmetrics analytics Shrm publications Case study of Hrmetrics Human resource management Metrics
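The turnover and cost figures discussed above come down to simple arithmetic on headcount and separation counts. As a rough illustration only (a minimal sketch in plain Python; the headcounts and the per-leaver replacement cost are hypothetical assumptions, not values from any source cited here), a monthly turnover rate and a crude cost estimate could be computed as follows:

def turnover_rate(separations, headcount_start, headcount_end):
    # Monthly turnover rate = separations divided by average headcount for the month
    average_headcount = (headcount_start + headcount_end) / 2
    return separations / average_headcount

# Hypothetical example figures
monthly_separations = 6
rate = turnover_rate(monthly_separations, headcount_start=410, headcount_end=402)

# Crude cost estimate: separations times an assumed replacement cost per leaver
assumed_cost_per_leaver = 15000  # hypothetical recruiting, onboarding and lost-productivity cost
estimated_monthly_cost = monthly_separations * assumed_cost_per_leaver

print(f"Monthly turnover rate: {rate:.1%}")                              # about 1.5% with these numbers
print(f"Estimated monthly turnover cost: ${estimated_monthly_cost:,}")   # $90,000 with these numbers

Tracking such a rate over time, annualizing it, and setting the resulting cost against profit is one simple way to arrive at the kind of impact on profits (15%, 25% or 35%) mentioned above.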
Human resource metrics
Mathematics
2,102
20,753,177
https://en.wikipedia.org/wiki/Euthanasia%20device
A euthanasia device is a machine engineered to allow an individual to die quickly with minimal pain. The most common devices are those designed to help terminally ill people die by voluntary euthanasia or assisted suicide without prolonged pain. They may be operated by a second party, such as a physician, or by the person wishing to die. There is an ongoing debate on the ethics of euthanasia and the use of euthanasia devices. Notable devices Thanatron Invented by Jack Kevorkian, who used this device and called it a "Thanatron" or death machine after the Greek daemon, Thanatos. It worked by pushing a button to deliver the euthanizing drugs mechanically through an IV. It had three canisters mounted on a metal frame. Each bottle had a syringe that connected to a single IV line in the person's arm. One contained saline, one contained a sleep-inducing barbiturate called sodium thiopental and the last a lethal mixture of potassium chloride, which immediately stopped the heart, and pancuronium bromide, a paralytic medication to prevent spasms during the dying process. Two deaths were assisted with this method. Mercitron Kevorkian assisted others with a device that employed a gas mask fed by a canister of carbon monoxide which was called "Mercitron" (mercy machine). This became necessary because Kevorkian's medical license had been revoked after the first two deaths, and he could no longer have legal access to the substances required for the "Thanatron". It was a rudimentary device consisting of a canister of carbon monoxide attached to a face mask with a tube. A valve must be released to start the gas flowing. Depending on the person's disability, a makeshift handle may be attached to the valve to make it easier to turn. Or, with the valve in the "open" position, a clip or clothespin could be clamped on the tubing. Pulling it off allows the gas to flow. By Kevorkian's estimates, this method took 10 minutes or longer. Sometimes he encouraged people to take sedatives or muscle relaxants to keep them calm as they breathed deeply of the gas. Deliverance Machine The Deliverance Machine was invented by Philip Nitschke. It consisted of software entitled Deliverance, that came on a special laptop that could be connected to an IV in a person's arm. The computer program asked a series of questions to confirm the person's intent to die that being: 1." Are you aware that if you go ahead to the last screen and press the “Yes” button, you will be given a lethal dose of medications and die?" 2. "Are you certain you understand that if you proceed and press the “Yes” button on the next screen that you will die?" 3." In 15 seconds you will be given a lethal injection… press “Yes” to proceed." After answering affirmatively to all of the questions, a lethal injection of barbiturates was triggered. In an interview Nitschke said that, even if it had been legal for a doctor to give a lethal injection, he preferred that the patient be in control of the administration of the drugs. Reducing the role of a physician also allowed a patient to be alone with their family during the euthanasia process. The machine was used, legally, while the Australian Northern Territory's Rights of the Terminally Ill Act 1995 was in effect; the act was eventually nullified by legislation of the Australian Parliament. The machine was put on display in the British Science Museum. Exit International's euthanasia device The Exit International euthanasia device was invented by Philip Nitschke in 2008. 
It uses a canister of nitrogen, a plastic suicide bag, and a plastic tube with one end attached to the gas canister and the other fixed inside the bag by a tie held by adhesive tape. Nitschke said, "That idea of giving people access to a means of feeling that they're back in control of this issue is actually a way of prolonging life. It may seem paradoxical, but what we find is when people feel that they're back in control, they're less likely to do desperate things." Background The basic principle of autoeuthanasia by anoxia was first described in the book Final Exit by Derek Humphry in 1991. The original methodology was devised, using helium, by the NuTech group. Description Nitschke described his device as a modification of the exit bag with helium method described in The Peaceful Pill Handbook. Helium was replaced by a cylinder of compressed nitrogen and a regulator to supply the nitrogen into a plastic bag. One advantage of this method was the availability of larger amounts of nitrogen and flow rates last longer. Nitschke states that nitrogen is also more physiologically inert than helium, with less chance of adverse reaction, and that loss of consciousness is quick with death following within minutes. Unlike helium cylinders, nitrogen cylinders can be refilled in the event of leakage and nitrogen gas can't be detected during an autopsy. Process The principle behind the device is oxygen deprivation that leads to hypoxia, asphyxia and death within minutes. Deprivation of oxygen in the presence of carbon dioxide creates panic and a sense of suffocation (the hypercapnic alarm response), and struggling even when unconscious, whereas anoxia in the presence of an inert gas, like nitrogen, helium or argon, does not. Close contact with an enclosed inert gas is lethal, but released into the open air, it quickly disperses, and is safe for others. It is neither flammable nor explosive. Humphry's book describes close contact with the gas achieved by enclosing the head in a strong, clear plastic bag, secured around the neck, with the inert gas fed into the bag by plastic tubing. Suicides using this method are documented in the forensic literature. In the study Asphyxial suicide with helium and a plastic bag (Ogden et al.), the authors describe a typical case history, in which an elderly cancer sufferer used a plastic bag which was secured over her head, a helium tank, and a plastic hose attached to the tank valve and plastic bag. The authors noted that a suicide bag filled with helium will cause almost immediate unconsciousness, followed within minutes by death. Time to loss of consciousness in a bag filled with nitrogen is 15 seconds, according to professors Copeland, Pappas and Parr, who campaigned for a more humane execution method in the US state of Oklahoma. Sarco device In 2017, Nitschke invented the 3D-printed suicide capsule, which he named "the Sarco". The Sarco would contain a touchpad and nitrogen, and once an activation code is entered, "the person is again asked if they wish to die". An affirmative answer causes nitrogen to flow into the capsule, displacing oxygen, and death follows shortly thereafter. The Sarco machine cannot be printed on small 3D printers. The Sarco offers a "euphoric death". Nitschke planned to release the open source plans for the Sarco by 2019. In fiction Suicide booth A suicide booth is a fictional machine for committing suicide. Suicide booths appear in numerous fictional settings, one of which is the American animated series Futurama. 
Compulsory self-execution booths were also featured in an episode of the original Star Trek TV series entitled "A Taste of Armageddon". The concept can be found as early as 1893. When a series of suicides were vigorously discussed in United Kingdom newspapers, critic William Archer suggested that in the golden age there would be penny-in-the-slot machines by which a man could kill himself. Following Archer's statement in 1893, the 1895 story "The Repairer of Reputations" by Robert W. Chambers featured the Governor of New York presiding over the opening of the first "Government Lethal Chamber" in the then-future year of 1920, after the repeal of laws against suicide: However, as Chambers's protagonist who relates the story is suffering from brain damage, it remains ambiguous whether or not he is an unreliable narrator. Modern writer Martin Amis provoked a small controversy in January 2010 when he facetiously advocated "suicide booths" for the elderly, of whom he wrote: Futurama In the world of Futurama, Stop-and-Drop suicide booths resemble phone booths and cost one quarter per use. The booths have at least three modes of death: "quick and painless", "slow and horrible", and "clumsy bludgeoning" though, it is also implied that "electrocution, with a side order of poison" exists, and that the eyes can be scooped out for an extra charge. After a mode of death is selected and executed, the machine cheerfully says, "You are now dead. Thank you for using Stop-and-Drop, America's favorite suicide booth since 2008", or in Futurama: The Beast with a Billion Backs, "You are now dead, please take your receipt", and at this time many untaken receipts are shown. The first appearance of a suicide booth in Futurama is in "Space Pilot 3000", in which the character Bender wants to use it after learning that the girders he bent were used to construct suicide booths. Fry at first mistakes the suicide booth for a phone booth, and Bender offers to share it with him. Fry requests a collect call, which the machine interprets as a "slow and horrible" death. It then turns out that "slow and horrible" can be survived by pressing oneself against the side of the booth, leading Bender to accuse the machine of being a rip-off. In Futurama: Bender's Big Score, after failing to initially chase down Fry in the year 2000, Bender wants to kill himself, but then ironically mistakes a regular phone booth for a suicide booth. A suicide booth reappeared in Futurama: The Beast with a Billion Backs where Bender once again attempts to end his life, but is saved when dropped into the League of Robots' lair. During the season 6 episode "Ghost in the Machines", Bender commits suicide in a booth named Lynn that is still angry at him over the end of their relationship six months earlier; his ghost eventually makes its way back to his body so he can continue living. According to series co-creator Matt Groening, the suicide booth concept was inspired by a 1937 Donald Duck cartoon, Modern Inventions, in which Donald Duck visits a Museum of the Future and is nearly killed by various push button gadgets. The suicide booth was closely enough associated with Bender's character that in 2001 it was featured as the display stand for the Bender action figure. It was also one of the many features of the series which troubled the executives at Fox when Groening and David X. Cohen first pitched the series. 
In other media In the Star Trek episode "A Taste of Armageddon", people who were deemed war casualties by the government of Eminiar VII were required to enter suicide booths. Treaty arrangements require that everyone who is calculated as "dead" in the hypothetical thermonuclear war simulated using computers actually die, without actually damaging any infrastructure. In the end, the computers are destroyed, the war can no longer be calculated in this way, the treaty breaks down, and faced with a real threat, (presumably) peace begins. After the Heaven's Gate mass suicide event was linked by tabloids to an extreme fascination with science fiction and Star Trek in particular it was noted that multiple episodes, including "A Taste of Armageddon", actually advocated an anti-suicide standpoint as opposed to the viewpoint expressed by the Heaven's Gate group. In the seventeenth season The Simpsons episode "Million Dollar Abie", a suicide machine called a "diePod" (a pun on the iPod) is featured. The diePod allows the patient to choose visual and auditory themes that present themselves as the patient is killed. It also shows three different modes, namely, "Quick Painless Death", "Slow and Painful Death", and "Megadeath" (a pun on a band of a similarly spelled name). It was a reference to the suicide building in Soylent Green. Being a direct parody of the aforementioned scene, Abraham Simpson receives the opportunity to select his final vision and musical accompaniment: 1960s-era footage of "cops beatin' up hippies" to the tune of "Pennsylvania 6-5000" by the Glenn Miller Orchestra. See also Euthanasia Euthanasia Coaster Sarco pod Suicide bag References External links PBS Frontline: The Thanatron
Euthanasia device
Physics,Technology
2,615
15,026,438
https://en.wikipedia.org/wiki/Architecture%20of%20Tokyo
The architecture of Tokyo has largely been shaped by the city's history. Twice in recent history the metropolis has been left in ruins: first in the 1923 Great Kantō earthquake and later after extensive firebombing in World War II. Because of this and other factors, Tokyo's current urban landscape consists mostly of modern and contemporary architecture, and older buildings are scarce. Tokyo was once a city of low buildings packed with single-family homes; today the city has a larger focus on high-rise residential buildings and urbanization. Tokyo's culture is changing and its exposure to natural catastrophes is increasing; because of this, its architecture has had to change dramatically since the 1990s. The city sits on Tokyo Bay, which makes typhoons and rising sea levels a current risk, alongside volcanoes and large earthquakes. As a result, a new focus has been placed on waterborne risks such as rising sea levels and on seismic events. Tokyo has been growing at a steady rate in recent years, so new buildings have been built to greater heights in order to make the most of the land they occupy. Tokyo continues to advance in technology and to grow, which will continue to change its architecture for years to come. History of Japanese architecture Japanese architects developed ways to build temples, furniture, and homes without using screws or nails; instead, joints are cut so that the pieces hold everything in place. Although more time-consuming to make, such joints tend to hold up to natural disasters better than nails and screws, which is one reason some temples in Japan are still standing despite recent natural events. Japanese homes were greatly influenced by China until 57 BC, when Japanese building began to grow more distinct from other cultures. Until 660 AD, homes and buildings constructed in Japan were made from stone and timber. Even though all buildings from this era are long gone, there are documents showing traditional structures. Nonetheless, wood still remains the most important material in Japanese architecture. Historic architects Arata Isozaki: Isozaki was born on July 23, 1931, in Kyushu, Japan. He studied architecture at the University of Tokyo. In 1963 he opened his own studio and became a leading architect of Japan's postwar period. The first building Isozaki worked on was the Ōita Prefectural Library (1966). Kenzo Tange: Tange was born on September 4, 1913. His best-known works are the Hiroshima Peace Center and the gymnasium built for the 1964 Olympic Games. In Tokyo, his design for the New Tokyo City Hall Complex made him famous both locally and internationally. Notable buildings Tokyo Skytree: One of the most famous structures in Tokyo is the Skytree, standing 634 metres (2,080 feet) tall, which made it the world's tallest free-standing tower and, on its completion, the second-tallest structure in the world. The main function of the Skytree is telecommunications. Construction of the building started in 2008 and was finished in May 2012; the main architect on the project was the Nikken Sekkei firm. Today the tower is a popular tourist stop, with observation decks and restaurants located in the tower. Tokyo Tower: Tokyo Tower serves as an observation tower and a broadcasting antenna. It is located in the Minato district of Tokyo, Japan. The tower was finished in 1958 and cost 2.8 billion yen. Standing 1,092 feet (333 metres) tall, it is the second tallest tower in Japan, after the Tokyo Skytree.
The tower was originally modeled on the Eiffel Tower in Paris, France; however, Tokyo Tower is 13 meters taller than the Eiffel Tower. Tokyo Tower is painted orange and white to comply with air traffic regulations for flights in and out of Tokyo, and it has to be repainted every 5 years. Asakusa Kannon Temple: Built in 645 A.D. and located in one of the most famous parts of Tokyo, Asakusa Kannon Temple is one of the oldest and most famous tourist destinations in Tokyo. The temple can be found in the Asakusa district, located in the center of Shitamachi. It is dedicated to the Bodhisattva Kannon and serves as a Buddhist temple and a place of Buddhist practice. Asakusa also hosts an annual festival called the Sanja Matsuri. Nakagin Capsule Tower: Designed by architect Kisho Kurokawa, the Nakagin Capsule Tower was built in 1972 in only 30 days. Unlike other architecture, the tower is built of removable capsules of about 107 square feet each, furnished with basic appliances, a bathroom, and a bed. The original plans called for each capsule to be replaced every 25 years; however, this proved to be too expensive. Since then residents have modified the capsules for different purposes, and the tower still houses residents to this day. Yoyogi National Gymnasium: Built for the 1964 Olympic Games, the Yoyogi National Gymnasium was finished a little over a month before the games started. The architect on the project was Kenzo Tange. The gymnasium was used for basketball and swimming competitions during the games. In 2016 a campaign started to get the building onto the World Heritage list. Rainbow Bridge National Diet Building Tokyo Metropolitan Government Building Tokyo Big Sight Asahi Beer Hall by Philippe Starck Tokyo Station Tokyo International Forum Roppongi Hills Tokyo Imperial Palace Akasaka Palace Gallery References External links Tokyo Architecture checkonsite.com architectural guide to Tokyo - combines maps, addresses, ratings, reviews Tokyo
Architecture of Tokyo
Engineering
1,088
28,354,927
https://en.wikipedia.org/wiki/C16H14N4O
{{DISPLAYTITLE:C16H14N4O}} The molecular formula C16H14N4O (molar mass: 278.31 g/mol, exact mass: 278.1168 u) may refer to: Adibendan Sudan Yellow 3G, also known as Solvent Yellow 16 or C.I. disperse yellow Molecular formulas
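As a quick consistency check, the molar mass quoted above can be recomputed from standard atomic weights. A minimal sketch in plain Python (the atomic weights below are rounded standard values):

# Rounded standard atomic weights in g/mol
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

# Elemental composition of C16H14N4O
composition = {"C": 16, "H": 14, "N": 4, "O": 1}

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"{molar_mass:.1f} g/mol")  # about 278.3 g/mol, consistent with the 278.31 g/mol quoted above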
C16H14N4O
Physics,Chemistry
76
9,883,115
https://en.wikipedia.org/wiki/Geer-Melkus%20Construction
Geer-Melkus Construction Co., Inc. was a commercial construction company located in Grand Island, Nebraska. The company was founded in 1893 and was in existence until 1986. Overview Originally known as the Geer Company, it later became known as Geer-Maurer Construction Company before becoming known as Geer-Melkus. Geer-Melkus was the general contractor on many prominent buildings and civil engineering projects throughout the Midwest, including the Stuhr Museum of the Prairie Pioneer in Grand Island, designed by Edward Durell Stone. The company was involved in several prominent lawsuits, including Geer-Melkus Constr. Co. v. United States, 302 F.2d 181 (8th Cir. 1962); United States v. Geer-Melkus Constr. Co., 195 F. Supp. 362 (D.N.D. 1961); Wood River v. Geer-Melkus Constr. Co., 233 Neb. 179 (Neb. 1989); and Geer-Melkus Constr. Co. v. Hall County Museum Board, 186 Neb. 615 (Neb. 1971). Geer-Melkus Constr. Co. v. United States, 302 F.2d 181 (8th Cir. 1962) This case was brought to the court by Bison Construction Co. against Geer-Melkus Construction in order to receive payment for a contract. Bison Construction claimed that Geer-Melkus had withheld a payment totalling $16,196.19. Geer-Melkus admitted to withholding said payments but said it was on account of damages incurred by water escaping through a break in the water line which Bison was contracted to construct. The end result was a judgement against Geer-Melkus and they were impelled to pay Bison Construction the outstanding total. References External links Metal Building Construction Companies based in Nebraska 1893 establishments in Nebraska Construction and civil engineering companies of the United States Construction and civil engineering companies established in 1893 Defunct companies based in Nebraska Construction and civil engineering companies disestablished in 1986 1986 disestablishments in Nebraska
Geer-Melkus Construction
Engineering
439
8,090,717
https://en.wikipedia.org/wiki/Mental%20mapping
In behavioral geography, a mental map is a person's point-of-view perception of their area of interaction. Although this kind of subject matter would seem most likely to be studied by fields in the social sciences, this particular subject is most often studied by modern-day geographers. They study it to determine subjective qualities from the public, such as personal preference, and practical uses of geography, like driving directions. Mass media also have a virtually direct effect on a person's mental map of the geographical world. The perceived geographical dimensions of a foreign nation (relative to one's own nation) may often be heavily influenced by the amount of time and relative news coverage that the news media spend covering news events from that foreign region. For instance, a person might perceive a small island to be nearly the size of a continent, merely based on the amount of news coverage that he or she is exposed to on a regular basis. In psychology, the term names the information maintained in the mind of an organism by means of which it may plan activities, select routes over previously traveled territories, etc. The rapid traversal of a familiar maze depends on this kind of mental map if scents or other markers laid down by the subject are eliminated before the maze is re-run. Background Mental maps are an outcome of the field of behavioral geography, and they are considered one of the first subjects of study to intersect geographical settings with human action. The most prominent contribution to the study of mental maps is found in the writings of Kevin Lynch. In The Image of the City, Lynch used simple sketch maps created from memory of an urban area to reveal five elements of the city: nodes, edges, districts, paths and landmarks. Lynch claimed that "Most often our perception of the city is not sustained, but rather partial, fragmentary, mixed with other concerns. Nearly every sense is in operation, and the image is the composite of them all." (Lynch, 1960, p 2.) The creation of a mental map relies on memory, as opposed to being copied from a preexisting map or image. In The Image of the City, Lynch asks a participant to create a map as follows: "Make it just as if you were making a rapid description of the city to a stranger, covering all the main features. We don't expect an accurate drawing- just a rough sketch." (Lynch 1960, p 141) In the field of human geography, mental maps have led to an emphasis on social factors and the use of social methods over quantitative or positivist methods. Mental maps have often led to revelations regarding the social conditions of a particular space or area. Haken and Portugali (2003) developed an information view, which argued that the face of the city is its information. Bin Jiang (2012) argued that the image of the city (or mental map) arises out of the scaling of city artifacts and locations. He addressed why the image of the city can be formed, and he suggested ways of computing the image of the city, or more precisely the collective image of the city, using increasingly available geographic information such as Flickr and Twitter. Using mental maps, researchers can predict individual decision making and spatial selection, as well as evaluate routing and navigation. A cognitive map's utility as a mnemonic and metaphorical device is one of its other benefits as a shaper of world and local attitudes.
The major fields of study within the domain of mental maps are geography, spatial cognition and neurophysiology. These fields aim to understand how a subject traces routes out into the space around him or her, and how this leads to memorization and internal representations. Overall, these representations take the form of drawings, positioning in a graph, or oral/textual narratives, but they are also reflected in behavior in space, which can be recorded as tracking data. Research applications Mental maps have been used in a collection of spatial research. Many studies have been performed that focus on the quality of an environment in terms of feelings such as fear, desire and stress. A study by Matei et al. in 2001 used mental maps to reveal the role of media in shaping urban space in Los Angeles. The study used Geographic Information Systems (GIS) to process 215 mental maps taken from seven neighborhoods across the city. The results showed that people's fear perceptions in Los Angeles are not associated with high crime rates but are instead associated with a concentration of certain ethnicities in a given area. The mental maps recorded in the study draw attention to these areas of concentrated ethnicities as parts of the urban space to avoid or stay away from. Mental maps have also been used to describe the urban experience of children. In a 2008 study by Olga den Besten, mental maps were used to map out the fears and dislikes of children in Berlin and Paris. The study looked into the absence of children in today's cities and the urban environment from a child's perspective of safety, stress and fear. Peter Gould and Rodney White have performed prominent analyses in the book "Mental Maps." This book is an investigation into people's spatial desires. The book asks of its participants: "Suppose you were suddenly given the chance to choose where you would like to live- an entirely free choice that you could make quite independently of the usual constraints of income or job availability. Where would you choose to go?" (Gould, 1974, p 15) Gould and White use their findings to create a surface of desire for various areas of the world. The surface of desire is meant to show people's environmental preferences and regional biases. In an experiment done by Edward C. Tolman, the development of a mental map was seen in rats. A rat was placed in a cross-shaped maze and allowed to explore it. After this initial exploration, the rat was placed at one arm of the cross and food was placed at the next arm to the immediate right. The rat was conditioned to this layout and learned to turn right at the intersection in order to get to the food. When placed at different arms of the cross maze, however, the rat still went in the correct direction to obtain the food because of the initial mental map it had created of the maze. Rather than just deciding to turn right at the intersection no matter what, the rat was able to determine the correct way to the food no matter where in the maze it was placed. The idea of mental maps is also used in strategic analysis. David Brewster, an Australian strategic analyst, has applied the concept to strategic conceptions of South Asia and Southeast Asia. He argues that popular mental maps of where regions begin and end can have a significant impact on the strategic behaviour of states. A collection of essays documenting current geographical and historical research on mental maps was published by the Journal of Cultural Geography in 2018.
See also Spatial cognition References Knowledge representation Cognitive psychology Human geography Spatial cognition
Mental mapping
Physics,Biology,Environmental_science
1,388
75,365,758
https://en.wikipedia.org/wiki/Ophirite
Ophirite is a tungstate mineral first discovered in the Ophir Hill Consolidated mine in the Ophir district, Oquirrh Mountains, Tooele County, Utah, United States of America. It was found underground near a calcite cave in one veinlet, six centimeters wide by one meter long, surrounded by different sulfides. Before its closing in 1972, the mine was dominated by sulfide minerals, and the Ophir district was known as a source of zinc, copper, silver, and lead ores. The crystals are formed as tablets. It is the first known mineral to contain a heteropolyanion, a lacunary defect derivative of the Keggin anion. The chemical formula of ophirite is Ca2Mg4[Zn2Mn3+2(H2O)2(Fe3+W9O34)2]·46H2O. The mineral has been approved by the Commission on New Minerals and Mineral Names, IMA, to be named ophirite for its type locality, the Ophir Hill Consolidated mine. Occurrence Ophirite is found in association with scheelite and pyrite. The mineral is thought to be produced by oxidative alteration of sulfides: a reaction between dolomite and scheelite with oxidizing, late acidic hydrothermal solutions in the presence of calcium-rich, pyrite-bearing hornfels. It occurs in one veinlet, which is surrounded by sphalerite, galena, bournonite, unidentified sulfide minerals, foci of apatite, and sericite-containing pyrite, and it typically lies at the interface between scheelite and dolomite. Also present in the vein are crystals of sulfur and fluorite. Physical properties Ophirite is an orange-brown, transparent mineral with a vitreous luster. It exhibits a hardness of 2 on the Mohs hardness scale. Ophirite occurs as tablet-shaped crystals on {001} with irregular {100} and {110} bounding forms. Ophirite has no observed cleavage and an irregular/uneven fracture. The measured specific gravity is 4.060 g/cm3. Optical properties Ophirite is optically biaxial positive, which means it refracts light along two axes, with 2Vmeas. = 43(2)°. The refractive indices are α ~ 1.730(3), β ~ 1.735(3), and γ ~ 1.770(3). Dispersion is strong, r > v. Its pleochroism is light orange-brown for X and Y, and orange-brown for Z, where X < Y << Z. Observations indicate that the chemical species are in their fully oxidized states. Chemical properties Ophirite is a tungstate, and is the first mineral discovered containing [4]Fe3+[6]W6+9O34, a group in the structural unit of the ophirite polyanion. Tri-lacunary Keggin anions are well known in synthetic compounds, but ophirite is the first known example of a mineral with a tri-lacunary Keggin polyanion. The empirical chemical formula for ophirite, calculated on the basis of 30 cations, is Ca1.73Mg3.99[Zn2.02Mn3+1.82(H2O)2(Fe3+2.34W17.99O68)2]·45.95H2O. The ideal formula for ophirite is Ca2Mg4[Zn2Mn3+2(H2O)2(Fe3+W9O34)2]·46H2O. Chemical composition X-ray crystallography A Rigaku R-Axis Rapid II curved imaging plate microdiffractometer using monochromatized MoKα radiation was used to collect X-ray diffraction data for ophirite. Ophirite is in the triclinic crystal system and in the space group P1. Its unit-cell dimensions were determined to be a = 11.9860(2) Å; b = 13.2073(2) Å; c = 17.689(1) Å; α = 69.690(5)°; β = 85.364(6)°; γ = 64.875(5)°; Z = 1. See also List of minerals References Natural materials Tungstate minerals Triclinic minerals Minerals in space group 1 Wikipedia Student Program Zinc minerals Manganese minerals
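The triclinic cell parameters listed above fix the unit-cell volume through the standard relation V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ), and with Z = 1 a calculated density can be compared against the measured specific gravity. A minimal sketch in plain Python; the element counts and rounded atomic masses below are derived here from the ideal formula and are assumptions for illustration, not tabulated values from the source:

import math

# Published triclinic cell parameters (angstroms and degrees)
a, b, c = 11.9860, 13.2073, 17.689
alpha, beta, gamma = 69.690, 85.364, 64.875

ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
volume = a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)  # in cubic angstroms

# Element counts taken from the ideal formula Ca2Mg4[Zn2Mn2(H2O)2(FeW9O34)2]·46H2O
counts = {"Ca": 2, "Mg": 4, "Zn": 2, "Mn": 2, "Fe": 2, "W": 18, "O": 116, "H": 96}
masses = {"Ca": 40.078, "Mg": 24.305, "Zn": 65.38, "Mn": 54.938,
          "Fe": 55.845, "W": 183.84, "O": 15.999, "H": 1.008}
formula_mass = sum(counts[el] * masses[el] for el in counts)  # g/mol

AVOGADRO = 6.02214e23
Z = 1
density = Z * formula_mass / (AVOGADRO * volume * 1e-24)  # g/cm^3

print(f"V = {volume:.0f} cubic angstroms, calculated density = {density:.2f} g/cm^3")
# Prints roughly 2370 cubic angstroms and about 4.06 g/cm^3, close to the measured 4.060 g/cm3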
Ophirite
Physics
979
78,173,272
https://en.wikipedia.org/wiki/Lu%2029-252
Lu 29-252 is a selective sigma σ2 receptor ligand which was under development for the treatment of anxiety disorders but was never marketed. It reached the preclinical stage of development prior to the discontinuation of its development. The drug was under development by Lundbeck. References Abandoned drugs Benzofurans Experimental psychiatric drugs Piperidines Sigma receptor ligands Spiro compounds
Lu 29-252
Chemistry
79
1,779,050
https://en.wikipedia.org/wiki/Guanidine%20nitrate
Guanidine nitrate is the chemical compound with the formula [C(NH2)3]NO3. It is a colorless, water-soluble salt. It is produced on a large scale and finds use as precursor for nitroguanidine, fuel in pyrotechnics and gas generators. Its correct name is guanidinium nitrate, but the colloquial term guanidine nitrate is widely used. Production and properties Although it is the salt formed by neutralizing guanidine with nitric acid, guanidine nitrate is produced industrially by the reaction of dicyandiamide (or calcium salt) and ammonium nitrate. It has been used as a monopropellant in the Jetex engine for model airplanes. It is attractive because it has a high gas output and low flame temperature. It has a relatively high monopropellant specific impulse of 177 seconds (1.7 kN·s/kg). Guanidine nitrate's explosive decomposition is given by the following equation: Uses Guanidine nitrate is used as the gas generator in automobile airbags. It is less toxic than the mixture used in older airbags of sodium azide, potassium nitrate and silica (NaN3, KNO3, and SiO2), and it is less explosive and sensitive to moisture compared to the very cheap ammonium nitrate (NH4NO3). Safety The compound is a hazardous substance, being an explosive and containing an oxidant (nitrate). It is also harmful to the eyes, skin, and respiratory tract. Notes External links Jetex: Propellants PhysChem: Guanidine Nitrate MSDS Guanidinium compounds Nitrates Monopropellants Explosive chemicals
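The two specific-impulse figures quoted above, 177 seconds and about 1.7 kN·s/kg, are the same quantity expressed in different units: multiplying the value in seconds by standard gravity gives the impulse delivered per kilogram of propellant. A quick check in plain Python:

G0 = 9.80665          # standard gravity in m/s^2
isp_seconds = 177     # specific impulse quoted above, in seconds

impulse_per_kg = isp_seconds * G0   # in N*s/kg, numerically equal to the effective exhaust velocity in m/s
print(f"{impulse_per_kg:.0f} N*s/kg, i.e. about {impulse_per_kg/1000:.1f} kN*s/kg")  # ~1736 N*s/kg, i.e. ~1.7 kN*s/kg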
Guanidine nitrate
Chemistry
359
3,567,426
https://en.wikipedia.org/wiki/Wave%20Organ
The Wave Organ is a sculpture located in San Francisco, California. It was constructed on the shore of San Francisco Bay in May 1986 by the Exploratorium, and more specifically, by installation artist and the Exploratorium artist-in-residence Peter Richards, who conceived and designed the organ, working with stonemason George Gonzales. The Wave Organ is dedicated to Frank Oppenheimer. Oppenheimer was the founding director of the Exploratorium, led the fundraising efforts for the Wave Organ, and died seven months before construction started. Location The Wave Organ is located at the end of a spit of land extending from the Golden Gate Yacht Club. There is a panoramic view of the city across the narrow channel into the St. Francis and Golden Gate yacht clubs, bounded on the left by the Fort Mason piers and to the right by a towering eucalyptus grove bordering Crissy Field. The park and trail to it are wheelchair accessible, with the trailhead at the Marina Green park. Mechanism Through a series of 25 PVC pipes, the Wave Organ interacts with the waves of the bay and conveys their sound to listeners at several different stations. The effects produced vary depending on the level of the tide but include rumbles, gurgles, sloshes, hisses, and other more typical wave sounds. The sound is best heard at high tide. The structure incorporates stone platforms and benches where visitors may sit near the mouths of pipes, listening. The stone pieces used in its construction were salvaged from the demolition of the Laurel Hill Cemetery in San Francisco. See also Blackpool High Tide Organ (in Blackpool, England, UK) Sea Organ (in Zadar, Croatia) Chillida's Comb of the Wind (in San Sebastián / Donostia, Basque Country, Spain, 1976) Biospherical Digital-Optical Aquaphone References External links Map: BlooSee Infopoint Wave Organ on The Traveling Twins 1986 establishments in California 1986 sculptures Coastal construction Hydraulophones Landmarks in San Francisco Organs (music) Outdoor sculptures in San Francisco Sound sculptures Stone sculptures in California
Wave Organ
Engineering
424
14,761,030
https://en.wikipedia.org/wiki/HMGB2
High-mobility group protein B2 also known as high-mobility group protein 2 (HMG-2) is a protein that in humans is encoded by the HMGB2 gene. Function This gene encodes a member of the non-histone chromosomal high-mobility group protein family. The proteins of this family are chromatin-associated and ubiquitously distributed in the nucleus of higher eukaryotic cells. In vitro studies have demonstrated that this protein is able to efficiently bend DNA and form DNA circles. These studies suggest a role in facilitating cooperative interactions between cis-acting proteins by promoting DNA flexibility. This protein was also reported to be involved in the final ligation step in DNA end-joining processes of DNA double-strand breaks repair and V(D)J recombination. References Further reading Loss of HMGB2 (High-mobility group protein box 2) during senescence blunts SASP (senescence-associated secretory phenotype) gene expression by allowing for spreading of repressive heterochromatin into SASP gene loci. This correlates with incorporation of SASP gene loci into SAHF (senescence-associated heterochromatin foci), which in turn represses SASP gene expression External links Transcription factors
HMGB2
Chemistry,Biology
265
274,110
https://en.wikipedia.org/wiki/Savage%20Love
Savage Love is a syndicated sex-advice column by Dan Savage. The column appears weekly in several dozen newspapers, mainly free newspapers in the US and Canada, but also newspapers in Europe and Asia. It started in 1991 with the first issue of the Seattle weekly newspaper The Stranger. Since October 2006, Savage has also recorded the Savage Lovecast, a weekly podcast version of the column, featuring telephone advice sessions. Podcasts are released every Tuesday. History In 1991, Savage was living in Madison, Wisconsin, and working as a manager at a local video store that specialized in independent film titles. There, he befriended Tim Keck, co-founder of The Onion, who announced that he was moving to Seattle to help start an alternative weekly newspaper titled The Stranger. Savage "made the offhand comment that forever altered [his] life: 'Make sure your paper has an advice column – everybody claims to hate 'em, but everybody seems to read 'em'." He typed up a sample column, and to his surprise Keck offered him the job. Until 1999, the format of the conversation began with an advice seeker saying, "Hey faggot", then asking their question. Savage's intent was reappropriation of the word into a positive description for gay guys. Using this word worked when the readers were LGBT, but as the column grew popular, Savage changed to a generic greeting to better match the expectations of the general public. Since 2002, he has written the column at Eppie Lederer's desk, which he, a "lifelong fan" of her Ann Landers column, bought at auction after the noted advice columnist died. Savage stated in a February 2006 interview in The Onions A.V. Club (which publishes his column) that he began the column with the express purpose of providing mocking advice to heterosexuals, since most straight advice columnists were "clueless" when responding to letters from gay people. Language During the run of Savage Love, Savage has popularized several neologisms and initialisms. He has also debunked several sexual neologisms for sex acts, including the "donkey punch", the "Dirty Sanchez", the "pirate", and the "hot Karl", concluding "they're all fictions." He has objected to use of the term "pussy" as an insult, saying that vaginas were wonderful, "popping out babies", and proposed "scrotum" (pl. "scrota") as an insult. Savage has also tried to reclaim many offensive words. For the first six years of the column, he had his readers address him with "Hey, faggot", as a comment on previous efforts to reclaim offensive words. He was criticized for this by some gay activists. After receiving criticism for use of the word "retarded"—considered by many to be an offensive slur against those with intellectual disabilities—Savage suggested "leotarded" as an alternative, because "leotard" rhymes with "retard". Campsite rule In any relationship, but particularly those with a large difference of age or experience between the partners, the older or more experienced partner has the responsibility to leave the younger or less experienced partner in at least as good a state (emotionally and physically) as before the relationship. The "campsite rule" includes things like leaving the younger or less experienced partner with no STDs, no unwanted pregnancies, and not overburdening them with emotional and sexual baggage. 
Tea and sympathy rule Shortly after a 2009 scandal in Portland, Oregon, involving openly gay mayor Sam Adams and Beau Breedlove, who had allegedly begun a sexual relationship with Adams almost immediately after turning 18, Savage created a companion rule to the "campsite rule", now known as the "tea and sympathy rule". The rule is a reference to a line in the play of the same name, in which a much older woman asks of a high-school-age boy, right before having sex with him: "Years from now, when you talk about this – and you will – be kind." Savage claimed in an article in The Portland Mercury that, while Adams followed the "campsite rule" – Breedlove did not claim that Adams had given him any diseases or caused him emotional trauma, and in fact still refers to Adams as a friend – Breedlove violated the "tea and sympathy" rule by making public statements that he knew could ruin Adams' career. CPOS "Cheating piece of shit", said of a cheater, but usually reserved for one who is chronic or abusive/passive-aggressive about it. DTMFA and ITMFA Savage often uses the expression "dump the mother-fucker already" (DTMFA), at the close of a response, recommending that the writer immediately end an abusive or worthless relationship. A reader of Savage Love suggested the initialisation ITMFA, a take on DTMFA, meaning "Impeach the Motherfucker Already!" The initialisation was coined in reference to the presidency of George W. Bush in 2006, but was reintroduced in 2017 in reaction to the presidency of Donald Trump. Starting in 2018 Savage, through his website, sold clothing with ITMFA on it. GGG Savage coined "GGG", "good, giving, and game", and it means one should strive to be good in bed, giving "equal time and equal pleasure" to one's partner, and game "for anything – within reason". The term inspired the "How GGG Are You? Test" on dating site OkCupid, and the invention of a cocktail. HTH "How'd that happen?", a mock-incredulous reply to those who write in and say they had certain (often sexual) things "happen to" them, as if they had no part or say in the incident, when they clearly did. Lifting luggage Following the "rent boy" allegations regarding George Rekers, who has widely promoted aversion therapy, Dan Savage, along with others including Stephen Colbert, promoted the use of the idiom "to lift [some]one's luggage", meaning to supply sexual pleasure to, or derive it from, one's partner. This originated from Rekers who, when outed, insisted he had hired the escort only to assist him with lifting his luggage. Rekers also claimed he "spent a great deal of time sharing scientific information on the desirability of abandoning homosexual intercourse" and "shared the gospel of Jesus Christ with him in great detail". Originally Savage suggested that "lifting my luggage" refer to listening to the speaker expound on the "desirability" of converting oneself from homosexual to heterosexual. Later, after several political humorists started employing "lifting your luggage" as an implicit or explicit reference to various sexual acts, Savage suggested that "whatever lifts your luggage" supplant "whatever floats your boat" in common parlance. Monogamish In a July 20, 2011 column, Savage coined the term "monogamish", applying it to his own relationship with his partner. The term describes couples who are "mostly" but not 100% monogamous; such couples have an understanding that allows for some amount of sexual activity outside the relationship. 
Savage believes that, of all the couples that people think are 100% monogamous, a lot of them are more "monogamish" than people realize. The term has since seen mainstream use. Pegging In 2001, Savage challenged readers of his column to coin a name for the sex act in which a woman uses a strap-on dildo to perform anal sex on her male partner. After multiple nominations and a reader vote, the verb "peg", popularized by the sex education movie Bend Over Boyfriend released in 1998, was chosen, with a 43% plurality over runners-up "bob" and "punt". Saddlebacking In 2009, after a controversy involving the Saddleback Church, the column defined "saddlebacking" as "the phenomenon of Christian teens engaging in unprotected anal sex in order to preserve their virginities". The term is a play on the word "barebacking", referring to sexual intercourse, especially anal sex between men, with no condom ("bare"). Santorum Savage reacted strongly to statements made about homosexuality by former United States Senator Rick Santorum in an April 2003 interview with the Associated Press. Santorum included gay sex as a form of deviant sexual behavior, along with incest, polygamy, and bestiality, that he said threatens society and the family; he said that he believed consenting adults do not have a constitutional right to privacy with respect to sexual acts. Savage invited his readers to create a sex-related definition for "santorum" to "memorialize the Santorum scandal [...] by attaching his name to a sex act that would make his big, white teeth fall out of his big, empty head." The winning definition was "the frothy mixture of lube and fecal matter that is sometimes the byproduct of anal sex." Savage set up a website to spread the term, inviting bloggers and others to link to it, which caused it to rise to the top of a Google search for Santorum's name. Tolyamorous A tolyamorous person is someone in a monogamous relationship who knows that their partner occasionally has sex with somebody else, and is willing to put up with it and turn a blind eye. The word is similar to polyamory, and is a portmanteau of the Latin words "tolerare" (to tolerate, to bear) and "amor" (love). Savage introduced this neologism on episode 900 of Savage Lovecast in January 2024. References External links Savage Love at The Stranger Spreading Santorum Relationship Training Courses 2006 podcast debuts Advice columns LGBTQ-related podcasts Santorum Dan Savage Sexology The Stranger (newspaper) Advice podcasts
Savage Love
Biology
2,054
66,182,782
https://en.wikipedia.org/wiki/Carrier%20aircraft%20used%20during%20World%20War%20II
Over 700 different aircraft models were used during World War II. At least 135 of these models were developed for naval use, including about 50 fighters and 38 bombers. Only about 25 carrier-launched aircraft models were used extensively for combat operations. Of these, nine were introduced during the war years after the Japanese attack on Pearl Harbor brought United States into the war, four by the United States Navy (USN) and three by the Royal Navy (RN) and two by the Imperial Japanese Navy (IJN). Principal carrier aircraft used The table below lists the principal carrier-launched fighters and bombers used during World War II. They are listed within each aircraft type in chronological order of their introduction to service. Allied reporting names such as "Val" and "Kate" are included for IJN aircraft. Neither Germany nor Italy put carriers or carrier-launched aircraft into service. Some Axis fighters are included in the table below for comparison with the Allied fighters that met them in combat. Sources: Notes: Values were obtained from multiple sources. Some reported values may not be directly comparable to others in the same column. Combat ranges for some of the aircraft could be extended using drop tanks containing supplemental fuel. Japanese carrier aircraft designations for planes introduced after 1922 typically adhered to the following conventions. The first letter indicated the aircraft type, "A" for fighter, "B" for torpedo bomber, "C" for reconnaissance, and "D" for dive bomber. The last letter indicated the manufacturer, "A" for Aichi, "M" for Mitsubishi, "N" for Nakajima, and "Y" for Yokosuka. The "Type" indicated the last two digits for the Japanese year that the plane was adopted for service. For example, the "D3A (Type 99)" was a dive bomber manufactured by Aichi and adapted for service during the Japanese Year 2599 (1939). The plane was actually introduced for combat in 1940. The Allies referred to it as a "Val". Relative aircraft capabilities were not the only factors that contributed to success or failure for carrier aircraft in combat. Attack coordination and tactics, along with pilot skill, determination, and willingness to self-sacrifice were at least as important and perhaps more so. For example, both the new, high-performing Grumman TBF Avenger and the obsolete Douglas TBD Devastator were slaughtered as they tried to deliver their torpedoes at Midway without fighter protection. But their pilots' dogged pressing of their attacks in the face of almost hopeless odds created opportunities for the dive bomber pilots that, by luck as well as by determination, arrived in a timely manner to sink four IJN carriers. The effectiveness of the special attacks ("kamikaze") late in the war using mostly outdated aircraft can also be attributed to pilot determination and self-sacrifice. Nonetheless, other things being equal, faster, more maneuverable aircraft with longer ranges and better armament contributed to successful combat outcomes. Carrier aircraft types, functions, and features Carrier aircraft types. The types of aircraft usually launched from aircraft carriers were fighters, torpedo bombers, and dive-bombers. Floatplanes were also launched from some carriers but were typically catapulted from cruisers and battleships. Land-based aircraft types were frequently launched from carriers when delivering them to forward bases, such as when Curtiss P-40 Warhawk fighters flew from carriers to newly captured land-bases during the Allied invasion of North Africa. 
Sometimes land-based aircraft were launched from carriers for special operations, such as the USN Doolittle Raid, when B-25s were launched for a raid on Tokyo. Some carrier aircraft served in dual roles, such as fighter-bomber and bomber-reconnaissance aircraft. Carrier aircraft functions. Torpedo and dive bombers attacked enemy warships, transports, merchant ships, and land installations. Fighters accompanied bombers on attack missions, protecting them during interceptions by enemy fighters. Fighters, maintained overhead in Combat Air Patrols (CAP), protected their carriers and other warships in the fleet by intercepting enemy bombers and by attacking submarines. Fighters and bombers were also widely used for reconnaissance and sometimes used for mine laying or for spotting to assist bombardment by warships. Carrier aircraft features. Size of aircrew. With few exceptions, fighters had a single crewmember, the pilot, while dive-bombers had two and torpedo bombers had three crewmembers. The RN valued a second crewmember in fighters for observation and navigation, as in the Fairey Fulmar and the Fairey Firefly. The Blackburn Roc was a "turret fighter" in which the second crewmember operated the turret. Armament. Fighters and bombers typically had two to four machine guns, sometimes six. These were mostly 7.7mm or 7.62mm (.303in), but the heavier 12.7mm (.50in) guns were fitted to some RN and USN aircraft. The IJN Zero fighter had the more destructive 20mm (.79in) cannon in addition to machine guns. Later in the war, RN and USN fighters also had 20mm cannon in addition to or instead of machine guns. After 1943, some fighters and bombers were also capable of firing 12.7 cm (5 in) rockets. Bombers also carried bombs or a torpedo, the maximum possible weight of which generally increased with new introductions during the war. Most fighters could carry a small bomb load. Self-sealing fuel tank. Self-sealing fuel tanks retarded or eliminated the flow of fuel from a tank that had been holed during combat. This was typically accomplished by incorporating a material that swelled when it came into contact with fuel. Many RN and USN carrier aircraft used this technology, but it involved adding weight, and the IJN was reluctant to sacrifice range and maneuverability for improved survivability. Protective armor. Like self-sealing fuel tanks, protective armor for the aircrew improved survivability at the cost of performance. The RN and USN armored their carrier aircraft, protecting the aircrew, but the IJN typically did not. Number of wings. Carrier aircraft introduced after 1937 were all monoplanes except for the biplane RN Fairey Albacore, which was an improved version of the Swordfish. The biplane Fairey Swordfish, introduced in 1936, was removed from front-line combat but, reassigned to anti-submarine convoy escort, served through the entire war. Folding wings. Monoplane carrier aircraft introduced after 1936 almost all had folding wings to reduce the space taken up in hangars. Exceptions included the Mitsubishi A5M "Claude" fighter and the Douglas SBD Dauntless and Yokosuka D4Y "Judy" dive bombers. Cockpit. Some early aircraft had open cockpits, but newer introductions typically had enclosed cockpits. Undercarriage. Most of the carrier aircraft introduced after 1937 had retractable landing gear to reduce drag. The two exceptions, introduced in 1940, were the RN Fairey Albacore torpedo bomber and the IJN Aichi D3A2 "Val" dive bomber. Pre- and early-war aircraft (1936-1941) Chinese land-based aircraft vs.
Japanese carrier aircraft During the short “Shanghai Incident” in 1932, Japanese carrier-launched fighters and bombers and water-launched floatplanes attacked areas in and around the city. In one engagement, a group of three Mitsubishi B1M torpedo bombers and three Nakajima A1N fighters were attacked by a lone Boeing 218 fighter flown by Robert McCawley Short, an American pilot training Chinese flyers. He was shot down and the following month, Japanese fighters were unsuccessfully opposed by Chinese Curtiss Hawk fighters, some of which were also shot down. Over the next five years, IJN introduced improved fighters including the Nakajima A2N in 1932, the first purely Japanese-designed fighter. In 1936, the Nakajima A4N entered service. The year after that, the Mitsubishi A5M "Claude" became the world's first low-wing, carrier-launched monoplane. It was highly maneuverable and the direct predecessor of the famed Mitsubishi A6M Zero introduced three years later. At the time the Second Sino-Japanese War began in July 1937, IJN had about 200 fighters and bombers and 62 floatplanes available to attack. The Imperial Japanese Army (IJA0 had force concentrations in China's north and agreed that IJN would be responsible for aerial operations over central China. Three aircraft carriers with a total of 136 up-to-date fighters and bombers were sent to the coast off Shanghai. The Republic of China Air Force (ROCAF) was just emerging from a loose confederation of aircraft and airmen controlled by individual Chinese warlords. It was a hodgepodge of about 300 land-based fighters, mostly supplied by the US, UK, and Italy, with which to intercept Japanese fighters and bombers. The USSR also began supplying China aircraft after the Sino-Soviet Non-Aggression Pact was agreed in August. The Curtiss Model 68 Hawk III biplane, built both in the US and China, was used by the ROCAF as a bomber as well as the primary fighter during the early part of the war. It took the brunt of the Japanese attack during the defense of Shanghai and battle of Nanking, helping to make flying aces of several Chinese pilots. Shortly after the conflict began, Gao Zhihang intercepted a land-based Japanese bomber group from Taiwan and shot down a Mitsubishi G3M medium bomber. This was the first aerial combat victory for the Chinese. In addition to Curtiss Hawk IIIs, the ROCAF opposed Japanese attacks with some older Curtis Hawk II, Boeing P-26 Peashooter, Gloster Gladiator, and Fiat CR.32 fighters. After attrition had taken its toll of these aircraft, they were replaced by Soviet Polikarpov I-15 biplane fighters and later by Polikarpov I-16 aircraft, the world's first low-wing monoplane fighter with retractable landing gear used in combat. The latter also had 20mm cannons, making it one of the most heavily armed fighters for the period. For the next three years, IJN and ROCAF pilots fought above Beijing, Shanghai, Nanjing, Wuhan and elsewhere as Japan sought unsuccessfully to subdue China. Sometimes IJN carrier air groups were sent temporarily to land bases. Both air forces suffered defeats and enjoyed victories. Losing ground, the Chinese shifted their capital westward and inland until establishing it at Chongqing in central China. By 1941, Japan held large portions of northern and coastal China but had been weakened by the battles for inland central China. 
In mid-September 1941, as the IJN began to focus on the possibility of a wider war in the Pacific, it turned over responsibility for the air war over China to the Imperial Japanese Army. Continued resistance by China's National Revolutionary Army led to a war of attrition that tied down large numbers of Japanese troops until the end of World War II in 1945. United States vs. Japanese carrier aircraft The design of Japanese carrier aircraft was consistent with their overall strategy of emphasizing the offense in order to win a short war before America's overwhelmingly superior production capacity could be brought to bear. Expecting to face a numerically superior fleet, Japan's strategy envisioned using aircraft to help neutralize this advantage by gradual attrition as the enemy USN fleet approached Japan. This required aircraft with extended ranges and striking power, which in turn meant having them be lighter and faster but with less protection. Accordingly, Japan's Mitsubishi A6M Zero fighter, Nakajima B5N "Kate" torpedo bomber, and Aichi D3A "Val" dive bomber were all lightly built with weight minimized by not providing cockpit armor to protect pilots or self-sealing fuel tanks to enable them to continue fighting after taking some hits. As a result of having greater range for its aircraft, the IJN would, during 1942, attack from 250 to 300 miles away compared to the USN, which would only do so from 200 miles away. Fighters At the time of the 1941 attack on Pearl Harbor, Japan had both the world's best fighter and best torpedo bomber. In addition, their aircraft were flown by the world's most extensively trained and experienced airmen, in part due to their engagement since 1937 in the war in China. The A6M Zero was fast, highly maneuverable, and could out-turn and out-climb the USN Grumman F4F Wildcat. Also, when enemy planes were approaching for an attack, it was important for fighters to get off their decks and reach an advantageous altitude quickly. The Zero could climb at 3,000 ft/minute and the Wildcat only 2,300 ft/minute. Experienced Zero pilots were initially very successful against the lower-performing Allied aircraft. Over the course of 1942, however, pilots in Wildcats developed aerial combat tactics such as the high-side pass and the Thach Weave. Exploiting these tactics coupled with greater aircraft survivability due to armor and self-sealing tanks, American pilots neutralized the Zero's advantages. Although better USN fighters were introduced later, the Wildcat served throughout the war. Over the course of 1942, Japan's substantial losses of her experienced pilots contributed to the Americans' gaining an upper hand in fighter combat. Torpedo bombers The Nakajima B5N "Kate" torpedo bomber was superior to the obsolete USN Douglas TBD Devastator in speed, rate of climb, and range. Kate torpedoes contributed to sinking USN fleet carriers Lexington (Battle of the Coral Sea), Yorktown (Midway), and Hornet (Santa Cruz Islands), all during 1942. Devastator torpedoes contributed to sinking the IJN carrier Shōhō at the Coral Sea, but Zero fighters slaughtered the Devastators at Midway where they attacked without fighter protection. Only six of the 41 Devastators launched returned to their carriers. Though some closed to targets and launched torpedoes (the Mark 13 torpedo), the torpedoes ran deep or failed to explode. Dive bombers The US had a superior dive bomber in the Douglas SBD-5 Dauntless compared to Japan's Aichi D3A2 "Val". 
Benefitting from fortunate timing, bombs from the Dauntless sank all four of the Japanese carriers lost at Midway. Nonetheless, the "Vals" served throughout the war and sank more Allied warships than any other Axis aircraft. This included sinking HMS Hermes, the first carrier to be sunk by carrier aircraft. United Kingdom vs. German and Italian land-based aircraft Fighters When war broke out in 1939 in the Atlantic Theater, the RN had only recently reacquired responsibility for their carrier-launched aircraft. For the previous two decades, the Royal Air Force (RAF) had responsibility for all air operations, and development of improved carrier-launched aircraft had been neglected in favor of land-based fighters for the defence of the UK and bombers for the offensive. As a result, the RN entered the war with mostly relatively slow, limited-range biplane fighters and bombers that were inferior to USN and IJN carrier aircraft of the same age. At the time of Britain's evacuation from Norway, her Gloster Sea Gladiator fighters, Fairey Swordfish torpedo bombers, and Blackburn Skua fighter-bombers were also inferior to the land-based German aircraft. Nonetheless, the Sea Gladiators did succeed in shooting down a couple of Messerschmitt Bf 110 fighters during the Norwegian campaign in early 1940. Gladiators and Sea Gladiators did take part in the Battle of Britain that autumn, but it was the numerous land-based Supermarine Spitfires and Hawker Hurricanes and their pilots that provided the principal aerial defense for the UK. Sea Gladiators also assisted during the defense of Malta, where the few in operation shot down Italian aircraft. Gladiators were removed from front-line service around Britain by 1941. The more modern, monoplane Blackburn Roc was able to shoot down a Junkers Ju 88 fighter-bomber attacking a convoy in 1940. However, the Roc was no match for German Messerschmitt Bf 109 or Focke-Wulf Fw 190 fighters and could not even perform as well as Britain's own Blackburn Skua. Considered one of the worst fighters of the war, the Roc was, by late 1940, taken out of front-line service and consigned to training, target towing, and rescue duties. A more modern monoplane fighter, the Fairey Fulmar, was introduced in 1940 to replace the obsolete Sea Gladiator. It was a two-seat design and was large and less nimble than the Axis fighters it opposed. Nonetheless, while protecting convoys to Malta, Fulmars shot down ten Italian bombers and six Axis fighters. During the war, they provided air cover for the raids on Taranto and Petsamo, protected convoys to Russia, and supported invasions of French North Africa and Italy. Fulmars shadowed the German battleship Bismarck, enabling Fairey Swordfish torpedo bombers to catch up with her. The UK upgraded their naval fighter squadrons in 1941 by adapting the highly successful, land-based Hawker Hurricane to carrier use. The Hawker Sea Hurricane performed well while protecting the Malta convoys and, operating from escort carriers, many Atlantic convoys. Dive bombers The monoplane Blackburn Skua two-man fighter-dive bomber was introduced in late 1938. It sank the German cruiser Königsberg during the German invasion of Norway, the first major warship sunk by a dive bomber in combat. Skuas provided air cover during the Dunkirk evacuation and served in the Mediterranean. Like the Roc, however, they fared poorly against the higher-performance land-based Messerschmitt Bf 109, and were withdrawn from front-line service during 1941. 
Torpedo bombers The biplane Fairey Swordfish torpedo bomber, affectionately referred to as "Stringbag" by her aircrews, was introduced in 1936. It was an archaic-looking biplane with cloth-covered wings, open cockpit and fixed landing gear. It had been designed as a torpedo-spotter-reconnaissance aircraft and emerged as the standard naval attack aircraft serving as both a dive-bomber and torpedo bomber. In the first airborne torpedo attack of the war, Swordfish damaged a German destroyer at Trondheim. Later, Swordfish crippled the French battleship Dunkerque during the Attack on Mers-el-Kébir, disabled three Italian battleships during the Battle of Taranto and attacked the German battleship Bismarck through gale-force storms in the Atlantic, ultimately landing the torpedo that doomed her. Swordfish dropped depth charges and laid mines as well. The Fairey Albacore was introduced in 1940 to replace the Swordfish. Both were replaced in the front line by the Barracuda, but the Swordfish was retained to serve in anti-submarine and bombardment spotting assignments throughout the war. In the final accounting, the Swordfish destroyed more tonnage of Axis shipping than any other Allied aircraft. Later introductions (1942-1943) Over the course of the war, new aircraft introductions tended to be heavier with more powerful engines. They had greater speed, a faster rate of climb, and greater range than their predecessors. In 1944, the IJN would initiate an attack from 350 to 400 miles, 100 miles further away than in 1942. The RN would send out its Swordfish from 250 to about 300 miles and the USN from 250 miles out. However, aircraft "wing loading", the mass of the aircraft divided by the surface area of its wing, also tended to increase, suggesting poorer maneuverability for these larger planes. United States aircraft Fighters The Vought F4U Corsair fighter-bomber introduced in 1942 had almost twice the horsepower of the Wildcat, was faster, had greater range and a faster rate of climb, and was capable of carrying a 4,000 lb total load of bombs and High Velocity Aircraft Rockets. It was judged to be relatively difficult to land on a carrier, however, and was initially released by the USN only for use by land-based Marine units. The Grumman F6F Hellcat fighter-bomber introduced in 1943 was also faster than the Wildcat, had greater range, a rate of climb comparable to the IJN Zero, and was capable of carrying a 4,000 lb total load of bombs, torpedoes, and rockets. Both the Corsair and the Hellcat were faster than the Zero and, having armor protection and self-sealing fuel tanks, could take much more punishment. With the Corsair initially relegated to land-based use, the Hellcat became the mainstay of the USN Fast Carrier Task Groups of 1944–45. She was the most successful fighter of the war, with her pilots shooting down over 5,000 enemy aircraft at a 19:1 ratio of victories to losses. The Corsair was deployed for USN carrier squadrons after the British refined the aircraft and landing procedures for it. Torpedo bombers. After their devastating torpedo bomber losses at Midway, the USN quickly replaced the Devastator with the faster Grumman TBF Avenger. It also had twice the range of the Devastator, in part because the Avenger's torpedo was carried inside the plane, reducing drag. Like the Devastator, it had attacked the Japanese fleet at Midway without fighter protection, and only one of the six attacking planes returned to its base at the Naval Air Station on Midway. 
The Avenger ultimately became the most effective and widely used torpedo bomber of the war and functioned even more often as a level bomber than as a torpedo bomber. Avengers operated from both fleet carriers and escort carriers and were highly effective submarine killers in both the Atlantic and Pacific theaters. They shared credit for sinking the Japanese super-battleships IJN Yamato and Musashi. Dive bombers The Curtiss SB2C Helldiver dive bomber introduced in 1942 was faster than the Dauntless but regarded as difficult to handle. Making necessary improvements delayed its first use in combat until late 1943. By this time, the Allies were moving away from an aircraft type dedicated to dive bombing. Air-to-ground rockets had been introduced that offered the accuracy that formerly had been the primary advantage of the dive bomber over level bombers. Such rockets could be fired from the other types of carrier aircraft and were ultimately carried by Hellcat fighters, Corsair fighter-bombers, and Avenger and Swordfish torpedo bombers as well as Helldiver dive bombers. Nonetheless, the Helldiver became widely used and participated in battles over the Marianas, Philippines, Formosa, Iwo Jima, and Okinawa and sank more tonnage of Japanese shipping than any other aircraft during the war. It shared credit with Avengers for sinking IJN Yamato and Musashi. United Kingdom aircraft Fighters In 1942, the British introduced another naval fighter by adapting a highly successful land-based aircraft, the Spitfire, to carrier use. The Supermarine Seafire was faster than its predecessors and began replacing Hawker Sea Hurricanes for front-line service. In light wind, however, it was subject to crash landings. Having its engine-cooling air inlets on the underside of the fuselage also made ditching more dangerous for the pilot, as was the case with the Hurricane. The Seafire's range was limited, but could be extended using drop tanks. Seafires supported the Allied invasions of North Africa, Sicily, mainland Italy, and southern France. Temporarily assigned to land bases, they also supported the invasion of Normandy. In the Pacific as part of the British Pacific Fleet, Seafires were used for CAP. Overall, the adaptations of land aircraft had inferior performance to purpose-built carrier aircraft. Introduced late in the war, the Fairey Firefly was superior in performance and firepower to its predecessor, the Fairey Fulmar. It was conceived as early as 1938, but prolonged development delayed its combat use until mid-1944, by which time its performance had been eclipsed by both Axis and Allied fighters. The Firefly was used for ground attack, reconnaissance and anti-submarine work as well as serving as a fighter aircraft. The Firefly participated in operations against the German battleship Tirpitz in Norway in July 1944. During operations against the Japanese oil refineries at Sumatra in early 1945, a Firefly shot down a Nakajima Ki-43 ("Oscar") fighter. Fireflies also supported carrier-based actions against Japanese shipping and against positions in the Caroline Islands and the Japanese home islands. Fewer than 800 were produced during the war years. Bombers The Fairey Barracuda torpedo bomber/dive bomber was introduced in early 1943 and was the only RN aircraft designed to withstand the stresses of dive bombing since the retirement of the Skua. As the war progressed, the RN increasingly used US-made, purpose-built Hellcats, Corsairs, and Avengers for carrier operations in both the Atlantic and Pacific theaters. 
Japanese aircraft Fighters The Zero was among the world's best fighters at the time of the raid on Pearl Harbor. It was little improved over the course of the war, while the Allies introduced more powerful planes with better protection. Over time, the Zero lost its competitive advantage due to development by the Allies of more capable aircraft as well as improved tactics. The IJN introduced a land-based fighter, the Kawanishi N1K1-J, in 1944 that had the power, maneuverability, and ruggedness to compete with the late-war Allied fighters. An improved version of the carrier-launched Zero, the Mitsubishi A6M6, included self-sealing fuel tanks, armor plate protection for the pilot, and a more powerful engine, but the additions made it heavier and less nimble. Only one prototype was built before the war ended. With its diminished value as a competitive fighter, the Zero became the first aircraft to be used as a kamikaze special attack plane and was used more than any other aircraft for this purpose. Torpedo bombers The Nakajima B6N "Jill" torpedo bomber incorporated considerable improvements over the Nakajima B5N "Kate" in speed and range, but its introduction was delayed by development and production problems. By the time the Jill was introduced, the Allied thrust up the Solomon Islands caused IJN leadership, in late 1943, to transfer many carrier aircraft from their first-line carriers to land-based service out of Rabaul. With the Allies having firmly established air superiority in the area, only a fraction of these planes made it back to their carriers two weeks later. In the following year, carrier-based Jills suffered huge losses at the Battle of the Philippine Sea in mid-1944. With so few IJN carriers remaining afloat after the Battle of Leyte Gulf, the Jills became mostly land-based and by early 1945 were in use as kamikazes. Dive bombers IJN plans to upgrade carrier bombers were also frustrated by development and production delays. The Yokosuka D4Y3 "Judy" dive bomber was introduced in mid-1942 and intended to replace the slower "Val" by the end of that year, but the Val was kept in service until 1944. The Judy could outrun the USN Wildcat but, by the time the Judy came into wide use, the even faster USN Hellcat had been introduced. Many Judys were among the several hundred IJN planes lost during the Battle of the Philippine Sea. Nonetheless, it was bombs from a Judy, then operating from a land base in the Philippines, that sank the light carrier USS Princeton during the Battle of Leyte Gulf in October 1944. A bomb from another Judy almost sank USS Franklin in March 1945. Kamikaze special attack aircraft Japanese use of "kamikaze" suicide aircraft began at the Leyte Gulf battle, and the D4Y3 "Judy" served in that role, damaging several Allied fleet and escort carriers. As the Allies approached Japan in early 1945, the IJN introduced the Yokosuka D4Y4, specifically for use as a kamikaze. Operating from land bases, this version caused damage to several Allied carriers. By the end of the war, all six of the monoplane IJN carrier aircraft models used extensively during the war had also been engaged as kamikazes. Non-kamikaze aircraft models continued in use, often providing escort protection for kamikazes en route to enemy fleets. Footnotes Citations Science and technology during World War II Aircraft carriers
Carrier aircraft used during World War II
Technology
5,735
8,786,058
https://en.wikipedia.org/wiki/Dynamic%20pricing
Dynamic pricing, also referred to as surge pricing, demand pricing, time-based pricing, or variable pricing, is a revenue management pricing strategy in which businesses set flexible prices for products or services based on current market demands. It usually entails raising prices during periods of peak demand and lowering prices during periods of low demand. As a pricing strategy, it encourages consumers to make purchases during periods of low demand (such as buying tickets well in advance of an event or buying meals outside of lunch and dinner rushes) and disincentivizes them during periods of high demand (such as using less electricity during peak electricity hours). In some sectors, economists have characterized dynamic pricing as having welfare improvements over uniform pricing and contributing to more optimal allocation of limited resources. Its usage often stirs public controversy, as people frequently think of it as price gouging. Businesses are able to change prices based on algorithms that take into account competitor pricing, supply and demand, and other external factors in the market. Dynamic pricing is a common practice in several industries such as hospitality, tourism, entertainment, retail, electricity, and public transport. Each industry takes a slightly different approach to dynamic pricing based on its individual needs and the demand for the product. Methods Cost-plus pricing Cost-plus pricing is the most basic method of pricing. A store will simply charge consumers the cost required to produce a product plus a predetermined amount of profit. Cost-plus pricing is simple to execute, but it only considers internal information when setting the price and does not factor in external influencers like market reactions, the weather, or changes in consumer value. A dynamic pricing tool can make it easier to update prices, but will not make the updates often if the user doesn't account for external information like competitor market prices. Due to its simplicity, this is the most widely used method of pricing, with around 74% of companies in the United States employing this dynamic pricing strategy. Although widely used, its usage is skewed: companies facing a high degree of competition use this strategy the most, while companies that deal with manufacturing tend to use it the least. Pricing based on competitors Businesses that want to price competitively will monitor their competitors' prices and adjust accordingly. This is called competitor-based pricing. In retail, the competitor that many companies watch is Amazon, which changes prices frequently throughout the day. Amazon is a market leader in retail that changes prices often, which encourages other retailers to alter their prices to stay competitive. Such online retailers use price-matching mechanisms like price trackers. The retailers give the end user a price-match option, and upon selecting it, an online bot searches for the lowest price across various websites and offers a price lower than the lowest. Such pricing behavior depends on market conditions, as well as a firm's planning. Although a firm existing within a highly competitive market is compelled to cut prices, that is not always the case. In the case of high competition, yet a stable market, and a long-term view, it was predicted that firms will tend to cooperate on a price basis rather than undercut each other. 
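The two methods above lend themselves to a compact illustration. The following minimal sketch combines a cost-plus floor with a simple competitor-matching rule; the unit cost, markup, undercut amount, and competitor prices are invented assumptions for illustration, not figures from the article or any retailer's actual algorithm.

```python
# Hypothetical sketch: cost-plus floor combined with competitor matching.

def cost_plus_price(unit_cost: float, markup: float = 0.25) -> float:
    """Cost-plus pricing: production cost plus a predetermined profit margin."""
    return round(unit_cost * (1 + markup), 2)

def competitor_based_price(own_floor: float, competitor_prices: list[float],
                           undercut: float = 0.01) -> float:
    """Undercut the lowest observed competitor price slightly,
    but never go below the cost-plus floor."""
    lowest = min(competitor_prices)
    return round(max(own_floor, lowest - undercut), 2)

floor = cost_plus_price(unit_cost=7.00)                      # 8.75
price = competitor_based_price(floor, [10.49, 9.95, 11.20])  # 9.94
print(floor, price)
```

A repricing tool of this kind only becomes "dynamic" in the sense described above if the competitor prices are refreshed frequently; with stale inputs it degenerates into plain cost-plus pricing.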
Pricing based on value or elasticity Ideally, companies should ask a price for a product that is equal to the value the consumer attaches to it. This is called value-based pricing. As this value can differ from person to person, it is difficult to uncover the perfect value and have a differentiated price for every person. However, consumers' willingness to pay can be used as a proxy for the perceived value. With the price elasticity of products, companies can calculate how many consumers are willing to pay for the product at each price point. Products with high elasticities are highly sensitive to changes in price, while products with low elasticities are less sensitive to price changes (ceteris paribus). Consequently, products with low elasticity are typically valued more by consumers if everything else is equal. The dynamic aspect of this pricing method is that elasticities change with respect to the product, category, time, location, and retailers. With the price elasticity of products and the margin of the product, retailers can use this method with their pricing strategy to aim for volume, revenue, or profit maximization strategies. Bundle pricing There are two types of bundle pricing strategies: one from the consumer's point of view, and one from the seller's point of view. From the seller's point of view, an end product's price depends on whether it is bundled with something else; which bundle it belongs to; and sometimes on which customers it is offered to. This strategy is adopted by print-media houses and other subscription-based services. The Wall Street Journal, for example, offers a standalone price if an electronic mode of delivery is purchased, and a discount when it is bundled with print delivery. Time-based Many industries, especially online retailers, change prices depending on the time of day. Most retail customers shop during weekly office hours (between 9 AM and 5 PM), so many retailers will raise prices during the morning and afternoon, then lower prices during the evening. Time-based pricing of services such as provision of electric power includes: Time-of-use pricing (TOU pricing), whereby electricity prices are set for a specific time period on an advance or forward basis, typically not changing more often than twice a year. Prices paid for energy consumed during these periods are pre-established and known to consumers in advance, allowing them to vary their usage in response to such prices and manage their energy costs by shifting usage to a lower-cost period, or reducing their consumption overall (demand response). Critical peak pricing, whereby time-of-use prices are in effect except for certain peak days, when prices may reflect the costs of generating and/or purchasing electricity at the wholesale level. Real-time pricing, whereby electricity prices may change as often as hourly (exceptionally more often). Prices may be signaled to a user on an advanced or forward basis, reflecting the utility's cost of generating and/or purchasing electricity at the wholesale level; and Peak-load reduction credits, for consumers with large loads who enter into pre-established peak-load-reduction agreements that reduce a utility's planned capacity obligations. Peak fit pricing is best used for products that are inelastic in supply, where suppliers are fully able to anticipate demand growth and thus be able to charge differently for service during systematic periods of time. 
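As a rough illustration of the time-of-use schedules described above, the sketch below encodes an invented TOU tariff and prices a daily consumption profile against it; the period boundaries, rates, and consumption figures are assumptions for illustration, not any utility's actual schedule.

```python
# Hypothetical time-of-use (TOU) tariff lookup and daily cost calculation.

TOU_RATES = [          # (start_hour, end_hour, $ per kWh)
    (0, 7, 0.08),      # overnight off-peak
    (7, 16, 0.15),     # daytime mid-peak
    (16, 21, 0.30),    # evening peak
    (21, 24, 0.08),    # late-evening off-peak
]

def tou_rate(hour: int) -> float:
    """Return the pre-announced rate for the period containing `hour` (0-23)."""
    for start, end, rate in TOU_RATES:
        if start <= hour < end:
            return rate
    raise ValueError("hour must be in 0..23")

def daily_cost(hourly_kwh: list[float]) -> float:
    """Cost of a 24-entry hourly consumption profile under the TOU schedule."""
    return sum(kwh * tou_rate(hour) for hour, kwh in enumerate(hourly_kwh))

flat = [1.0] * 24                                          # no load shifting
shifted = [1.4] * 7 + [1.0] * 9 + [0.3] * 5 + [1.2] * 3    # peak usage moved off-peak
print(round(daily_cost(flat), 2), round(daily_cost(shifted), 2))
```

Because the rates are fixed and announced in advance, the consumer's only lever is load shifting, which is exactly the demand response behaviour the tariff is meant to encourage.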
A utility with regulated prices may develop a time-based pricing schedule on analysis of its long-run costs, such as operation and investment costs. A utility such as electricity (or another service), operating in a market environment, may be auctioned on a competitive market; time-based pricing will typically reflect price variations on the market. Such variations include both regular oscillations due to the demand patterns of users; supply issues (such as availability of intermittent natural resources like water flow or wind); and exceptional price peaks. Price peaks reflect strained conditions in the market (possibly augmented by market manipulation, as during the California electricity crisis), and convey a possible lack of investment. Extreme events include the default by Griddy after the 2021 Texas power crisis. By industry Hospitality Time-based pricing is the standard method of pricing in the tourism industry. Higher prices are charged during the peak season, or during special event periods. In the off-season, hotels may charge only the operating costs of the establishment, whereas investments and any profit are gained during the high season (this is the basic principle of long-run marginal cost pricing: see also long run and short run). Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment. The goal of dynamic pricing in this industry is to find the highest price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing. This form of price discrimination is used to try to maximize revenue based on the willingness to pay of different market segments. It features price increases when demand is high and decreases to stimulate demand when it is low. Having a variety of prices based on the demand at each point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay. Transportation Airlines change prices often depending on the day of the week, time of day, and the number of days before the flight. For airlines, dynamic pricing factors in different components such as: how many seats a flight has, departure time, and average cancellations on similar flights. A 2022 study in Econometrica estimated that dynamic pricing was beneficial for "early-arriving, leisure consumers at the expense of late-arriving, business travelers. Although dynamic pricing ensures seat availability for business travelers, these consumers are then charged higher prices. When aggregated over markets, welfare is higher under dynamic pricing than under uniform pricing." Congestion pricing is often used in public transportation and road pricing, where a higher price at peak periods is used to encourage more efficient use of the service or time-shifting to cheaper or free off-peak travel. For example, the San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be traveling. This is an effective way to boost revenue when demand is high, while also managing demand since drivers unwilling to pay the premium will avoid those times. The London congestion charge discourages automobile travel to Central London during peak periods. The Washington Metro and Long Island Rail Road charge higher fares at peak times. 
The tolls on the Custis Memorial Parkway vary automatically according to the actual number of cars on the roadway, and at times of severe congestion can reach almost $50. Dynamic pricing is also used by Uber and Lyft. Uber's system for "dynamically adjusting prices for service" measures supply (Uber drivers) and demand (passengers hailing rides by use of smartphones), and prices fares accordingly. Ride-sharing companies such as Uber and Lyft have increasingly incorporated dynamic pricing into their operations. This strategy enables these businesses to offer the best prices for both drivers and passengers by adjusting prices in real time in response to supply and demand. When there is a strong demand for rides, rates go up to encourage more drivers to offer their services, and when there is a low demand, prices go down to draw in more passengers. Professional sports Some professional sports teams use dynamic pricing structures to boost revenue. Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues. Sports that are outdoors have to factor weather into pricing strategy, in addition to the date of the game, date of purchase, and opponent. Tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more. Dynamic pricing was first introduced to sports by Qcue, a start-up software company from Austin, Texas, and Major League Baseball club the San Francisco Giants. The San Francisco Giants implemented a pilot of 2,000 seats in the View Reserved and Bleachers and moved on to dynamically pricing the entire venue for the 2010 season. Qcue currently works with two-thirds of Major League Baseball franchises, not all of which have implemented a full dynamic pricing structure, and for the 2012 postseason, the San Francisco Giants, Oakland Athletics, and St. Louis Cardinals became the first teams to dynamically price postseason tickets. While behind baseball in terms of adoption, the National Basketball Association, National Hockey League, and NCAA have also seen teams implement dynamic pricing. Outside of the U.S., it has since been adopted on a trial basis by some clubs in the Football League. Scottish Premier League club Heart of Midlothian introduced dynamic pricing for the sale of their season tickets in 2012, but supporters complained that they were being charged significantly more than the advertised price. Retail Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals. Supermarkets often use dynamic pricing strategies to manage perishable inventory, such as fresh produce and meat products, that have a limited shelf life. By adjusting prices based on factors like expiration dates and current inventory levels, retailers can minimize waste and maximize revenue. Additionally, the widespread adoption of electronic shelf labels in grocery stores has made it easier to implement dynamic pricing strategies in real time, enabling retailers to respond quickly to changing market conditions and consumer preferences. These labels also make it easier for grocery stores to mark up high-demand items (e.g. making it more expensive to purchase ice in warmer weather). Theme parks Theme parks have also recently adopted this pricing model. Disneyland and Disney World adopted this practice in 2016, and Universal Studios followed suit. 
Since the supply of parks is limited and new rides cannot be added based on the surge of demand, the model followed by theme parks with regard to dynamic pricing resembles that followed by the hotel industry. During summertime, when demand is rather inelastic, the parks charge higher prices, whereas ticket prices in winter are less expensive. Criticism Dynamic pricing is often criticized as price gouging. Dynamic pricing is widely unpopular among consumers, as some feel it tends to favour particular buyers. While the intent of surge pricing is generally driven by demand-supply dynamics, some instances have proven otherwise. Some businesses utilise modern technologies (big data and IoT) to adopt dynamic pricing strategies, where collection and analysis of real-time private data occur almost instantaneously. As modern technology for data analysis is developing rapidly, enabling detection of one's browsing history, age, gender, location and preferences, some consumers fear "unwanted privacy invasions and data fraud" as the extent to which their information is used is often undisclosed or ambiguous. Even with firms' disclaimers stating private information will only be used strictly for data collection and promising no third-party distribution will occur, a few cases of corporate misconduct can disrupt consumers' perceptions. Some consumers were simply skeptical of general information collection outright due to the potential for "data leakages and misuses", possibly impacting suppliers' long-term profitability through reduced customer loyalty. Consumers can also develop perceptions of price fairness or unfairness when different individuals are offered different prices for the same products. Studies found that the ease of learning other individuals' purchase prices led consumers to perceive price unfairness and report lower satisfaction when others paid less than they did. However, when consumers were price-advantaged, development of trust and increased repurchase intentions were observed. Other research indicated that price fairness perceptions varied depending on consumers' privacy sensitivity and the nature of the dynamic pricing used, such as individual pricing, segment pricing, location data pricing and purchase history pricing. Amazon Amazon engaged in price discrimination for some customers in the year 2000, showing different prices at the same time for the same item to different customers, potentially violating the Robinson–Patman Act. When this incident was criticised, Amazon issued a public apology with refunds to almost 7000 customers but did not cease the practice. During the COVID-19 pandemic, prices of certain items in high demand were reported to shoot up to quadruple their original price, garnering negative attention. Although Amazon denied claims of any such manipulation and blamed a few sellers for shooting up prices for essentials such as sanitizers and masks, prices of essential products 'sold by Amazon' had also seen a hefty rise. Amazon claimed this was a result of software malfunction. Uber Uber's surge pricing has also been criticized. In 2013, when New York was in the midst of a storm, Uber users saw fares go up to eight times the usual rates. The incident attracted backlash from public figures, with Salman Rushdie among others publicly criticizing the move. After this incident, the company started placing caps on how high surge pricing can go during times of emergency, starting in 2015. 
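To make the surge mechanism and the later price caps concrete, here is a simplified, capped surge-multiplier sketch; the demand-to-supply ratio formula, the base fare, and the 3x cap are invented for illustration and are not Uber's or Lyft's actual algorithm.

```python
# Hypothetical capped surge multiplier driven by a demand/supply ratio.

def surge_multiplier(riders_waiting: int, drivers_available: int,
                     cap: float = 3.0) -> float:
    """Raise the fare multiplier as demand outstrips supply, up to a cap."""
    if drivers_available <= 0:
        return cap
    ratio = riders_waiting / drivers_available
    surge = max(1.0, ratio)          # no discount below the base fare here
    return round(min(surge, cap), 2)

def fare(base_fare: float, riders_waiting: int, drivers_available: int) -> float:
    return round(base_fare * surge_multiplier(riders_waiting, drivers_available), 2)

print(fare(10.0, riders_waiting=40, drivers_available=50))   # 10.0 (no surge)
print(fare(10.0, riders_waiting=120, drivers_available=50))  # 24.0 (2.4x)
print(fare(10.0, riders_waiting=400, drivers_available=50))  # 30.0 (capped at 3x)
```

In this toy version the cap simply truncates the multiplier, which mirrors the emergency caps described above: demand information is still collected, but the fare a rider can be charged is bounded.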
Drivers have been known to hold off on accepting rides in an area until surge pricing forces fares up to a level satisfactory to them. Wendy's In 2024, Wendy's announced plans to test dynamic pricing in certain American locations during 2025. This pricing method was included with plans to redesign menu boards and these changes were announced to stakeholders. The company received significant online backlash for this decision. In response, Wendy's stated that the intended implementation was limited to reducing prices during low traffic periods. See also Hedonic regression Pay what you want Demand shaping References Pricing Economics of regulation Economics and time
Dynamic pricing
Physics
3,382
52,422,510
https://en.wikipedia.org/wiki/Methoxydienone
Methoxydienone, also known as methoxygonadiene, as well as 3-methoxy-17-dehydro-18-methyl-19-nor-δ2,5(10)-testosterone or 13β-ethyl-3-methoxygona-2,5(10)-dien-17-one, is a synthetic anabolic-androgenic steroid (AAS) and progestogen of the 19-nortestosterone group related to levonorgestrel which was never marketed. It was synthesized in the 1960s and 1970s by chemist Herchel Smith and his colleagues while they were developing progestins for use in oral contraceptives. The drug is a potent anabolic when administered via injection with an anabolic:androgenic ratio of approximately 54:27 relative to testosterone propionate and 90:625 relative to nandrolone. Methoxydienone is not 17α-alkylated (instead featuring a ketone at the C17 position) and no data exist regarding its oral activity in humans. It has been sold on the Internet as a designer steroid. See also Bolandione Dehydroepiandrosterone Dienedione References Abandoned drugs Androgen ethers Anabolic–androgenic steroids Dienes Estranes Ketones Progestogens Chemistry articles by quality
Methoxydienone
Chemistry
291
76,056,447
https://en.wikipedia.org/wiki/Coriamyrtin
Coriamyrtin is a toxic γ-lactone naturally present in a multitude of plants. Natural occurrence Coriamyrtin can be found in Scurrula parasitica, Coriaria microphylla, and certain other plants. Toxicity Coriamyrtin is a convulsant. It appears to act via antagonism of GABAA receptors. Poisoning is usually from ingestion of parts of the plants containing it. A case of poisoning was able to be treated with repeated administration of diazepam, an anticonvulsant. References GABAA receptor antagonists Heterocyclic compounds with 4 rings Epoxides Spiro compounds Gamma-lactones Isopropenyl compounds Tertiary alcohols Plant toxins Convulsants
Coriamyrtin
Chemistry
156
8,354,338
https://en.wikipedia.org/wiki/James%20Crichton-Browne
Sir James Crichton-Browne FRS FRSE (29 November 1840 – 31 January 1938) was a leading Scottish psychiatrist, neurologist and eugenicist. He is known for studies on the relationship of mental illness to brain injury and for the development of public health policies in relation to mental health. Crichton-Browne's father was the asylum reformer Dr William A.F. Browne, a prominent member of the Edinburgh Phrenological Society and, from 1838 until 1857, the superintendent of the Crichton Royal at Dumfries where Crichton-Browne spent much of his childhood. Crichton-Browne edited the highly influential West Riding Lunatic Asylum Medical Reports (six volumes, 1871–76). He was one of Charles Darwin's leading collaborators – on The Expression of the Emotions in Man and Animals (1872) – and, like Duchenne de Boulogne (at the Salpêtrière in Paris) and Hugh Welch Diamond in Surrey, was a pioneer of neuropsychiatric photography. He based himself at the West Riding Lunatic Asylum in Wakefield from 1867 to 1875, and there he taught psychiatry to students from the nearby Leeds School of Medicine and, with David Ferrier, transformed the asylum into a world centre for neuropsychology. Crichton-Browne then served as Lord Chancellor's Visitor from 1875 till 1922. Throughout his career, Crichton-Browne emphasised the asymmetrical aspects of the human brain and behaviour; and also, like Emil Kraepelin and Alois Alzheimer, made some influential predictions about the neurological changes associated with severe psychiatric disorders. Crichton-Browne was also a forceful advocate of eugenics, and in 1908 became the first president of the Eugenics Education Society. In 1920, Crichton-Browne delivered the first Maudsley Lecture to the Medico-Psychological Association in the course of which he outlined his recollections of Henry Maudsley; and in the last fifteen years of his life, he published seven volumes of reminiscences. In 2015, UNESCO listed Crichton-Browne's clinical papers and photographs (about 5000 items in all) as items of international cultural importance. Family background and education 1840–1866 Crichton-Browne was born in Edinburgh at the family home of his mother, Magdalene Howden Balfour. She was the daughter of Dr Andrew Balfour and belonged to one of Scotland's foremost scientific families. The Balfour home (at St John's Hill near Salisbury Crags) had been constructed in 1770 for the unmarried geologist James Hutton (1726–1797) who was Magdalene Balfour's great-uncle. Crichton-Browne's father, the asylum reformer William A. F. Browne (1805–1885), was a prominent phrenologist and his younger brother, John Hutton Balfour-Browne KC (1845–1921), wrote a classic account of the legal relations of insanity. Crichton-Browne spent much of his childhood at The Crichton Royal in Dumfries where his father was the medical superintendent from 1838 to 1857. William A. F. Browne was a pioneering Victorian psychiatrist and an exponent of moral treatment with an interest in the psychological lives of his patients as illustrated by their group activities, dreams and art-works. W. A. F. Browne also hoarded a huge collection of patient art and this interest found a parallel in Crichton-Browne's later asylum photography. In his childhood, Crichton-Browne lost an older brother, William (aged 11) in 1846. He went to school at Dumfries Academy and then, in line with his mother's episcopalian outlook, to Glenalmond College. 
Shortly before his death, Crichton-Browne wrote a valuable account of his Dumfries childhood, including the visit of the American asylum reformer Dorothea Lynde Dix. Crichton-Browne studied Medicine at Edinburgh University, qualifying as an MD in 1862 with a thesis on hallucinations. Among his teachers was his father's friend Thomas Laycock (1812–1876) whose magnum opus Mind and Brain is an extended speculative essay on neurology and psychological life. Crichton-Browne also drew on the writings of the physicians Sir Andrew Halliday and Sir Henry Holland. Like his father, Crichton-Browne was elected one of the undergraduate Presidents of the Royal Medical Society and, in this capacity, he argued for the place of psychology in the medical curriculum. In 1863, he visited a number of asylums in Paris (including the Salpêtrière), and after working as assistant physician at asylums in Exeter (with John Charles Bucknill), Warwick and Derby, and a brief period on Tyneside, Crichton-Browne was appointed Physician-Superintendent of the West Riding Pauper Lunatic Asylum at Wakefield in 1866. This was also the year in which his father served as the President of the Medico-Psychological Association (now the Royal College of Psychiatrists). Ferrier, Darwin and the West Riding Asylum Reports 1866–1875 Ferrier's neurology: Crichton-Browne spent almost ten years at the West Riding Asylum. He believed that the asylum should be an educational as well as a therapeutic institution and set about a major research programme, bringing biological insights to bear on the causes of insanity. He supervised hundreds of post-mortem examinations of the brain and took a special interest in the clinical features of neurosyphilis. In 1872, Crichton-Browne developed his father's phrenological theories by inviting the Scottish neurologist David Ferrier (1843–1928) to direct the asylum laboratories and to conduct studies on the cortical localization of cerebral functions. (In 1832–34, William A. F. Browne had published a paper in the Phrenological Journal on language centres in the brain and in his later writings there is a reiterated emphasis on the relationships of brain injury, psychosis and language). Ferrier's work at Wakefield transformed the asylum into a world centre for neuropsychiatry and he summarised his research in the neurological classic The Functions of the Brain (1876). Darwin's correspondence: At the instigation of Henry Maudsley (1835–1918), Crichton-Browne corresponded with Charles Darwin from May 1869 until December 1875. The bulk of the correspondence occurred during the preparation of Crichton-Browne's famous West Riding Lunatic Asylum Medical Reports and of Darwin's The Expression of the Emotions in Man and Animals. On 8 June 1869, Darwin sent Crichton-Browne his copy of Duchenne's Mechanism of Human Facial Expression, asking for his comments. Crichton-Browne seems to have mislaid the book for almost a year at the Wakefield asylum; but, on 6 June 1870, he returned it with considerable embarrassment, and enclosed the one photograph which Darwin used in his book (see below). Darwin explored a huge range of subjects with Crichton-Browne, including references to Maudsley's Body and Mind, the psychology of blushing, the bristling of hair, the functions of the platysma muscle (Darwin's "bête noire"), and the clinical phenomena of bereavement and grief. 
Darwin's mysterious symptoms which included vomiting, sweating, sighing, and weeping, particularly troublesome in the early months of 1872, seem to have improved around the time that he completed his work on the emotions. Interestingly, Crichton-Browne declined Henry Maudsley's invitation to review The Descent of Man for The Journal of Mental Science; and it is notable that Darwin did not make a contribution to Crichton-Browne's Asylum Reports, nor did he visit the Wakefield Asylum when invited by Crichton-Browne in 1873. Mental Science: Building on the early asylum photography of Hugh Welch Diamond (1809 -1886) at Brookwood Hospital, Crichton-Browne sent about forty photographs of patients to Charles Darwin during the composition of his The Expression of the Emotions; however, Darwin used only one of these in the book (Figure 19) and this (Darwin Correspondence Project Letter 7220) was of a patient under the care of Dr James Gilchrist in the public wing of Crichton Royal at Dumfries. The complete correspondence between Crichton-Browne and Charles Darwin forms a remarkable contribution to the beginnings of behavioural science. Nevertheless, Crichton-Browne attached greater importance to his six volumes of West Riding Lunatic Asylum Medical Reports (1871–1876) – sending Darwin a copy of Volume One on 18 August 1871 – and to the neurological journal Brain which developed from them, in which he was assisted by John Hughlings Jackson (1835–1911). In 1875, Crichton-Browne ridiculed the classification of mental disorders advocated by the Edinburgh psychiatrist David Skae (1814–1873) which had been promoted by Skae's pupil Thomas Clouston (1840–1915); Skae sought to associate specific kinds of mental illness with variously disordered bodily organs. Crichton-Browne described it as: "philosophically unsound, scientifically inaccurate and practically useless". In 1879, Crichton-Browne published his own considerations of the neuropathology of insanity making some detailed predictions about the morbid anatomy of the brain in cases of severe psychiatric disorder. He proposed that, in the insane, the weight of the brain was reduced, that the lateral ventricles were enlarged, and that the burden of damage fell on the left cerebral hemisphere. These observations - made almost a century before the introduction of neuroleptics - involved an evolutionary view of cerebral localisation with an emphasis on the asymmetry of cerebral functions. He derived this from the clinical research of the French anatomist Paul Broca (1824–1880) on language centres in the brain – originally published in 1861 – and presented by Broca to the British Association for the Advancement of Science at its 1868 meeting in Norwich. The question of asymmetrical cerebral functions had been raised many years earlier by the Edinburgh phrenologist Hewett Cottrell Watson in the Phrenological Journal. Crichton-Browne's own views on psychosis and cerebral asymmetry have been appraised by Crow, 1995 and Compston, 2007. Lord Chancellor's Visitor in Lunacy 1875–1922 In 1875, Crichton-Browne was appointed as Lord Chancellor's Medical Visitor in Lunacy, a position which involved the regular examination of wealthy Chancery patients throughout England and Wales. He held this post until his retirement in 1922 and he combined it with the development of an extensive London consulting practice, becoming a familiar figure on the metropolitan medical scene. In 1878, he followed his father as President of the Medico-Psychological Association. 
In 1883, he was elected a Fellow of the Royal Society; and he served as Treasurer and Vice-President of the Royal Institution from 1889 until 1926. Crichton-Browne also made friendships in the literary world with the idiosyncratic historian Thomas Carlyle (1795–1881) whose marital reputation he defended against the allegations of James Anthony Froude; and, less controversially, with his exact contemporary, the novelist Thomas Hardy (1840–1928) who — concerned about his wife's health — consulted Crichton-Browne about the peculiarities of the female brain. Hardy presented Crichton-Browne with an inscribed copy of his Wessex Poems in 1898. Crichton-Browne was a notable stylist and orator and he often combined this with a kind of couthy vernacular evocative of his Dumfries childhood. He served as President of the Dumfriesshire and Galloway Natural History and Antiquarian Society from 1892 to 1896. In Dumfries, on 24 January 1895, he gave a remarkable and light-hearted Presidential lecture- On Emotional Expression – in which he discussed some reservations about Darwin's views, and touched on the role of the motor cortex in expression, on the relations of gender to expressive asymmetry, and on the relationship of language to the physical expression of the emotions. A few months later, on 30 June 1895 in London, Crichton-Browne delivered his famous Cavendish Lecture On Dreamy Mental States, in which he explored the relationship of trauma in the uniquely vulnerable temporal lobes to déjà vu, hallucinatory, and supernatural experiences; this caught the attention of William James (1842–1910), who referred – rather dismissively – to Crichton-Browne in his Gifford lectures on The Varieties of Religious Experience (delivered in Edinburgh in 1901–02): In the early years of the 20th century, Crichton-Browne delivered a number of lectures on the asymmetry of the human brain, publishing his conclusions in 1907. President of the Sanitary Inspectors' Association 1901–1921 Crichton-Browne was elected and re-elected President of the Sanitary Inspectors' Association on an unprecedented twenty occasions. Like his predecessors, Sir Edwin Chadwick, and Sir Benjamin Ward Richardson, he took a close interest in the affairs of the Association. He greatly assisted the Association's negotiations with the Local Government Board (predecessor of the Ministry of Health) in its attempts to secure the improved education and training of sanitary inspectors. These attempts faced opposition from some sectors of the medical profession which viewed the rise of the sanitary inspectors as a threat to Medical Officers of Health. He was regarded with much affection and respect by the sanitary inspectors and he was a frequently invited speaker at their conferences and dinners — although his speeches could be repetitive and lengthy. In 1914, on being re-elected for a further term as President, he responded: Elder statesman of mental science 1920–1938 In the early Summer of 1920, Crichton-Browne delivered the first Maudsley Lecture to the Royal Medico-Psychological Association, giving a generous tribute to Henry Maudsley whose enthusiasm and energy in the 1860s had been a source of inspiration to Crichton-Browne. Four years later, on 29 February 1924, Crichton-Browne gave the Ramsay Henderson Bequest Lecture in Edinburgh: The Story of the Brain. 
In this, he delivered a tribute to members of the Edinburgh Phrenological Society: to George Combe (1788–1858) author of The Constitution of Man (1828), to Andrew Combe (1797–1847) author of Observations on Mental Derangement (1831), and to Robert Chambers (1802–1871) who had sought to combine phrenology with evolutionary Lamarckism in his Vestiges of the Natural History of Creation – written in St Andrews as Chambers recuperated from depression, and published in 1844. Chambers simply inverted Hutton's aphorism "no vestige of a beginning". However, Crichton-Browne did not mention that his Henderson lecture was delivered a century (almost to the day) after his father had joined the Edinburgh Phrenological Society. With increasing age and the death of his first wife (Emily Halliday; following her death in 1903, Crichton-Browne married Audrey Emily Bulwer in 1912), and with the loss of two grandsons in the first world war, Crichton-Browne's rhetoric took on a more strident tone and his engagement with eugenics tarnished his reputation in the last two decades of his life. Death He died, suffering from heart failure, in Dumfries on 31 January 1938. He was predeceased by his son Colonel Harold Crichton-Browne (1866–1937). Positions held Elected a Fellow of the Royal Society of Edinburgh (1870) Elected a Fellow of the Royal Society (1883) Knighted by Queen Victoria (1886) President of the Medico-Psychological Association President of the Neurological Society President of the Medical Society of London President of the National Health Society Treasurer and Vice-President of the Royal Institution President of the Eugenics Education Society Legacy Medical Psychology: Crichton-Browne often described himself as a medical psychologist but in spite of the pervasive influence of his West Riding Lunatic Asylum Medical Reports, he remains a rather shadowy figure in the history of British neuroscience. However, his unusual longevity, taken together with his father's distinguished psychiatric career, brought the world of the Edinburgh phrenologists into contact with developing neuroscience in the course of the 20th century; and Crichton-Browne's considerations of the cerebral basis of psychotic disorder were well ahead of their time. His collaboration with David Ferrier on cerebral localisation, and the development of the journal Brain, give him a central role in early British neurology; and his protracted correspondence with Charles Darwin - over a period of several years - highlights the mutual engagement of psychiatry and evolutionary theory in the later nineteenth century. In 2015, UNESCO listed Crichton-Browne's clinical papers and photographs as items of international cultural importance. Social Policy: Very early in his career, Crichton-Browne stressed the importance of psychiatric disorders in childhood and, much later, he was to emphasise the distinction between organic and functional illness in the elderly. He was considered an expert in many aspects of psychological medicine, public health and social reform. He supported a campaign for the open-air treatment of tuberculosis, housing and sanitary reform, and a practical approach to sexually transmitted diseases. He condemned the corporal punishment of children. He stressed the importance of the asymmetric lateralization of brain function in the development of language, and deplored the fads relating to ambidexterity advocated by (among others) Robert Baden-Powell. 
He was critical of public education systems for their repetitive and fact-bound character, warning of mental exhaustion ("overpressure") in otherwise happy and healthy children. He was openly – even offensively – sceptical concerning the claims of psychic investigators (including Frederic William Henry Myers) and spiritualists, (see The Times articles of 1897/1899 concerning the Ballechin House controversy), and of dietary faddists and vegetarians. He argued that the therapeutic benefits of Freudian psychotherapy had been assessed with insufficient rigour. He advocated the fluoridation of human dietary intake in 1892, and recommended prenatal fluoride. He worried about the consequences of mass transportation by motor vehicles. Retirement: In his later years, Crichton-Browne enjoyed lengthy interludes at the Dumfries home ("Crindau", on the River Nith) which he had inherited from his father. Here, he worked on a number of projects, including a notable study of Robert Burns' medical problems, and seven volumes of memoirs, drawing on his personal commonplace notebooks, and ranging widely over medical, psychological, biographical and Scottish themes. These notebooks provide a unique psychiatric commentary on later Victorian culture and society. Crichton-Browne was twice married and, like his mother, cherished a lifelong affection for the traditions of the Anglican liturgy; he was a loyal member of the congregation at the Church of St John the Evangelist, Dumfries. Through his granddaughter Sybil Cookson, he became friendly with the painter Gluck (1895–1978) who created an arresting portrait of Sir James in 1928, now in the National Portrait Gallery. Also in the National Portrait Gallery is a 1917 photographic portrait by Walter Stoneman. Another portrait by Sir Oswald Birley, painted in 1934, is in the Crichton Royal Collection in Dumfries. Crichton-Browne was elected a Fellow of The Royal Society in 1883 with posthumous support from Charles Darwin, and he was knighted in 1886. At his death on 31 January 1938 at the age of 97, Crichton-Browne – like Robert Burns, Thomas Carlyle and James Clerk Maxwell – was acclaimed as one of the greatest sons of South-West Scotland; as one of the last men in Britain to sport Dundreary whiskers – and as one of the last Victorians. See also Crichton-Browne sign Council housing Gustav Fritsch Eduard Hitzig Teleology Eugenics References External links American Journal of Public Health Sir James Crichton-Browne: Victorian Psychiatrist and Public Health Reformer (biography) 1840 births 1938 deaths 19th-century Scottish medical doctors 20th-century Scottish medical doctors 19th-century Scottish photographers 20th-century Scottish memoirists Alumni of the University of Edinburgh British neurologists Scottish psychiatrists Scottish eugenicists Charles Darwin Fellows of the Royal Society People educated at Dumfries Academy People educated at Glenalmond College Mental health activists Scottish medical writers Knights Bachelor
James Crichton-Browne
Biology
4,195
41,608,446
https://en.wikipedia.org/wiki/Chemical%20Society%20Located%20in%20Taipei
Chemical Society Located in Taipei (CSLT; ) is a Taiwanese scholarly organization dedicated to chemistry. The organization traces its roots to the establishment of the Chinese Chemical Society in Nanjing in 1932 and was reestablished in Taiwan in 1950. For political reasons, the organization's English name was changed to Chemical Society Located in Taipei, although it still retains the name "Chinese Chemical Society" () in Chinese. Publications CSLT and Wiley publish a monthly periodical, the Journal of the Chinese Chemical Society. See also Education in Taiwan Chinese Chemical Society (Beijing) Organic nomenclature in Chinese References 1932 establishments in China 1950 establishments in Taiwan Chemistry societies Organizations based in Taipei Science and technology in Taiwan Scientific organizations established in 1931
Chemical Society Located in Taipei
Chemistry
138
12,928,115
https://en.wikipedia.org/wiki/Hydroelasticity
In fluid dynamics and elasticity, hydroelasticity or flexible fluid-structure interaction (FSI), is a branch of science which is concerned with the motion of deformable bodies through liquids. The theory of hydroelasticity has been adapted from aeroelasticity, to describe the effect of structural response of the body on the fluid around it. Definition It is the analysis of the time-dependent interaction of hydrodynamic and elastic structural forces. Vibration of floating and submerged ocean structures/vessels encompasses this field of naval architecture. Importance Hydroelasticity is of concern in various areas of marine technology such as: High-speed craft. Ships with the phenomena springing and whipping affecting fatigue and extreme loading Large scale floating structures such as floating airports, floating bridges and buoyant tunnels. Marine Risers. Cable systems and umbilicals for remotely operated or tethered underwater vehicles. Seismic cable systems. Flexible containers for water transport, oil spill recovery and other purposes. Areas of research Analytical and numerical methods in FSI. Techniques for laboratory and in-service investigations. Stochastic methods. Hydroelasticity-based prediction of Wave Loads and Responses. Impact, sloshing and shock. Flow induced vibration (FIV). Tsunami and seaquake induced responses of large marine structures. Devices for energy extraction. Current research Analysis and design of marine structures or systems necessitates integration of hydrodynamics and structural mechanics; i.e. hydroelasticity plays the key role. There has been significant recent progress in research into the hydroelastic phenomena, and the topic of hydroelasticity is of considerable current interest. Institutes and laboratories Norwegian University of Science and Technology (NTNU), Trondheim, Norway University of Southampton, Southampton, UK. MARINTEK : Marine Technology Centre, Trondheim, Norway MARIN : Maritime Research Institute Netherlands. MIT University of Michigan. Indian Institute of Technology Kharagpur, India. Saint Petersburg State University, Russia. National Maritime Research Institute, Japan. Research Institute of Applied Mechanics, Kyushu University, Japan. Computational Fluid Dynamics Laboratory, National Taiwan University of Science and Technology, Taiwan. Lee Dynamics, Houston, Texas, USA Conferences HYDROELAS : International conference on Hydroelasticity in marine technology. FSI : International conference on fluid-structure interaction. OT : Offshore Technology Conference. ISOPE : International Society of Offshore and Polar Engineers conference. Journals Journal of Sound and Vibration. Journal of Ship Research. Applied Ocean research. Journal of Engineering Mechanics. IEEE Journal of Oceanic Engineering. Journal of Fluids and Structures References R.E.D.Bishop and W.G.Price, "Hydroelasticity of ships"; Cambridge University Press, 1979, . Fumiki Kitō, "Principles of hydro-elasticity", Tokyo : Memorial Committee for Retirement of Dr. F. Kito; Distributed by Yokendo Co., 1970, LCCN 79566961. Edited by S.K.Chakrabarti and C.A.Brebbia, "Fluid structure interaction", Southampton; Boston: WIT, c2001, . Edited by S.K.Chakrabarti and C.A.Brebbia, "Fluid structure interaction and moving boundary problems IV", Southampton : WIT, c2007, . Edited by Subrata K. Chakrabarti, "Handbook of offshore engineering", Amsterdam; London : Elsevier, 2005, . Subrata K. 
Chakrabarti, "Hydrodynamics of offshore structures", Southampton : Computational Mechanics; Berlin : Springer Verlag, c1987, . Subrata K. Chakrabarti, "Nonlinear methods in offshore engineering", Amsterdam; New York : Elsevier, 1990, . Edited by S.K. Chakrabarti, "Numerical models in fluid-structure interaction", Southampton, UK; Boston : WIT, c2005, . Subrata Kumar Chakrabarti, "Offshore structure modeling", Singapore; River Edge, N.J. : World Scientific, c1994, (OCoLC)ocm30491315. Subrata K. Chakrabarti, "The theory and practice of hydrodynamics and vibration", River Edge, N.J. : World Scientific, c2002, . D. Karmakar, J. Bhattacharjee and T. Sahoo, "Expansion formulae for wave structure interaction problems with applications in hydroelasticity ", Intl. J. Engng. Science, 2007: 45(10), 807–828. Storhaug, Gaute, "Experimental investigation of wave induced vibrations and their effect on the fatigue loading of ships", PhD dissertation, NTNU, 2007:133, . Storhaug, Gaute et al. "Measurements of wave induced hull girder vibrations of an ore carrier in different trades", Journal of Offshore Mechanics and Arctic Engineering, Nov. 2007. Ottó Haszpra, "Modelling hydroelastic vibrations", London; San Francisco : Pitman, 1979, . Hirdaris, S.E., Price, W.G and Temarel, P. (2003). Two- and three-dimensional hydroelastic modelling of a bulker in regular waves. Marine Structures 16(8):627-658, doi:10.1016/j.marstruc.2004.01.005 Hirdaris, S.E. and Temarel, P. (2009). Hydroelasticity of Ships - recent advances and future trends. Proceedings (Part M) of the Institution of Mechanical Engineers : Journal of Engineering for the Maritime Environment, 223(3):305-330, doi:10.1243/14750902JEME160 Temarel, P. and Hirdaris, S.E. Eds.(2009). Hydroelasticity in Marine Technology - Proceedings of the 5th International Conference HYELAS'09, Published by the University of Southampton - UK, Fluid dynamics
Hydroelasticity
Chemistry,Engineering
1,249
25,399,724
https://en.wikipedia.org/wiki/Cetyl%20palmitate
Hexadecyl hexadecanoate, also known as cetyl palmitate, is the ester derived from hexadecanoic acid and 1-hexadecanol. This white waxy solid is the primary constituent of spermaceti, the once highly prized wax found in the skull of sperm whales. Cetyl palmitate is a component of some solid lipid nanoparticles. Stony corals, which build the coral reefs, contain large amounts of cetyl palmitate wax in their tissues, which may function in part as an antifeedant. Applications Cetyl palmitate is used in cosmetics as a thickener and emulsifier. References Fatty acid esters Palmitate esters Waxes
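As a worked illustration of the ester relationship described above (standard esterification chemistry, not a preparation route claimed by the article), cetyl palmitate corresponds to the condensation of palmitic acid (hexadecanoic acid) with cetyl alcohol (1-hexadecanol), with loss of water:
\[
\mathrm{CH_3(CH_2)_{14}COOH} + \mathrm{CH_3(CH_2)_{15}OH}
\;\longrightarrow\;
\mathrm{CH_3(CH_2)_{14}COO(CH_2)_{15}CH_3} + \mathrm{H_2O},
\]
giving the wax ester with molecular formula \(\mathrm{C_{32}H_{64}O_{2}}\).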
Cetyl palmitate
Physics
154
62,867,101
https://en.wikipedia.org/wiki/Apilan%20and%20kota%20mara
Apilan and kota mara are two Malay nautical terms that refer to the structure on a vessel where the cannon is installed. The terms are used especially of Malay ships and boats. Apilan Apilan (or ampilan) is the wooden gunshield found in Malay prahus where cannons are placed. It has a hole for mounting a long gun, and sometimes a swivel gun can be placed over the top of the apilan. The apilan is not permanent; it can be assembled, disassembled, and moved. The crew of a Malay prahu operated the long gun behind an apilan. The apilan is usually situated at the bow of a prahu. This gun-shield was only put on when the ship went into action. Sunting apilan is the name given to two lelas or light guns standing on the gun-shield of a heavy gun. Etymology Apilan is a true Malay word; it is not derived from another word. It is also a standalone word, since it syllabifies as api-lan rather than apil-an. Kota mara Kota mara is the breastwork or casement of Malay prahus. Its function is to protect the gunner. Unlike the apilan, the kota mara cannot be moved. It is the permanent bulwark of the battery in a Malay piratical ship. The term saga kota mara refers to the peculiar props keeping the gun shield (apilan) in position. The word benteng is also used for this permanent breastwork. Ambong-ambong are blocks of wood forming part of the framework of the battery in a Malay piratical perahu. These blocks support the base of the benteng. The kota mara has existed since at least the 8th century A.D., as shown in the Borobudur ship bas-reliefs. Etymology The term comes from the Malay word kota, which in turn comes from the Sanskrit word कोट्ट (kota) meaning fort, fortress, castle, fortified house, fortification, works, city, town, or place encircled by walls. The word mara may come from the Malay word meaning "appear before", "forward", "come", "moved to the front", and "advanced". Thus kota mara can be interpreted as "breastwork before a cannon" or "breastwork at the front". According to the Great Indonesian Dictionary (KBBI), kota mara means (1) a wall on a ship to protect men mounting the cannon, or (2) a terrace or wall over a castle on which a cannon is mounted. According to H. Warington Smyth, kota mara means a transverse deck bulkhead at stem and stern (of a ship). Benteng itself means fort, battery, or redoubt. Examples in records Singapore resident John Crawfurd recorded Malay piracy near Singapore waters. The Malay pirate ships of the time were long with beam. The decks were made of split nibong wood. Smaller pirate craft put up thick plank bulwarks [apilan] when fighting, while larger ones like those of the Lanun people had bamboo ledges hanging over their gunwales, with a protecting breastwork [kota mara] of plaited rattan about high. A crew might consist of 20–30 men, augmented by oarsmen who were captured slaves. Small craft would have nine oars per side; larger ones would be double-banked, with an upper tier of oarsmen seated on the bulwark projection hidden behind rattan breastwork. Pirate armament included a stockade near the bow, with iron or brass 4-pounders, and another stockade aft, generally with two swivel guns. They also might have four or five brass swivels, or rantaka, on each side. They had bamboo shields, and were armed with spears, keris, muskets and other firearms they could get. H. H. Frese's description of the personal ship of the Sultan of Riau from 1883: Lieutenant T. J. Newbold's record of the Malay pirate prahu: See also References Watercraft components Shipbuilding Naval warfare Naval artillery
Apilan and kota mara
Engineering
842
239,038
https://en.wikipedia.org/wiki/Construction
Construction is a general term meaning the art and science of forming objects, systems, or organizations. It comes from the Latin word constructio (from com- "together" and struere "to pile up") and Old French construction. To 'construct' is a verb: the act of building, and the noun is construction: how something is built or the nature of its structure. In its most widely used context, construction covers the processes involved in delivering buildings, infrastructure, industrial facilities, and associated activities through to the end of their life. It typically starts with planning, financing, and design that continues until the asset is built and ready for use. Construction also covers repairs and maintenance work, any works to expand, extend and improve the asset, and its eventual demolition, dismantling or decommissioning. The construction industry contributes significantly to many countries' gross domestic products (GDP). Global expenditure on construction activities was about $4 trillion in 2012. In 2022, expenditure on the construction industry exceeded $11 trillion a year, equivalent to about 13 percent of global GDP. This spending was forecasted to rise to around $14.8 trillion in 2030. The construction industry promotes economic development and brings many non-monetary benefits to many countries, but it is one of the most hazardous industries. For example, about 20% (1,061) of US industry fatalities in 2019 happened in construction. History The first huts and shelters were constructed by hand or with simple tools. As cities grew during the Bronze Age, a class of professional craftsmen, like bricklayers and carpenters, appeared. Occasionally, slaves were used for construction work. In the Middle Ages, the artisan craftsmen were organized into guilds. In the 19th century, steam-powered machinery appeared, and later, diesel- and electric-powered vehicles such as cranes, excavators and bulldozers. Fast-track construction has been increasingly popular in the 21st century. Some estimates suggest that 40% of construction projects are now fast-track construction. Construction industry sectors Broadly, there are three sectors of construction: buildings, infrastructure and industrial: Building construction is usually further divided into residential and non-residential. Infrastructure, also called 'heavy civil' or 'heavy engineering', includes large public works, dams, bridges, highways, railways, water or wastewater and utility distribution. Industrial construction includes offshore construction (mainly of energy installations), mining and quarrying, refineries, chemical processing, mills and manufacturing plants. The industry can also be classified into sectors or markets. For example, Engineering News-Record (ENR), a US-based construction trade magazine, has compiled and reported data about the size of design and construction contractors. In 2014, it split the data into nine market segments: transportation, petroleum, buildings, power, industrial, water, manufacturing, sewage/waste, telecom, hazardous waste, and a tenth category for other projects. ENR used data on transportation, sewage, hazardous waste and water to rank firms as heavy contractors. The Standard Industrial Classification and the newer North American Industry Classification System classify companies that perform or engage in construction into three subsectors: building construction, heavy and civil engineering construction, and specialty trade contractors. 
There are also categories for professional services firms (e.g., engineering, architecture, surveying, project management). Building construction Building construction is the process of adding structures to areas of land, also known as real property sites. Typically, a project is instigated by or with the owner of the property (who may be an individual or an organisation); occasionally, land may be compulsorily purchased from the owner for public use. Residential construction Residential construction may be undertaken by individual land-owners (self-built), by specialist housebuilders, by property developers, by general contractors, or by providers of public or social housing (e.g.: local authorities, housing associations). Where local zoning or planning policies allow, mixed-use developments may comprise both residential and non-residential construction (e.g.: retail, leisure, offices, public buildings, etc.). Residential construction practices, technologies, and resources must conform to local building authority's regulations and codes of practice. Materials readily available in the area generally dictate the construction materials used (e.g.: brick versus stone versus timber). Costs of construction on a per square meter (or per square foot) basis for houses can vary dramatically based on site conditions, access routes, local regulations, economies of scale (custom-designed homes are often more expensive to build) and the availability of skilled tradespeople. Non-residential construction Depending upon the type of building, non-residential building construction can be procured by a wide range of private and public organisations, including local authorities, educational and religious bodies, transport undertakings, retailers, hoteliers, property developers, financial institutions and other private companies. Most construction in these sectors is undertaken by general contractors. Infrastructure construction Civil engineering covers the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, tunnels, airports, water and sewerage systems, pipelines, and railways. Some general contractors have expertise in civil engineering; civil engineering contractors are firms dedicated to work in this sector, and may specialise in particular types of infrastructure. Industrial construction Industrial construction includes offshore construction (mainly of energy installations: oil and gas platforms, wind power), mining and quarrying, refineries, breweries, distilleries and other processing plants, power stations, steel mills, warehouses and factories. Construction processes Some construction projects are small renovations or repair jobs, like repainting or fixing leaks, where the owner may act as designer, paymaster and laborer for the entire project. However, more complex or ambitious projects usually require additional multi-disciplinary expertise and manpower, so the owner may commission one or more specialist businesses to undertake detailed planning, design, construction and handover of the work. 
Often the owner will appoint one business to oversee the project (this may be a designer, a contractor, a construction manager, or other advisors); such specialists are normally appointed for their expertise in project delivery and construction management and will help the owner define the project brief, agree on a budget and schedule, liaise with relevant public authorities, and procure materials and the services of other specialists (the supply chain, comprising subcontractors and materials suppliers). Contracts are agreed for the delivery of services by all businesses, alongside other detailed plans aimed at ensuring legal, timely, on-budget and safe delivery of the specified works. Design, finance, and legal aspects overlap and interrelate. The design must be not only structurally sound and appropriate for the use and location, but must also be financially possible to build, and legal to use. The financial structure must be adequate to build the design provided and must pay amounts that are legally owed. Legal structures integrate design with other activities and enforce financial and other construction processes. These processes also affect procurement strategies. Clients may, for example, appoint a business to design the project, after which a competitive process is undertaken to appoint a lead contractor to construct the asset (design–bid–build); they may appoint a business to lead both design and construction (design-build); or they may directly appoint a designer, contractor and specialist subcontractors (construction management). Some forms of procurement emphasize collaborative relationships (partnering, alliancing) between the client, the contractor, and other stakeholders within a construction project, seeking to ameliorate often highly competitive and adversarial industry practices. DfMA (design for manufacture and assembly) approaches also emphasize early collaboration with manufacturers and suppliers regarding products and components. Construction or refurbishment work in a "live" environment (where residents or businesses remain living in or operating on the site) requires particular care, planning and communication. Planning When applicable, a proposed construction project must comply with local land-use planning policies including zoning and building code requirements. A project will normally be assessed (by the 'authority having jurisdiction', AHJ, typically the municipality where the project will be located) for its potential impacts on neighbouring properties, and upon existing infrastructure (transportation, social infrastructure, and utilities including water supply, sewerage, electricity, telecommunications, etc.). Data may be gathered through site analysis, site surveys and geotechnical investigations. Construction normally cannot start until planning permission has been granted, and may require preparatory work to ensure relevant infrastructure has been upgraded before building work can commence. Preparatory works will also include surveys of existing utility lines to avoid damage-causing outages and other hazardous situations. Some legal requirements come from malum in se considerations, or the desire to prevent indisputably bad phenomena, e.g. explosions or bridge collapses. Other legal requirements come from malum prohibitum considerations, or factors that are a matter of custom or expectation, such as isolating businesses from a business district or residences from a residential district. 
An attorney may seek changes or exemptions in the law that governs the land where the building will be built, either by arguing that a rule is inapplicable (the bridge design will not cause a collapse), or that the custom is no longer needed (acceptance of live-work spaces has grown in the community). During the construction of a building, a municipal building inspector usually inspects the ongoing work periodically to ensure that construction adheres to the approved plans and the local building code. Once construction is complete, any later changes made to a building or other asset that affect safety, including its use, expansion, structural integrity, and fire protection, usually require municipality approval. Finance Depending on the type of project, mortgage bankers, accountants, and cost engineers may participate in creating an overall plan for the financial management of a construction project. The presence of the mortgage banker is highly likely, even in relatively small projects since the owner's equity in the property is the most obvious source of funding for a building project. Accountants act to study the expected monetary flow over the life of the project and to monitor the payouts throughout the process. Professionals including cost engineers, estimators and quantity surveyors apply expertise to relate the work and materials involved to a proper valuation. Financial planning ensures adequate safeguards and contingency plans are in place before the project is started, and ensures that the plan is properly executed over the life of the project. Construction projects can suffer from preventable financial problems. Underbids happen when builders ask for too little money to complete the project. Cash flow problems exist when the present amount of funding cannot cover the current costs for labour and materials; such problems may arise even when the overall budget is adequate, presenting a temporary issue. Cost overruns with government projects have occurred when the contractor identified change orders or project changes that increased costs, which are not subject to competition from other firms as they have already been eliminated from consideration after the initial bid. Fraud is also an issue of growing significance within construction. Large projects can involve highly complex financial plans and often start with a conceptual cost estimate performed by a building estimator. As portions of a project are completed, they may be sold, supplanting one lender or owner for another, while the logistical requirements of having the right trades and materials available for each stage of the building construction project carry forward. Public–private partnerships (PPPs) or private finance initiatives (PFIs) may also be used to help deliver major projects. According to McKinsey in 2019, the "vast majority of large construction projects go over budget and take 20% longer than expected". Legal A construction project is a complex net of construction contracts and other legal obligations, each of which all parties must carefully consider. A contract is the exchange of a set of obligations between two or more parties, and provides structures to manage issues. For example, construction delays can be costly, so construction contracts set out clear expectations and clear paths to manage delays. Poorly drafted contracts can lead to confusion and costly disputes. 
At the start of a project, legal advisors seek to identify ambiguities and other potential sources of trouble in the contract structures, and to present options for preventing problems. During projects, they work to avoid and resolve conflicts that arise. In each case, the lawyer facilitates an exchange of obligations that matches the reality of the project. Procurement Traditional or Design-bid-build Design-bid-build is the most common and well-established method of construction procurement. In this arrangement, the architect, engineer or builder acts for the client as the project coordinator. They design the works, prepare specifications and design deliverables (models, drawings, etc.), administer the contract, tender the works, and manage the works from inception to completion. In parallel, there are direct contractual links between the client and the main contractor, who, in turn, has direct contractual relationships with subcontractors. The arrangement continues until the project is ready for handover. Design-build Design-build became more common from the late 20th century, and involves the client contracting a single entity to provide design and construction. In some cases, the design-build package can also include finding the site, arranging funding and applying for all necessary statutory consents. Typically, the client invites several Design & Build (D&B) contractors to submit proposals to meet the project brief and then selects a preferred supplier. Often this will be a consortium involving a design firm and a contractor (sometimes more than one of each). In the United States, departments of transportation usually use design-build contracts as a way of progressing projects where states lack the skills or resources, particularly for very large projects. Construction management In a construction management arrangement, the client enters into separate contracts with the designer (architect or engineer), a construction manager, and individual trade contractors. The client takes on the contractual role, while the construction or project manager provides the active role of managing the separate trade contracts, and ensuring that they complete all work smoothly and effectively together. This approach is often used to speed up procurement processes, to allow the client greater flexibility in design variation throughout the contract, to enable the appointment of individual work contractors, to separate contractual responsibility on each individual throughout the contract, and to provide greater client control. Design In the industrialized world, construction usually involves the translation of designs into reality. Most commonly (i.e.: in a design-bid-build project), the design team is employed by (i.e. in contract with) the property owner. Depending upon the type of project, a design team may include architects, civil engineers, mechanical engineers, electrical engineers, structural engineers, fire protection engineers, planning consultants, architectural consultants, and archaeological consultants. A 'lead designer' will normally be identified to help coordinate different disciplinary inputs to the overall design. This may be aided by integration of previously separate disciplines (often undertaken by separate firms) into multi-disciplinary firms with experts from all related fields, or by firms establishing relationships to support design-build processes. 
The increasing complexity of construction projects creates the need for design professionals trained in all phases of a project's life-cycle and develop an appreciation of the asset as an advanced technological system requiring close integration of many sub-systems and their individual components, including sustainability. For buildings, building engineering is an emerging discipline that attempts to meet this new challenge. Traditionally, design has involved the production of sketches, architectural and engineering drawings, and specifications. Until the late 20th century, drawings were largely hand-drafted; adoption of computer-aided design (CAD) technologies then improved design productivity, while the 21st-century introduction of building information modeling (BIM) processes has involved the use of computer-generated models that can be used in their own right or to generate drawings and other visualisations as well as capturing non-geometric data about building components and systems. On some projects, work on-site will not start until design work is largely complete; on others, some design work may be undertaken concurrently with the early stages of on-site activity (for example, work on a building's foundations may commence while designers are still working on the detailed designs of the building's internal spaces). Some projects may include elements that are designed for off-site construction (see also prefabrication and modular building) and are then delivered to the site ready for erection, installation or assembly. On-site construction Once contractors and other relevant professionals have been appointed and designs are sufficiently advanced, work may commence on the project site. Typically, a construction site will include a secure perimeter to restrict unauthorised access, site access control points, office and welfare accommodation for personnel from the main contractor and other firms involved in the project team, and storage areas for materials, machinery and equipment. According to the McGraw-Hill Dictionary of Architecture and Construction's definition, construction may be said to have started when the first feature of the permanent structure has been put in place, such as pile driving, or the pouring of slabs or footings. Commissioning and handover Commissioning is the process of verifying that all subsystems of a new building (or other assets) work as intended to achieve the owner's project requirements and as designed by the project's architects and engineers. Defects liability period A period after handover (or practical completion) during which the owner may identify any shortcomings in relation to the building specification ('defects'), with a view to the contractor correcting the defect. Maintenance, repair and improvement Maintenance involves functional checks, servicing, repairing or replacing of necessary devices, equipment, machinery, building infrastructure, and supporting utilities in industrial, business, governmental, and residential installations. Demolition Demolition is the discipline of safely and efficiently tearing down buildings and other artificial structures. Demolition contrasts with deconstruction, which involves taking a building apart while carefully preserving valuable elements for reuse purposes (recycling – see also circular economy). 
Industry scale and characteristics Economic activity The output of the global construction industry was worth an estimated $10.8 trillion in 2017, and in 2018 was forecast to rise to $12.9 trillion by 2022, and to around $14.8 trillion in 2030. As a sector, construction accounts for more than 10% of global GDP (in developed countries, construction comprises 6–9% of GDP), and employs around 7% of the total employed workforce around the globe (accounting for over 273 million full- and part-time jobs in 2014). Since 2010, China has been the world's largest single construction market. The United States is the second largest construction market with a 2018 output of $1.581 trillion. In the United States in February 2020, around $1.4 trillion worth of construction work was in progress, according to the Census Bureau, of which just over $1.0 trillion was for the private sector (split roughly 55:45% between residential and nonresidential); the remainder was public sector, predominantly for state and local government. In Armenia, the construction sector experienced growth during the latter part of the 2000s. According to the National Statistical Service, Armenia's construction sector generated approximately 20% of Armenia's GDP during the first and second quarters of 2007. In 2009, according to the World Bank, 30% of Armenia's economy came from the construction sector. In Vietnam, the construction industry plays an important role in the national economy. The Vietnamese construction industry has been one of the fastest growing in the Asia-Pacific region in recent years. The market was valued at nearly $60 billion in 2021. In the first half of 2022, Vietnam's construction industry growth rate reached 5.59%. In 2022, Vietnam's construction industry accounted for more than 6% of the country's GDP, equivalent to over 589.7 billion Vietnamese dong. The combined industry and construction sector accounts for 38.26% of Vietnam's GDP. At the same time, construction has been one of the most attractive industries for foreign direct investment (FDI) in recent years. Construction is a major source of employment in most countries; high reliance on small businesses and under-representation of women are common traits. For example: In the US, construction employed around 11.4m people in 2020, with a further 1.8m employed in architectural, engineering, and related professional services – equivalent to just over 8% of the total US workforce. These construction workers were employed by over 843,000 organisations, of which 838,000 were privately held businesses. In March 2016, 60.4% of construction workers were employed by businesses with fewer than 50 staff. Women are substantially underrepresented (relative to their share of total employment), comprising 10.3% of the US construction workforce, and 25.9% of professional services workers, in 2019. The United Kingdom construction sector contributed £117 billion (6%) to UK GDP in 2018, and in 2019 employed 2.4m workers (6.6% of all jobs). These worked either for 343,000 'registered' construction businesses, or for 'unregistered' businesses, typically self-employed contractors; just over one million small/medium-sized businesses, mainly self-employed individuals, worked in the sector in 2019, comprising about 18% of all UK businesses. Women comprised 12.5% of the UK construction workforce. According to McKinsey research, productivity growth per worker in construction has lagged behind many other industries across different countries, including the United States and European countries.
In the United States, construction productivity per worker has declined by half since the 1960s. Construction GVA by country Employment Some workers may be engaged in manual labour as unskilled or semi-skilled workers; they may be skilled tradespeople; or they may be supervisory or managerial personnel. Under safety legislation in the United Kingdom, for example, construction workers are defined as people "who work for or under the control of a contractor on a construction site"; in Canada, this can include people whose work includes ensuring conformance with building codes and regulations, and those who supervise other workers. Laborers comprise a large grouping in most national construction industries. In the United States, for example, in May 2023, the construction sector employed just over 7.9 million people, of whom 859,000 were laborers, while 3.7 million were construction trades workers (including 603,000 carpenters, 559,000 electricians, 385,000 plumbers, and 321,000 equipment operators). Like most business sectors, there is also substantial white-collar employment in construction – out of 7.9 million US construction sector workers, 681,000 were recorded by the United States Department of Labor in May 2023 as in 'office and administrative support occupations', 620,000 in 'management occupations' and 480,000 in 'business and financial operations occupations'. Large-scale construction requires collaboration across multiple disciplines. A project manager normally manages the budget on the job, and a construction manager, design engineer, construction engineer or architect supervises it. Those involved with the design and execution must consider zoning requirements and legal issues, environmental impact of the project, scheduling, budgeting and bidding, construction site safety, availability and transportation of building materials, logistics, and inconvenience to the public, including those caused by construction delays. Some models and policy-making organisations promote the engagement of local labour in construction projects as a means of tackling social exclusion and addressing skill shortages. In the UK, the Joseph Rowntree Foundation reported in 2000 on 25 projects which had aimed to offer training and employment opportunities for locally based school leavers and unemployed people. The Foundation published "a good practice resource book" in this regard at the same time (Macfarlane, R., Using local labour in construction: A good practice resource book, The Policy Press/Joseph Rowntree Foundation, 2000). Use of local labour and local materials was specified for the construction of the Danish Storebaelt bridge, but there were legal issues which were challenged in court and addressed by the European Court of Justice in 1993. The court held that a contract condition requiring use of local labour and local materials was incompatible with EU treaty principles. Later UK guidance noted that social and employment clauses, where used, must be compatible with relevant EU regulation. Employment of local labour was identified as one of several social issues which could potentially be incorporated in a sustainable procurement approach, although the interdepartmental Sustainable Procurement Group recognised that "there is far less scope to incorporate [such] social issues in public procurement than is the case with environmental issues". There are many routes to the different careers within the construction industry.
There are three main tiers of construction workers based on educational background and training, which vary by country: Unskilled and semi-skilled workers Unskilled and semi-skilled workers provide general site labor, often have few or no construction qualifications, and may receive basic site training. Skilled tradespeople Skilled tradespeople have typically served apprenticeships (sometimes in labor unions) or received technical training; this group also includes on-site managers who possess extensive knowledge and experience in their craft or profession. Skilled manual occupations include carpenters, electricians, plumbers, ironworkers, heavy equipment operators and masons, as well as those involved in project management. In the UK these require further education qualifications, often in vocational subject areas, undertaken either directly after completing compulsory education or through "on the job" apprenticeships. Professional, technical or managerial personnel Professional, technical and managerial personnel often have higher education qualifications, usually graduate degrees, and are trained to design and manage construction processes. These roles require more training as they demand greater technical knowledge, and involve more legal responsibility. Example roles (and qualification routes) include: Architect – Will usually have studied architecture to degree level, and then undertaken further study and gained professional experience. In many countries, the title of "architect" is protected by law, strictly limiting its use to qualified people. Civil engineer – Typically holds a degree in a related subject and may only be eligible for membership of a professional institution (such as the UK's ICE) following completion of additional training and experience. In some jurisdictions, a new university graduate must hold a master's degree to become chartered, and persons with bachelor's degrees may become Incorporated Engineers. Building services engineer – May also be referred to as an "M&E" or "mechanical, electrical, and plumbing (MEP) engineer" and typically holds a degree in mechanical or electrical engineering. Project manager – Typically holds a 4-year or greater higher education qualification, but are often also qualified in another field such as architecture, civil engineering or quantity surveying. Structural engineer – Typically holds a bachelor's or master's degree in structural engineering. Quantity surveyor – Typically holds a bachelor's degree in quantity surveying. UK chartered status is gained from the Royal Institution of Chartered Surveyors. Safety Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and in the European Union. In the US in 2019, 1,061, or about 20%, of worker fatalities in private industry occurred in construction. In 2017, more than a third of US construction fatalities (366 out of 971 total fatalities) were the result of falls; in the UK, half of the average 36 fatalities per annum over a five-year period to 2021 were attributed to falls from height. Proper safety equipment such as harnesses, hard hats and guardrails and procedures such as securing ladders and inspecting scaffolding can curtail the risk of occupational injuries in the construction industry. Other major causes of fatalities in the construction industry include electrocution, transportation accidents, and trench cave-ins. 
Other safety risks for workers in construction include hearing loss due to high noise exposure, musculoskeletal injury, chemical exposure, and high levels of stress. In addition, the high turnover of workers in the construction industry makes it difficult to restructure work practices in individual workplaces or with individual workers. Construction has been identified by the National Institute for Occupational Safety and Health (NIOSH) as a priority industry sector in the National Occupational Research Agenda (NORA) to identify and provide intervention strategies regarding occupational health and safety issues. A study conducted in 2022 found a “significant effect of air pollution exposure on construction-related injuries and fatalities”, especially exposure to nitrogen dioxide. Sustainability Sustainability is an aspect of "green building", defined by the United States Environmental Protection Agency (EPA) as "the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building's life-cycle from siting to design, construction, operation, maintenance, renovation and deconstruction." Decarbonising construction The construction industry may require transformation at pace and at scale if it is to contribute successfully to achieving the target set out in the Paris Agreement of limiting global temperature rise to 1.5°C above pre-industrial levels. The World Green Building Council has stated that buildings and infrastructure around the world can achieve 40% less embodied carbon emissions, but that this can only be achieved through urgent transformation. Conclusions from industry leaders have suggested that the net zero transformation is likely to be challenging for the construction industry, but that it also presents an opportunity. Action is demanded from governments, standards bodies, the construction sector, and the engineering profession to meet the decarbonisation targets. In 2021, the National Engineering Policy Centre published its report Decarbonising Construction: Building a new net zero industry, which outlined key areas to decarbonise the construction sector and the wider built environment. This report set out around 20 recommendations to transform and decarbonise the construction sector, including recommendations for engineers, the construction industry and decision makers, and outlined six overarching ‘system levers’ where action taken now will result in rapid decarbonisation of the construction sector. These levers are: Setting and stipulating progressive targets for carbon reduction Embedding quantitative whole-life carbon assessment into public procurement Increasing design efficiency, materials reuse and retrofit of buildings Improving whole-life carbon performance Improving skills for net zero Adopting a joined-up, systems approach to decarbonisation across the construction sector and with other sectors Progress is being made internationally to decarbonise the sector, including improvements to sustainable procurement practice such as the CO2 performance ladder in the Netherlands and the Danish Partnership for Green Public Procurement. There are also demonstrations of circular economy principles in practice, such as Circl, ABN AMRO's sustainable pavilion, and the Brighton Waste House. See also Notes References
Construction
Engineering
6,276
20,598,932
https://en.wikipedia.org/wiki/Hilbert%20space
In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space. A Hilbert space is a special case of a Banach space. Hilbert spaces were studied beginning in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions. Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a linear subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in classical geometry. When this basis is countably infinite, it allows identifying the Hilbert space with the space of the infinite sequences that are square-summable. The latter space is often in the older literature referred to as the Hilbert space. Definition and illustration Motivating example: Euclidean vector space One of the most familiar examples of a Hilbert space is the Euclidean vector space consisting of three-dimensional vectors, denoted by , and equipped with the dot product. The dot product takes two vectors and , and produces a real number . If and are represented in Cartesian coordinates, then the dot product is defined by The dot product satisfies the properties It is symmetric in and : . It is linear in its first argument: for any scalars , , and vectors , , and . It is positive definite: for all vectors , , with equality if and only if . An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) inner product. A vector space equipped with such an inner product is known as a (real) inner product space. Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or norm) of a vector, denoted , and to the angle between two vectors and by means of the formula Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist. 
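In standard notation (a reconstruction of the display formulas referred to above, which are not shown in this text), the dot product of \(\mathbf{x} = (x_1, x_2, x_3)\) and \(\mathbf{y} = (y_1, y_2, y_3)\) and its relation to length and angle read:
\[
\mathbf{x}\cdot\mathbf{y} = x_1 y_1 + x_2 y_2 + x_3 y_3, \qquad
\|\mathbf{x}\| = \sqrt{\mathbf{x}\cdot\mathbf{x}}, \qquad
\mathbf{x}\cdot\mathbf{y} = \|\mathbf{x}\|\,\|\mathbf{y}\|\cos\theta .
\]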
A mathematical series consisting of vectors in is absolutely convergent provided that the sum of the lengths converges as an ordinary series of real numbers: Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector in the Euclidean space, in the sense that This property expresses the completeness of Euclidean space: that a series that converges absolutely also converges in the ordinary sense. Hilbert spaces are often taken over the complex numbers. The complex plane denoted by is equipped with a notion of magnitude, the complex modulus , which is defined as the square root of the product of with its complex conjugate: If is a decomposition of into its real and imaginary parts, then the modulus is the usual Euclidean two-dimensional length: The inner product of a pair of complex numbers and is the product of with the complex conjugate of : This is complex-valued. The real part of gives the usual two-dimensional Euclidean dot product. A second example is the space whose elements are pairs of complex numbers . Then an inner product of with another such vector is given by The real part of is then the four-dimensional Euclidean dot product. This inner product is Hermitian symmetric, which means that the result of interchanging and is the complex conjugate: Definition A is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product. To say that a complex vector space is a means that there is an inner product associating a complex number to each pair of elements of that satisfies the following properties: The inner product is conjugate symmetric; that is, the inner product of a pair of elements is equal to the complex conjugate of the inner product of the swapped elements: Importantly, this implies that is a real number. The inner product is linear in its first argument. For all complex numbers and The inner product of an element with itself is positive definite: It follows from properties 1 and 2 that a complex inner product is , also called , in its second argument, meaning that A is defined in the same way, except that is a real vector space and the inner product takes real values. Such an inner product will be a bilinear map and will form a dual system. The norm is the real-valued function and the distance between two points in is defined in terms of the norm by That this function is a distance function means firstly that it is symmetric in and secondly that the distance between and itself is zero, and otherwise the distance between and must be positive, and lastly that the triangle inequality holds, meaning that the length of one leg of a triangle cannot exceed the sum of the lengths of the other two legs: This last property is ultimately a consequence of the more fundamental Cauchy–Schwarz inequality, which asserts with equality if and only if and are linearly dependent. With a distance function defined in this way, any inner product space is a metric space, and sometimes is known as a . Any pre-Hilbert space that is additionally also a complete space is a Hilbert space. The of is expressed using a form of the Cauchy criterion for sequences in : a pre-Hilbert space is complete if every Cauchy sequence converges with respect to this norm to an element in the space. 
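In the same standard notation (again a reconstruction of formulas that the text above refers to but does not display), the inner-product axioms, the induced norm and distance, and the Cauchy–Schwarz inequality for a complex inner product space are:
\[
\langle y, x\rangle = \overline{\langle x, y\rangle}, \qquad
\langle a x_1 + b x_2, y\rangle = a\,\langle x_1, y\rangle + b\,\langle x_2, y\rangle, \qquad
\langle x, x\rangle > 0 \ \text{ for } x \neq 0,
\]
\[
\|x\| = \sqrt{\langle x, x\rangle}, \qquad
d(x, y) = \|x - y\|, \qquad
|\langle x, y\rangle| \le \|x\|\,\|y\| \quad \text{(Cauchy–Schwarz)} .
\]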
Completeness can be characterized by the following equivalent condition: if a series of vectors converges absolutely in the sense that then the series converges in , in the sense that the partial sums converge to an element of . As a complete normed space, Hilbert spaces are by definition also Banach spaces. As such they are topological vector spaces, in which topological notions like the openness and closedness of subsets are well defined. Of special importance is the notion of a closed linear subspace of a Hilbert space that, with the inner product induced by restriction, is also complete (being a closed set in a complete metric space) and therefore a Hilbert space in its own right. Second example: sequence spaces The sequence space consists of all infinite sequences of complex numbers such that the following series converges: The inner product on is defined by: This second series converges as a consequence of the Cauchy–Schwarz inequality and the convergence of the previous series. Completeness of the space holds provided that whenever a series of elements from converges absolutely (in norm), then it converges to an element of . The proof is basic in mathematical analysis, and permits mathematical series of elements of the space to be manipulated with the same ease as series of complex numbers (or vectors in a finite-dimensional Euclidean space). History Prior to the development of Hilbert spaces, other generalizations of Euclidean spaces were known to mathematicians and physicists. In particular, the idea of an abstract linear space (vector space) had gained some traction towards the end of the 19th century: this is a space whose elements can be added together and multiplied by scalars (such as real or complex numbers) without necessarily identifying these elements with "geometric" vectors, such as position and momentum vectors in physical systems. Other objects studied by mathematicians at the turn of the 20th century, in particular spaces of sequences (including series) and spaces of functions, can naturally be thought of as linear spaces. Functions, for instance, can be added together or multiplied by constant scalars, and these operations obey the algebraic laws satisfied by addition and scalar multiplication of spatial vectors. In the first decade of the 20th century, parallel developments led to the introduction of Hilbert spaces. The first of these was the observation, which arose during David Hilbert and Erhard Schmidt's study of integral equations, that two square-integrable real-valued functions and on an interval have an inner product that has many of the familiar properties of the Euclidean dot product. In particular, the idea of an orthogonal family of functions has meaning. Schmidt exploited the similarity of this inner product with the usual dot product to prove an analog of the spectral decomposition for an operator of the form where is a continuous function symmetric in and . The resulting eigenfunction expansion expresses the function as a series of the form where the functions are orthogonal in the sense that for all . The individual terms in this series are sometimes referred to as elementary product solutions. However, there are eigenfunction expansions that fail to converge in a suitable sense to a square-integrable function: the missing ingredient, which ensures convergence, is completeness. The second development was the Lebesgue integral, an alternative to the Riemann integral introduced by Henri Lebesgue in 1904. 
The Lebesgue integral made it possible to integrate a much broader class of functions. In 1907, Frigyes Riesz and Ernst Sigismund Fischer independently proved that the space of square Lebesgue-integrable functions is a complete metric space. As a consequence of the interplay between geometry and completeness, the 19th century results of Joseph Fourier, Friedrich Bessel and Marc-Antoine Parseval on trigonometric series easily carried over to these more general spaces, resulting in a geometrical and analytical apparatus now usually known as the Riesz–Fischer theorem. Further basic results were proved in the early 20th century. For example, the Riesz representation theorem was independently established by Maurice Fréchet and Frigyes Riesz in 1907. John von Neumann coined the term abstract Hilbert space in his work on unbounded Hermitian operators. Although other mathematicians such as Hermann Weyl and Norbert Wiener had already studied particular Hilbert spaces in great detail, often from a physically motivated point of view, von Neumann gave the first complete and axiomatic treatment of them. Von Neumann later used them in his seminal work on the foundations of quantum mechanics, and in his continued work with Eugene Wigner. The name "Hilbert space" was soon adopted by others, for example by Hermann Weyl in his book on quantum mechanics and the theory of groups. The significance of the concept of a Hilbert space was underlined with the realization that it offers one of the best mathematical formulations of quantum mechanics. In short, the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are hermitian operators on that space, the symmetries of the system are unitary operators, and measurements are orthogonal projections. The relation between quantum mechanical symmetries and unitary operators provided an impetus for the development of the unitary representation theory of groups, initiated in the 1928 work of Hermann Weyl. On the other hand, in the early 1930s it became clear that classical mechanics can be described in terms of Hilbert space (Koopman–von Neumann classical mechanics) and that certain properties of classical dynamical systems can be analyzed using Hilbert space techniques in the framework of ergodic theory. The algebra of observables in quantum mechanics is naturally an algebra of operators defined on a Hilbert space, according to Werner Heisenberg's matrix mechanics formulation of quantum theory. Von Neumann began investigating operator algebras in the 1930s, as rings of operators on a Hilbert space. The kind of algebras studied by von Neumann and his contemporaries are now known as von Neumann algebras. In the 1940s, Israel Gelfand, Mark Naimark and Irving Segal gave a definition of a kind of operator algebras called C*-algebras that on the one hand made no reference to an underlying Hilbert space, and on the other extrapolated many of the useful features of the operator algebras that had previously been studied. The spectral theorem for self-adjoint operators in particular that underlies much of the existing Hilbert space theory was generalized to C*-algebras. These techniques are now basic in abstract harmonic analysis and representation theory. Examples Lebesgue spaces Lebesgue spaces are function spaces associated to measure spaces , where is a set, is a σ-algebra of subsets of , and is a countably additive measure on . 
Let be the space of those complex-valued measurable functions on for which the Lebesgue integral of the square of the absolute value of the function is finite, i.e., for a function in , and where functions are identified if and only if they differ only on a set of measure zero. The inner product of functions and in is then defined as or where the second form (conjugation of the first element) is commonly found in the theoretical physics literature. For and in , the integral exists because of the Cauchy–Schwarz inequality, and defines an inner product on the space. Equipped with this inner product, is in fact complete. The Lebesgue integral is essential to ensure completeness: on domains of real numbers, for instance, not enough functions are Riemann integrable. The Lebesgue spaces appear in many natural settings. The spaces and of square-integrable functions with respect to the Lebesgue measure on the real line and unit interval, respectively, are natural domains on which to define the Fourier transform and Fourier series. In other situations, the measure may be something other than the ordinary Lebesgue measure on the real line. For instance, if is any positive measurable function, the space of all measurable functions on the interval satisfying is called the weighted space , and is called the weight function. The inner product is defined by The weighted space is identical with the Hilbert space where the measure of a Lebesgue-measurable set is defined by Weighted spaces like this are frequently used to study orthogonal polynomials, because different families of orthogonal polynomials are orthogonal with respect to different weighting functions. Sobolev spaces Sobolev spaces, denoted by or , are Hilbert spaces. These are a special kind of function space in which differentiation may be performed, but that (unlike other Banach spaces such as the Hölder spaces) support the structure of an inner product. Because differentiation is permitted, Sobolev spaces are a convenient setting for the theory of partial differential equations. They also form the basis of the theory of direct methods in the calculus of variations. For a non-negative integer and , the Sobolev space contains functions whose weak derivatives of order up to are also . The inner product in is where the dot indicates the dot product in the Euclidean space of partial derivatives of each order. Sobolev spaces can also be defined when is not an integer. Sobolev spaces are also studied from the point of view of spectral theory, relying more specifically on the Hilbert space structure. If is a suitable domain, then one can define the Sobolev space as the space of Bessel potentials; roughly, Here is the Laplacian and is understood in terms of the spectral mapping theorem. Apart from providing a workable definition of Sobolev spaces for non-integer , this definition also has particularly desirable properties under the Fourier transform that make it ideal for the study of pseudodifferential operators. Using these methods on a compact Riemannian manifold, one can obtain for instance the Hodge decomposition, which is the basis of Hodge theory. Spaces of holomorphic functions Hardy spaces The Hardy spaces are function spaces, arising in complex analysis and harmonic analysis, whose elements are certain holomorphic functions in a complex domain. Let denote the unit disc in the complex plane. Then the Hardy space is defined as the space of holomorphic functions on such that the means remain bounded for . 
The norm on this Hardy space is defined by Hardy spaces in the disc are related to Fourier series. A function is in if and only if where Thus consists of those functions that are L2 on the circle, and whose negative frequency Fourier coefficients vanish. Bergman spaces The Bergman spaces are another family of Hilbert spaces of holomorphic functions. Let be a bounded open set in the complex plane (or a higher-dimensional complex space) and let be the space of holomorphic functions in that are also in in the sense that where the integral is taken with respect to the Lebesgue measure in . Clearly is a subspace of ; in fact, it is a closed subspace, and so a Hilbert space in its own right. This is a consequence of the estimate, valid on compact subsets of , that which in turn follows from Cauchy's integral formula. Thus convergence of a sequence of holomorphic functions in implies also compact convergence, and so the limit function is also holomorphic. Another consequence of this inequality is that the linear functional that evaluates a function at a point of is actually continuous on . The Riesz representation theorem implies that the evaluation functional can be represented as an element of . Thus, for every , there is a function such that for all . The integrand is known as the Bergman kernel of . This integral kernel satisfies a reproducing property A Bergman space is an example of a reproducing kernel Hilbert space, which is a Hilbert space of functions along with a kernel that verifies a reproducing property analogous to this one. The Hardy space also admits a reproducing kernel, known as the Szegő kernel. Reproducing kernels are common in other areas of mathematics as well. For instance, in harmonic analysis the Poisson kernel is a reproducing kernel for the Hilbert space of square-integrable harmonic functions in the unit ball. That the latter is a Hilbert space at all is a consequence of the mean value theorem for harmonic functions. Applications Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like projection and change of basis from their usual finite dimensional setting. In particular, the spectral theory of continuous self-adjoint linear operators on a Hilbert space generalizes the usual spectral decomposition of a matrix, and this often plays a major role in applications of the theory to other areas of mathematics and physics. Sturm–Liouville theory In the theory of ordinary differential equations, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the Sturm–Liouville problem arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in ordinary differential equations. The problem is a differential equation of the form for an unknown function on an interval , satisfying general homogeneous Robin boundary conditions The functions , , and are given in advance, and the problem is to find the function and constants for which the equation has a solution. The problem only has solutions for certain values of , called eigenvalues of the system, and this is a consequence of the spectral theorem for compact operators applied to the integral operator defined by the Green's function for the system. Furthermore, another consequence of this general result is that the eigenvalues of the system can be arranged in an increasing sequence tending to infinity. 
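A concrete instance (a standard textbook example, stated here for illustration rather than taken from the problem data above) is the constant-coefficient Sturm–Liouville problem with Dirichlet boundary conditions,

    -u''(x) = \lambda\, u(x), \qquad u(0) = u(\pi) = 0 .

Nontrivial solutions exist only for the eigenvalues \lambda_n = n^2, n = 1, 2, 3, \dots, with eigenfunctions u_n(x) = \sin(n x). These are pairwise orthogonal in the Hilbert space L^2(0, \pi), since

    \int_0^\pi \sin(m x)\, \sin(n x)\, dx = 0 \qquad (m \neq n),

and after rescaling by \sqrt{2/\pi} they form an orthonormal basis of L^2(0, \pi), exhibiting in the simplest setting the increasing sequence of eigenvalues described above.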
Partial differential equations Hilbert spaces form a basic tool in the study of partial differential equations. For many classes of partial differential equations, such as linear elliptic equations, it is possible to consider a generalized solution (known as a weak solution) by enlarging the class of functions. Many weak formulations involve the class of Sobolev functions, which is a Hilbert space. A suitable weak formulation reduces the analytic problem of finding a solution (or, often more importantly, showing that a solution exists and is unique for given boundary data) to a geometrical problem. For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the Lax–Milgram theorem. This strategy forms the rudiment of the Galerkin method (a finite element method) for numerical solution of partial differential equations. A typical example is the Poisson equation with Dirichlet boundary conditions in a bounded domain in . The weak formulation consists of finding a function such that, for all continuously differentiable functions in vanishing on the boundary: This can be recast in terms of the Hilbert space consisting of functions such that , along with its weak partial derivatives, are square integrable on , and vanish on the boundary. The question then reduces to finding in this space such that for all in this space where is a continuous bilinear form, and is a continuous linear functional, given respectively by Since the Poisson equation is elliptic, it follows from Poincaré's inequality that the bilinear form is coercive. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation. Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis. With suitable modifications, similar techniques can be applied to parabolic partial differential equations and certain hyperbolic partial differential equations. Ergodic theory The field of ergodic theory is the study of the long-term behavior of chaotic dynamical systems. The prototypical case of a field that ergodic theory applies to is thermodynamics, in which, though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter), the average behavior over sufficiently long time intervals is tractable. The laws of thermodynamics are assertions about such average behavior. In particular, one formulation of the zeroth law of thermodynamics asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of temperature. An ergodic dynamical system is one for which, apart from the energy (measured by the Hamiltonian), there are no other functionally independent conserved quantities on the phase space. More explicitly, suppose that the energy is fixed, and let be the subset of the phase space consisting of all states of energy (an energy surface), and let denote the evolution operator on the phase space. The dynamical system is ergodic if every invariant measurable function on is constant almost everywhere. An invariant function is one for which for all on and all time . Liouville's theorem implies that there exists a measure on the energy surface that is invariant under the time translation.
As a result, time translation is a unitary transformation of the Hilbert space consisting of square-integrable functions on the energy surface with respect to the inner product The von Neumann mean ergodic theorem states the following: If is a (strongly continuous) one-parameter semigroup of unitary operators on a Hilbert space , and is the orthogonal projection onto the space of common fixed points of , , then For an ergodic system, the fixed set of the time evolution consists only of the constant functions, so the ergodic theorem implies the following: for any function , That is, the long time average of an observable is equal to its expectation value over an energy surface. Fourier analysis One of the basic goals of Fourier analysis is to decompose a function into a (possibly infinite) linear combination of given basis functions: the associated Fourier series. The classical Fourier series associated to a function defined on the interval is a series of the form where The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths (for integer ) shorter than the wavelength of the sawtooth itself (except for , the fundamental wave). A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function . Hilbert space methods provide one possible answer to this question. The functions form an orthogonal basis of the Hilbert space . Consequently, any square-integrable function can be expressed as a series and, moreover, this series converges in the Hilbert space sense (that is, in the mean). The problem can also be studied from the abstract point of view: every Hilbert space has an orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space. The abstraction is especially useful when it is more natural to use different basis functions for a space such as . In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into orthogonal polynomials or wavelets for instance, and in higher dimensions into spherical harmonics. For instance, if are any orthonormal basis functions of , then a given function in can be approximated as a finite linear combination The coefficients are selected to make the magnitude of the difference as small as possible. Geometrically, the best approximation is the orthogonal projection of onto the subspace consisting of all linear combinations of the , and can be calculated by That this formula minimizes the difference is a consequence of Bessel's inequality and Parseval's formula. In various applications to physical problems, a function can be decomposed into physically meaningful eigenfunctions of a differential operator (typically the Laplace operator): this forms the foundation for the spectral study of functions, in reference to the spectrum of the differential operator. A concrete physical application involves the problem of hearing the shape of a drum: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself? 
The mathematical formulation of this question involves the Dirichlet eigenvalues of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string. Spectral theory also underlies certain aspects of the Fourier transform of a function. Whereas Fourier analysis decomposes a function defined on a compact set into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the continuous spectrum of the Laplacian. The Fourier transformation is also geometrical, in a sense made precise by the Plancherel theorem, that asserts that it is an isometry of one Hilbert space (the "time domain") with another (the "frequency domain"). This isometry property of the Fourier transformation is a recurring theme in abstract harmonic analysis (since it reflects the conservation of energy for the continuous Fourier Transform), as evidenced for instance by the Plancherel theorem for spherical functions occurring in noncommutative harmonic analysis. Quantum mechanics In the mathematically rigorous formulation of quantum mechanics, developed by John von Neumann, the possible states (more precisely, the pure states) of a quantum mechanical system are represented by unit vectors (called state vectors) residing in a complex separable Hilbert space, known as the state space, well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the position and momentum states for a single non-relativistic spin zero particle is the space of all square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of spinors. Each observable is represented by a self-adjoint linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. The inner product between two state vectors is a complex number known as a probability amplitude. During an ideal measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitudes between the initial and final states. The possible results of a measurement are the eigenvalues of the operator—which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by density matrices: self-adjoint operators of trace one on a Hilbert space. Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a positive operator valued measure. 
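These ingredients can be made concrete in the smallest nontrivial case, the two-dimensional Hilbert space of a spin-1/2 system (a qubit). The following sketch in Python with NumPy is purely illustrative; the choice of state, the Pauli-Z observable, and all variable names are assumptions made for the example, not data from the text above.

    import numpy as np

    # A pure state: a unit vector in the two-dimensional complex Hilbert space C^2.
    psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

    # An observable: the self-adjoint (Hermitian) Pauli-Z matrix, with real eigenvalues +1 and -1.
    Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
    assert np.allclose(Z, Z.conj().T)                 # self-adjointness
    eigenvalues, eigenvectors = np.linalg.eigh(Z)     # possible measurement outcomes and eigenstates

    # Measurement probabilities: squared absolute values of the probability amplitudes.
    amplitudes = eigenvectors.conj().T @ psi
    probabilities = np.abs(amplitudes) ** 2           # here [0.5, 0.5]

    # A mixed state: a self-adjoint, positive density matrix of trace one.
    rho = 0.5 * np.outer(psi, psi.conj()) + 0.5 * np.diag([1.0, 0.0])
    assert np.isclose(np.trace(rho).real, 1.0)
    expectation_Z = np.trace(rho @ Z).real            # expectation value of the observable in the mixed state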
Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states. Probability theory In probability theory, Hilbert spaces also have diverse applications. Here a fundamental Hilbert space is the space of random variables on a given probability space, having class (finite first and second moments). A common operation in statistics is that of centering a random variable by subtracting its expectation. Thus if is a random variable, then is its centering. In the Hilbert space view, this is the orthogonal projection of onto the kernel of the expectation operator, which is a continuous linear functional on the Hilbert space (in fact, the inner product with the constant random variable 1), and so this kernel is a closed subspace. The conditional expectation has a natural interpretation in the Hilbert space. Suppose that a probability space is given, where is a sigma algebra on the set , and is a probability measure on the measure space . If is a sigma subalgebra of , then the conditional expectation is the orthogonal projection of onto the subspace of consisting of the -measurable functions. If the random variable in is independent of the sigma algebra then its conditional expectation, i.e., its projection onto the -measurable functions, is constant. Equivalently, the projection of its centering is zero. In particular, if two random variables and (in ) are independent, then the centered random variables and are orthogonal. (This means that the two variables have zero covariance: they are uncorrelated.) In that case, the Pythagorean theorem in the kernel of the expectation operator implies that the variances of and satisfy an identity sometimes called the Pythagorean theorem of statistics, which is of importance in linear regression. As one source puts it, "the analysis of variance may be viewed as the decomposition of the squared length of a vector into the sum of the squared lengths of several vectors, using the Pythagorean Theorem." The theory of martingales can be formulated in Hilbert spaces. A martingale in a Hilbert space is a sequence of elements of a Hilbert space such that, for each , is the orthogonal projection of onto the linear hull of . If the are random variables, this reproduces the usual definition of a (discrete) martingale: the expectation of , conditioned on , is equal to . Hilbert spaces are also used throughout the foundations of the Itô calculus. To any square-integrable martingale, it is possible to associate a Hilbert norm on the space of equivalence classes of progressively measurable processes with respect to the martingale (using the quadratic variation of the martingale as the measure). The Itô integral can be constructed by first defining it for simple processes, and then exploiting their density in the Hilbert space. A noteworthy result is then the Itô isometry, which attests that for any martingale M having quadratic variation measure , and any progressively measurable process H: whenever the expectation on the right-hand side is finite. A deeper application of Hilbert spaces that is especially important in the theory of Gaussian processes is an attempt, due to Leonard Gross and others, to make sense of certain formal integrals over infinite dimensional spaces like the Feynman path integral from quantum field theory. The problem with integrals like this is that there is no infinite dimensional Lebesgue measure.
The notion of an abstract Wiener space allows one to construct a measure on a Banach space that contains a Hilbert space , called the Cameron–Martin space, as a dense subset, out of a finitely additive cylinder set measure on . The resulting measure on is countably additive and invariant under translation by elements of , and this provides a mathematically rigorous way of thinking of the Wiener measure as a Gaussian measure on the Sobolev space . Color perception Any true physical color can be represented by a combination of pure spectral colors. As physical colors can be composed of any number of spectral colors, the space of physical colors may aptly be represented by a Hilbert space over spectral colors. Humans have three types of cone cells for color perception, so the perceivable colors can be represented by 3-dimensional Euclidean space. The many-to-one linear mapping from the Hilbert space of physical colors to the Euclidean space of human perceivable colors explains why many distinct physical colors may be perceived by humans to be identical (e.g., pure yellow light versus a mix of red and green light, see Metamerism). Properties Pythagorean identity Two vectors and in a Hilbert space are orthogonal when . The notation for this is . More generally, when is a subset in , the notation means that is orthogonal to every element from . When and are orthogonal, one has By induction on , this is extended to any family of orthogonal vectors, Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required for the extension of the Pythagorean identity to series. A series of orthogonal vectors converges in if and only if the series of squares of norms converges, and Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken. Parallelogram identity and polarization By definition, every Hilbert space is also a Banach space. Furthermore, in every Hilbert space the following parallelogram identity holds: Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm by the polarization identity. For real Hilbert spaces, the polarization identity is For complex Hilbert spaces, it is The parallelogram law implies that any Hilbert space is a uniformly convex Banach space. Best approximation This subsection employs the Hilbert projection theorem. If is a non-empty closed convex subset of a Hilbert space and a point in , there exists a unique point that minimizes the distance between and points in , This is equivalent to saying that there is a point with minimal norm in the translated convex set . The proof consists in showing that every minimizing sequence is Cauchy (using the parallelogram identity) hence converges (using completeness) to a point in that has minimal norm. More generally, this holds in any uniformly convex Banach space. When this result is applied to a closed subspace of , it can be shown that the point closest to is characterized by This point is the orthogonal projection of onto , and the mapping is linear (see ). This result is especially significant in applied mathematics, especially numerical analysis, where it forms the basis of least squares methods. In particular, when is not equal to , one can find a nonzero vector orthogonal to (select and ). A very useful criterion is obtained by applying this observation to the closed subspace generated by a subset of . 
A subset of spans a dense vector subspace if (and only if) the vector 0 is the sole vector orthogonal to . Duality The dual space is the space of all continuous linear functions from the space into the base field. It carries a natural norm, defined by This norm satisfies the parallelogram law, and so the dual space is also an inner product space where this inner product can be defined in terms of this dual norm by using the polarization identity. The dual space is also complete so it is a Hilbert space in its own right. If is a complete orthonormal basis for then the inner product on the dual space of any two is where all but countably many of the terms in this series are zero. The Riesz representation theorem affords a convenient description of the dual space. To every element of , there is a unique element of , defined by where moreover, The Riesz representation theorem states that the map from to defined by is surjective, which makes this map an isometric antilinear isomorphism. So to every element of the dual there exists one and only one in such that for all . The inner product on the dual space satisfies The reversal of order on the right-hand side restores linearity in from the antilinearity of . In the real case, the antilinear isomorphism from to its dual is actually an isomorphism, and so real Hilbert spaces are naturally isomorphic to their own duals. The representing vector is obtained in the following way. When , the kernel is a closed vector subspace of , not equal to , hence there exists a nonzero vector orthogonal to . The vector is a suitable scalar multiple of . The requirement that yields This correspondence is exploited by the bra–ket notation popular in physics. It is common in physics to assume that the inner product, denoted by , is linear on the right, The result can be seen as the action of the linear functional (the bra) on the vector (the ket). The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space. In fact, the theorem implies that the topological dual of any inner product space can be identified with its completion. An immediate consequence of the Riesz representation theorem is also that a Hilbert space is reflexive, meaning that the natural map from into its double dual space is an isomorphism. Weakly convergent sequences In a Hilbert space , a sequence is weakly convergent to a vector when for every . For example, any orthonormal sequence converges weakly to 0, as a consequence of Bessel's inequality. Every weakly convergent sequence is bounded, by the uniform boundedness principle. Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences (Alaoglu's theorem). This fact may be used to prove minimization results for continuous convex functionals, in the same way that the Bolzano–Weierstrass theorem is used for continuous functions on . Among several variants, one simple statement is as follows: If is a convex continuous function such that tends to when tends to , then admits a minimum at some point . This fact (and its various generalizations) are fundamental for direct methods in the calculus of variations. Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets in a Hilbert space are weakly compact, since is reflexive. The existence of weakly convergent subsequences is a special case of the Eberlein–Šmulian theorem. 
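The orthonormal example of weak convergence can be checked numerically. The sketch below (in Python with NumPy) uses the sine system in L^2(0, \pi) and an arbitrarily chosen fixed function f; the discretization, the particular f, and the variable names are assumptions of the illustration.

    import numpy as np

    # Discretize the interval (0, pi) and fix one square-integrable function f.
    x = np.linspace(0.0, np.pi, 20001)
    dx = x[1] - x[0]
    f = x * (np.pi - x)

    # e_n(x) = sqrt(2/pi) sin(n x) is an orthonormal sequence in L^2(0, pi).
    for n in (1, 5, 25, 125):
        e_n = np.sqrt(2.0 / np.pi) * np.sin(n * x)
        inner_product = np.sum(e_n * f) * dx          # <e_n, f> tends to 0: weak convergence to the zero vector
        norm = np.sqrt(np.sum(e_n ** 2) * dx)         # remains approximately 1, so there is no convergence in norm
        print(n, round(inner_product, 6), round(norm, 6))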
Banach space properties Any general property of Banach spaces continues to hold for Hilbert spaces. The open mapping theorem states that a continuous surjective linear transformation from one Banach space to another is an open mapping meaning that it sends open sets to open sets. A corollary is the bounded inverse theorem, that a continuous and bijective linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous). This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces. The open mapping theorem is equivalent to the closed graph theorem, which asserts that a linear function from one Banach space to another is continuous if and only if its graph is a closed set. In the case of Hilbert spaces, this is basic in the study of unbounded operators (see Closed operator). The (geometrical) Hahn–Banach theorem asserts that a closed convex set can be separated from any point outside it by means of a hyperplane of the Hilbert space. This is an immediate consequence of the best approximation property: if is the element of a closed convex set closest to , then the separating hyperplane is the plane perpendicular to the segment passing through its midpoint. Operators on Hilbert spaces Bounded operators The continuous linear operators from a Hilbert space to a second Hilbert space are bounded in the sense that they map bounded sets to bounded sets. Conversely, if an operator is bounded, then it is continuous. The space of such bounded linear operators has a norm, the operator norm given by The sum and the composite of two bounded linear operators is again bounded and linear. For y in H2, the map that sends to is linear and continuous, and according to the Riesz representation theorem can therefore be represented in the form for some vector in . This defines another bounded linear operator , the adjoint of . The adjoint satisfies . When the Riesz representation theorem is used to identify each Hilbert space with its continuous dual space, the adjoint of can be shown to be identical to the transpose of , which by definition sends to the functional The set of all bounded linear operators on (meaning operators ), together with the addition and composition operations, the norm and the adjoint operation, is a C*-algebra, which is a type of operator algebra. An element of is called 'self-adjoint' or 'Hermitian' if . If is Hermitian and for every , then is called 'nonnegative', written ; if equality holds only when , then is called 'positive'. The set of self adjoint operators admits a partial order, in which if . If has the form for some , then is nonnegative; if is invertible, then is positive. A converse is also true in the sense that, for a non-negative operator , there exists a unique non-negative square root such that In a sense made precise by the spectral theorem, self-adjoint operators can usefully be thought of as operators that are "real". An element of is called normal if . Normal operators decompose into the sum of a self-adjoint operator and an imaginary multiple of a self adjoint operator that commute with each other. Normal operators can also usefully be thought of in terms of their real and imaginary parts. An element of is called unitary if is invertible and its inverse is given by . This can also be expressed by requiring that be onto and for all . The unitary operators form a group under composition, which is the isometry group of . 
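In finite dimensions the adjoint is simply the conjugate transpose, and the defining relation can be verified numerically. The following sketch (Python with NumPy; the random matrix, the vectors, and the variable names are illustrative assumptions) checks the adjoint identity and the fact that an operator of the form A*A is self-adjoint and nonnegative, with real, nonnegative eigenvalues.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

    # The adjoint of A is its conjugate transpose, characterized by <Ax, y> = <x, A*y>.
    A_adj = A.conj().T
    lhs = np.vdot(A @ x, y)          # np.vdot conjugates its first argument
    rhs = np.vdot(x, A_adj @ y)
    assert np.isclose(lhs, rhs)

    # A*A is self-adjoint and nonnegative, so its eigenvalues are real and nonnegative.
    H = A_adj @ A
    eigenvalues = np.linalg.eigvalsh(H)
    assert np.all(eigenvalues >= -1e-12)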
An element of is compact if it sends bounded sets to relatively compact sets. Equivalently, a bounded operator is compact if, for any bounded sequence , the sequence has a convergent subsequence. Many integral operators are compact, and in fact define a special class of operators known as Hilbert–Schmidt operators that are especially important in the study of integral equations. Fredholm operators differ from a compact operator by a multiple of the identity, and are equivalently characterized as operators with a finite dimensional kernel and cokernel. The index of a Fredholm operator is defined by The index is homotopy invariant, and plays a deep role in differential geometry via the Atiyah–Singer index theorem. Unbounded operators Unbounded operators are also tractable in Hilbert spaces, and have important applications to quantum mechanics. An unbounded operator on a Hilbert space is defined as a linear operator whose domain is a linear subspace of . Often the domain is a dense subspace of , in which case is known as a densely defined operator. The adjoint of a densely defined unbounded operator is defined in essentially the same manner as for bounded operators. Self-adjoint unbounded operators play the role of the observables in the mathematical formulation of quantum mechanics. Examples of self-adjoint unbounded operators on the Hilbert space are: A suitable extension of the differential operator where is the imaginary unit and is a differentiable function of compact support. The multiplication-by- operator: These correspond to the momentum and position observables, respectively. Neither nor is defined on all of , since in the case of the derivative need not exist, and in the case of the product function need not be square integrable. In both cases, the set of possible arguments form dense subspaces of . Constructions Direct sums Two Hilbert spaces and can be combined into another Hilbert space, called the (orthogonal) direct sum, and denoted consisting of the set of all ordered pairs where , , and inner product defined by More generally, if is a family of Hilbert spaces indexed by , then the direct sum of the , denoted consists of the set of all indexed families in the Cartesian product of the such that The inner product is defined by Each of the is included as a closed subspace in the direct sum of all of the . Moreover, the are pairwise orthogonal. Conversely, if there is a system of closed subspaces, , , in a Hilbert space , that are pairwise orthogonal and whose union is dense in , then is canonically isomorphic to the direct sum of . In this case, is called the internal direct sum of the . A direct sum (internal or external) is also equipped with a family of orthogonal projections onto the th direct summand . These projections are bounded, self-adjoint, idempotent operators that satisfy the orthogonality condition The spectral theorem for compact self-adjoint operators on a Hilbert space states that splits into an orthogonal direct sum of the eigenspaces of an operator, and also gives an explicit decomposition of the operator as a sum of projections onto the eigenspaces. The direct sum of Hilbert spaces also appears in quantum mechanics as the Fock space of a system containing a variable number of particles, where each Hilbert space in the direct sum corresponds to an additional degree of freedom for the quantum mechanical system. 
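In explicit form, the inner product on a two-fold direct sum is the standard one,

    \langle (x_1, x_2), (y_1, y_2) \rangle_{H_1 \oplus H_2} = \langle x_1, y_1 \rangle_{H_1} + \langle x_2, y_2 \rangle_{H_2},

so that \|(x_1, x_2)\|^2 = \|x_1\|^2 + \|x_2\|^2; for an infinite direct sum the same formula is used componentwise, with membership requiring that the sum of the squared norms of the components be finite.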
In representation theory, the Peter–Weyl theorem guarantees that any unitary representation of a compact group on a Hilbert space splits as the direct sum of finite-dimensional representations. Tensor products If and , then one defines an inner product on the (ordinary) tensor product as follows. On simple tensors, let This formula then extends by sesquilinearity to an inner product on . The Hilbertian tensor product of and , sometimes denoted by , is the Hilbert space obtained by completing for the metric associated to this inner product. An example is provided by the Hilbert space . The Hilbertian tensor product of two copies of is isometrically and linearly isomorphic to the space of square-integrable functions on the square . This isomorphism sends a simple tensor to the function on the square. This example is typical in the following sense. Associated to every simple tensor product is the rank one operator from to that maps a given as This mapping defined on simple tensors extends to a linear identification between and the space of finite rank operators from to . This extends to a linear isometry of the Hilbertian tensor product with the Hilbert space of Hilbert–Schmidt operators from to . Orthonormal bases The notion of an orthonormal basis from linear algebra generalizes to the case of Hilbert spaces. In a Hilbert space , an orthonormal basis is a family of elements of satisfying the conditions: Orthogonality: Every two different elements of are orthogonal: for all with . Normalization: Every element of the family has norm 1: for all . Completeness: The linear span of the family , , is dense in H. A system of vectors satisfying the first two conditions is called an orthonormal system or an orthonormal set (or an orthonormal sequence if is countable). Such a system is always linearly independent. Despite the name, an orthonormal basis is not, in general, a basis in the sense of linear algebra (Hamel basis). More precisely, an orthonormal basis is a Hamel basis if and only if the Hilbert space is a finite-dimensional vector space. Completeness of an orthonormal system of vectors of a Hilbert space can be equivalently restated as: for every , if for all , then . This is related to the fact that the only vector orthogonal to a dense linear subspace is the zero vector, for if is any orthonormal set and is orthogonal to , then is orthogonal to the closure of the linear span of , which is the whole space. Examples of orthonormal bases include: the set forms an orthonormal basis of with the dot product; the sequence with forms an orthonormal basis of the complex space ; In the infinite-dimensional case, an orthonormal basis will not be a basis in the sense of linear algebra; to distinguish the two, the latter basis is also called a Hamel basis. That the span of the basis vectors is dense implies that every vector in the space can be written as the sum of an infinite series, and the orthogonality implies that this decomposition is unique. Sequence spaces The space of square-summable sequences of complex numbers is the set of infinite sequences of real or complex numbers such that This space has an orthonormal basis: This space is the infinite-dimensional generalization of the space of finite-dimensional vectors. It is usually the first example used to show that in infinite-dimensional spaces, a set that is closed and bounded is not necessarily (sequentially) compact (as is the case in all finite dimensional spaces).
Indeed, the set of orthonormal vectors above shows this: It is an infinite sequence of vectors in the unit ball (i.e., the ball of points with norm less than or equal to one). This set is clearly bounded and closed; yet, no subsequence of these vectors converges to anything, and consequently the unit ball in is not compact. Intuitively, this is because "there is always another coordinate direction" into which the next elements of the sequence can evade. One can generalize the space in many ways. For example, if is any set, then one can form a Hilbert space of sequences with index set , defined by The summation over B is here defined as the supremum of the corresponding finite partial sums, taken over all finite subsets of . It follows that, for this sum to be finite, every element of has only countably many nonzero terms. This space becomes a Hilbert space with the inner product for all . Here the sum also has only countably many nonzero terms, and is unconditionally convergent by the Cauchy–Schwarz inequality. An orthonormal basis of is indexed by the set , given by Bessel's inequality and Parseval's formula Let be a finite orthonormal system in . For an arbitrary vector , let Then for every . It follows that is orthogonal to each , hence is orthogonal to . Using the Pythagorean identity twice, it follows that Let , be an arbitrary orthonormal system in . Applying the preceding inequality to every finite subset of gives Bessel's inequality: (according to the definition of the sum of an arbitrary family of non-negative real numbers). Geometrically, Bessel's inequality implies that the orthogonal projection of onto the linear subspace spanned by the has norm that does not exceed that of . In two dimensions, this is the assertion that the length of the leg of a right triangle may not exceed the length of the hypotenuse. Bessel's inequality is a stepping stone to the stronger result called Parseval's identity, which governs the case when Bessel's inequality is actually an equality. By definition, if is an orthonormal basis of , then every element of may be written as Even if is uncountable, Bessel's inequality guarantees that the expression is well-defined and consists only of countably many nonzero terms. This sum is called the Fourier expansion of , and the individual coefficients are the Fourier coefficients of . Parseval's identity then asserts that Conversely, if is an orthonormal set such that Parseval's identity holds for every , then is an orthonormal basis. Hilbert dimension As a consequence of Zorn's lemma, every Hilbert space admits an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality, called the Hilbert dimension of the space. For instance, since has an orthonormal basis indexed by , its Hilbert dimension is the cardinality of (which may be a finite integer, or a countable or uncountable cardinal number). The Hilbert dimension is not greater than the Hamel dimension (the usual dimension of a vector space). The two dimensions are equal if and only if one of them is finite. As a consequence of Parseval's identity, if is an orthonormal basis of , then the map defined by is an isometric isomorphism of Hilbert spaces: it is a bijective linear mapping such that for all . The cardinal number of is the Hilbert dimension of . Thus every Hilbert space is isometrically isomorphic to a sequence space for some set . Separable spaces By definition, a Hilbert space is separable provided it contains a dense countable subset.
Along with Zorn's lemma, this means a Hilbert space is separable if and only if it admits a countable orthonormal basis. All infinite-dimensional separable Hilbert spaces are therefore isometrically isomorphic to the square-summable sequence space In the past, Hilbert spaces were often required to be separable as part of the definition. In quantum field theory Most spaces used in physics are separable, and since these are all isomorphic to each other, one often refers to any infinite-dimensional separable Hilbert space as "the Hilbert space" or just "Hilbert space". Even in quantum field theory, most of the Hilbert spaces are in fact separable, as stipulated by the Wightman axioms. However, it is sometimes argued that non-separable Hilbert spaces are also important in quantum field theory, roughly because the systems in the theory possess an infinite number of degrees of freedom and any infinite Hilbert tensor product (of spaces of dimension greater than one) is non-separable. For instance, a bosonic field can be naturally thought of as an element of a tensor product whose factors represent harmonic oscillators at each point of space. From this perspective, the natural state space of a boson might seem to be a non-separable space. However, it is only a small separable subspace of the full tensor product that can contain physically meaningful fields (on which the observables can be defined). Another non-separable Hilbert space models the state of an infinite collection of particles in an unbounded region of space. An orthonormal basis of the space is indexed by the density of the particles, a continuous parameter, and since the set of possible densities is uncountable, the basis is not countable. Orthogonal complements and projections If is a subset of a Hilbert space , the set of vectors orthogonal to is defined by The set is a closed subspace of (as can be proved easily using the linearity and continuity of the inner product) and so is itself a Hilbert space. If is a closed subspace of , then is called the orthogonal complement of . In fact, every can then be written uniquely as , with and . Therefore, is the internal Hilbert direct sum of and . The linear operator that maps to is called the orthogonal projection onto . There is a natural one-to-one correspondence between the set of all closed subspaces of and the set of all bounded self-adjoint operators such that . Specifically, This provides the geometrical interpretation of : it is the best approximation to x by elements of V. Projections and are called mutually orthogonal if . This is equivalent to and being orthogonal as subspaces of . The sum of the two projections and is a projection only if and are orthogonal to each other, and in that case . The composite is generally not a projection; in fact, the composite is a projection if and only if the two projections commute, and in that case . By restricting the codomain to the Hilbert space , the orthogonal projection gives rise to a projection mapping ; it is the adjoint of the inclusion mapping meaning that for all and . The operator norm of the orthogonal projection onto a nonzero closed subspace is equal to 1: Every closed subspace V of a Hilbert space is therefore the image of an operator of norm one such that . The property of possessing appropriate projection operators characterizes Hilbert spaces: A Banach space of dimension higher than 2 is (isometrically) a Hilbert space if and only if, for every closed subspace , there is an operator of norm one whose image is such that .
While this result characterizes the metric structure of a Hilbert space, the structure of a Hilbert space as a topological vector space can itself be characterized in terms of the presence of complementary subspaces: A Banach space is topologically and linearly isomorphic to a Hilbert space if and only if, to every closed subspace , there is a closed subspace such that is equal to the internal direct sum . The orthogonal complement satisfies some more elementary results. It is a monotone function in the sense that if , then with equality holding if and only if is contained in the closure of . This result is a special case of the Hahn–Banach theorem. The closure of a subspace can be completely characterized in terms of the orthogonal complement: if is a subspace of , then the closure of is equal to . The orthogonal complement is thus a Galois connection on the partial order of subspaces of a Hilbert space. In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements: If the are in addition closed, then Spectral theory There is a well-developed spectral theory for self-adjoint operators in a Hilbert space, that is roughly analogous to the study of symmetric matrices over the reals or self-adjoint matrices over the complex numbers. In the same sense, one can obtain a "diagonalization" of a self-adjoint operator as a suitable sum (actually an integral) of orthogonal projection operators. The spectrum of an operator , denoted , is the set of complex numbers such that lacks a continuous inverse. If is bounded, then the spectrum is always a compact set in the complex plane, and lies inside the disc . If is self-adjoint, then the spectrum is real. In fact, it is contained in the interval where Moreover, and are both actually contained within the spectrum. The eigenspaces of an operator are given by Unlike with finite matrices, not every element of the spectrum of must be an eigenvalue: the linear operator may only lack an inverse because it is not surjective. Elements of the spectrum of an operator in the general sense are known as spectral values. Since spectral values need not be eigenvalues, the spectral decomposition is often more subtle than in finite dimensions. However, the spectral theorem of a self-adjoint operator takes a particularly simple form if, in addition, is assumed to be a compact operator. The spectral theorem for compact self-adjoint operators states: A compact self-adjoint operator has only countably (or finitely) many spectral values. The spectrum of has no limit point in the complex plane except possibly zero. The eigenspaces of decompose into an orthogonal direct sum: Moreover, if denotes the orthogonal projection onto the eigenspace , then where the sum converges with respect to the norm on . This theorem plays a fundamental role in the theory of integral equations, as many integral operators are compact, in particular those that arise from Hilbert–Schmidt operators. The general spectral theorem for self-adjoint operators involves a kind of operator-valued Riemann–Stieltjes integral, rather than an infinite summation. The spectral family associated to associates to each real number λ an operator , which is the projection onto the nullspace of the operator , where the positive part of a self-adjoint operator is defined by The operators are monotone increasing relative to the partial order defined on self-adjoint operators; the eigenvalues correspond precisely to the jump discontinuities. 
One has the spectral theorem, which asserts The integral is understood as a Riemann–Stieltjes integral, convergent with respect to the norm on . In particular, one has the ordinary scalar-valued integral representation A somewhat similar spectral decomposition holds for normal operators, although because the spectrum may now contain non-real complex numbers, the operator-valued Stieltjes measure must instead be replaced by a resolution of the identity. A major application of spectral methods is the spectral mapping theorem, which allows one to apply to a self-adjoint operator any continuous complex function defined on the spectrum of by forming the integral The resulting continuous functional calculus has applications in particular to pseudodifferential operators. The spectral theory of unbounded self-adjoint operators is only marginally more difficult than for bounded operators. The spectrum of an unbounded operator is defined in precisely the same way as for bounded operators: is a spectral value if the resolvent operator fails to be a well-defined continuous operator. The self-adjointness of still guarantees that the spectrum is real. Thus the essential idea of working with unbounded operators is to look instead at the resolvent where is nonreal. This is a bounded normal operator, which admits a spectral representation that can then be transferred to a spectral representation of itself. A similar strategy is used, for instance, to study the spectrum of the Laplace operator: rather than address the operator directly, one instead looks at an associated resolvent such as a Riesz potential or Bessel potential. A precise version of the spectral theorem is available in this case as well. There is also a version of the spectral theorem that applies to unbounded normal operators. In popular culture In Gravity's Rainbow (1973), a novel by Thomas Pynchon, one of the characters is called "Sammy Hilbert-Spaess", a pun on "Hilbert Space". The novel refers also to Gödel's incompleteness theorems. External links Hilbert space at Mathworld 245B, notes 5: Hilbert spaces by Terence Tao Functional analysis Linear algebra Operator theory Space
Hilbert space
Physics,Mathematics
13,221
28,306,559
https://en.wikipedia.org/wiki/NAS%20Award%20in%20Molecular%20Biology
The NAS Award in Molecular Biology is awarded by the U.S. National Academy of Sciences "for recent notable discovery in molecular biology by a young scientist who is a citizen of the United States." It has been awarded annually since its inception in 1962. List of NAS Award in Molecular Biology winners Source: NAS 1962 Marshall Nirenberg for his studies of the molecular mechanisms for the biosynthesis of protein. 1963 Matthew Meselson for his leading role in developing and applying methods to measure the transmission of genetic information in the cell. 1964 Charles Yanofsky for his achievements in demonstrating how changes in the gene produce changes in the way protein is made in the body. 1965 Robert Stuart Edgar for his development and application of the method of "conditional lethal mutants" for the analysis of the genetic control of morphogenesis at the molecular level. 1966 Norton D. Zinder for his discovery of RNA bacteriophages, a new class of bacteria-attacking viruses, which have provided researchers with a highly valuable and convenient method of studying fundamental processes in all living cells. 1967 Robert W. Holley for his elucidation of the full sequence of nucleotides in the molecule of a soluble RNA. 1968 Walter Gilbert for his signal contribution to the understanding of the regulatory mechanisms operative in genetic control of protein synthesis. 1969 for his genetic dissection of the mechanism of assembly of the bacterial virus particle and reconstruction of the virus in vitro. 1970 A. Dale Kaiser for his discovery that pure phage lambda DNA can infect susceptible bacterial cells and produce progeny, and for the effect of this discovery on the whole field of bacterial virus genetics. 1971 Masayasu Nomura for his studies on the structure and function of ribosomes and their molecular components. 1972 Howard M. Temin for his work leading to the discovery of reverse transcription. 1973 Donald D. Brown for his studies of the structure, regulation, and evolution of genes in animals, particularly the genes specifying ribosomal RNA in Xenopus and silk fibroin in Bombyx. 1974 David Baltimore for his distinguished leadership in virus research, and for his discoveries on the reproduction and enzymology of RNA viruses that have greatly advanced the science of molecular biology. 1975 Bruce Alberts for the isolation of proteins required for DNA replication and genetic recombination and the elucidation of how they interact with DNA. 1976 Daniel Nathans for his innovative use of molecular and cell biological tools to analyze the genome of an oncogenic virus. 1977 for his contributions to the understanding of eukaryotic, viral, and cellular messenger RNAs. 1978 Günter Blobel for elucidating mechanisms of passage of secreted proteins into and across membranes. 1979 Mark Ptashne for his outstanding contributions to our understanding of gene regulation through the studies of the virus Lambda. 1980 Phillip A. Sharp for his pioneering and continuing contributions to our understanding of messenger RNA biogenesis in mammalian cells. 1981 Ronald W. Davis and Gerald Fink for their outstanding contributions to the molecular biology of the simple eukaryote Saccharomyces cerevisiae. Both have opened vistas of genetic analysis by the development of new methods, in particular, the development and utilization of molecular cloning in yeast. 1982 Joan A.
Steitz for contributing to our understanding of how RNA molecules are recognized by enzymes and discovering the roles played by small ribonucleoprotein molecules in RNA processing. 1983 James C. Wang for his ingenious studies of the topological properties of the DNA double helix and his discovery of the important class of enzymes known as DNA topoisomerases. 1984 Geoffrey M. Cooper and Robert A. Weinberg for the identification and characterization of cellular oncogenes of human and animal tumors, thereby providing seminal insights into the mechanisms of carcinogenesis. 1985 Gerald M. Rubin and Allan C. Spradling for adding a new dimension to eukaryotic genetics and developmental biology by developing a method to introduce and stably integrate cloned genes into the germ cells of living Drosophila. 1986 Robert G. Roeder for his pioneering studies of eukaryotic RNA polymerases and the factors that regulate their activity. 1987 Thomas R. Cech for the astonishing discovery of RNA-catalyzed self-splicing of introns and the analysis of the chemistry of RNA-catalyzed reactions. 1988 H. Robert Horvitz for significant contributions to the genetic analysis of the development of cell lineages in the nematode Caenorhabditis elegans. 1989 Kiyoshi Mizuuchi for bringing about remarkable advances in our understanding of transposition and other forms of genetic recombination. 1990 Elizabeth H. Blackburn for her discovery of the nature of DNA at the ends of eukaryotic chromosomes and the enzyme that is necessary to complete chromosomal replication. 1991 Steven McKnight and Robert Tjian for advancing our understanding of transcriptional regulation by devising novel strategies and applying elegant biochemistry to reveal fundamental mechanisms underlying gene expression and development. 1992 Bruce S. Baker and for their creative use of genetics and molecular biology to define how sex is determined in Drosophila. Their experiments have shown how the ratio of sex chromosomes to autosomes can initiate a novel regulatory pathway involving RNA processing. 1993 Peter S. Kim for his pathfinding research in structural biology, which has elucidated both the pathway of protein folding and mechanisms of macromolecular recognition. 1994 Gerald F. Joyce and Jack W. Szostak for independently developing in vitro evolution of RNA catalysts. Their work produced RNA enzymes with novel specificities, while illuminating our view of natural selection. 1995 for his elucidation, by experiments elegant in their simplicity, of the relationship between the ends of yeast chromosomes and transcriptional silencing. 1996 Michael S. Levine for his insightful contributions to our understanding of gene regulation networks and molecular mechanisms governing the development of organisms with a segmented body plan. 1997 Richard H. Scheller and Thomas C. Südhof for their performance of elegant experiments to resolve the molecular components responsible for controlling neurotransmitter vesicle release and chemical communication within the nervous system. 1998 Philip Beachy for his studies of a developmental morphogen, its processing and structure, and its covalent attachment to cholesterol. 1999 Clifford Tabin for his contributions in analyzing genes that establish asymmetric body patterns and control limb development in vertebrates. 2000 Patrick O. Brown for his intellectual leadership in functional genomics, most notably the development of a reliable and accessible DNA microarray system to measure genome-wide gene expression. 2001 Erin K. 
O'Shea for contributions to our understanding of signal transduction, regulation of protein movement into and out of the nucleus, and how phosphorylation controls protein activity. 2002 Stephen J. Elledge for his innovative contributions at the forefront of the field of cell cycle checkpoints and his elucidation of pathways and mechanisms involved in DNA damage responses. 2003 Andrew Z. Fire and Craig C. Mello for inventing methods to inactivate genes by RNA interference and helping to elucidate their underlying mechanism and biological function. 2004 Xiaodong Wang for his biochemical studies of apoptosis which have resolved a molecular pathway leading in and out of the mitochondrion. 2005 David Bartel for his discoveries on the repertoire of catalytic RNA and the analysis of micro RNA genes and their targets. 2006 Ronald Breaker and for establishing a new mode of regulation of gene expression in which metabolites regulate the activity of their cognate pathways by directly binding to mRNA. 2007 Gregory J. Hannon for elucidation of the enzymatic engine for RNA interference. 2008 Angelika Amon for groundbreaking studies that have provided insight into the mechanism of the central process of chromosome segregation and the regulation of segregation. 2009 Stephen P. Bell for groundbreaking studies illuminating the mechanisms of DNA replication in eukaryotic cells. 2010 Jeannie T. Lee by using X-chromosome inactivation as a model system, Lee has made unique contributions to our understanding of epigenetic regulation on a global scale, including the role of long, non-coding RNAs, interchromosomal interactions, and nuclear compartmentalization. 2011 James M. Berger for elucidating the structures of topoisomerases and helicases and providing insights into the biochemical mechanisms that mediate the replication and transcription of DNA. 2012 Zhijian James Chen for his creative use of elegant biochemistry both in elucidating an unsuspected role for polyubiquitin in a kinase-signaling cascade important for cancer and immunity and in discovering a novel link between innate immunity and a mitochondrial membrane protein that forms prion-like polymers to trigger antiviral responses. 2013 Sue Biggins (2013) for the isolation and in vitro characterization of a functional kinetochore complex, and for the use of that system to explore kinetochore function. 2014 David M. Sabatini for his discovery of components and regulators of the mTOR kinase pathway and his elucidation of the important roles of this signaling pathway in nutrient sensing, cell physiology, and cancer. 2015 Xiaowei Zhuang for the development of a high-resolution microscopy method (STORM) that allows molecular-scale resolution, by bypassing the ‘diffraction limit’ that has long shackled light microscopy. In addition, she developed the photo-switchable fluorescent dyes that have made this method a powerful and critical tool in many areas of biological research and neuroscience. 2016 Dianne K. Newman for her discovery of microbial mechanisms underlying geologic processes, thereby launching the field of molecular geomicrobiology and transforming our understanding of how the Earth evolved. 2017 Rodolphe Barrangou for his landmark discovery that bacteria have adaptive immune systems, groundbreaking work that catalyzed the manipulation of the CRISPR-Cas9 pathway for genome engineering. 2018 Howard Y. Chang for the discovery of long noncoding RNAs and the invention of genomic technologies. 
2019 David Reich for creative use of molecular biology to trace ancient human migrations, reveal how population mixtures shaped modern humans, and illuminate disease risk factors across populations.
2020 Hashim Al-Hashimi for pioneering studies into RNA and DNA function on the atomic level.
2021 Joseph Mougous for his discoveries relating to the toxins and molecular machines mediating antagonism between bacteria, and his demonstration that such processes are fundamental in shaping microbial communities.
2022 Carrie Partch for elucidating the protein-based signaling mechanisms and structural assemblies that give rise to circadian rhythms.
2023 Jason McLellan for pioneering work in the molecular and structural biology of viral surface proteins.
2024 Shu-ou Shan for elucidating how newly synthesized proteins are transported to cell membranes, advancing our understanding of molecular mechanisms in complex biological pathways.
See also
List of biology awards
References
Awards established in 1962
Biology awards
Awards of the United States National Academy of Sciences
NAS Award in Molecular Biology
Technology
2,248
38,900,154
https://en.wikipedia.org/wiki/Tricholoma%20aestuans
Tricholoma aestuans is a mushroom of the agaric genus Tricholoma. First described formally by Elias Magnus Fries in 1821, it was transferred to the genus Tricholoma by Claude Casimir Gillet in 1874.
See also
List of North American Tricholoma
List of Tricholoma species
References
Fungi described in 1821
Fungi of Europe
Fungi of North America
aestuans
Taxa named by Elias Magnus Fries
Fungus species
Tricholoma aestuans
Biology
93
1,515,407
https://en.wikipedia.org/wiki/Comparison%20of%20command%20shells
A command shell is a command-line interface to interact with and manipulate a computer's operating system.
General characteristics
Interactive features
Background execution
Background execution allows a shell to run a command without user interaction in the terminal, freeing the command line for additional work with the shell. POSIX shells and other Unix shells allow background execution by using the & character at the end of a command. In PowerShell, the Start-Process or Start-Job cmdlets can be used.
Completions
Completion features assist the user in typing commands at the command line, by looking for and suggesting matching words for incomplete ones. Completion is generally requested by pressing the completion key (often the key).
Command name completion is the completion of the name of a command. In most shells, a command can be a program in the command path (usually $PATH), a builtin command, a function or alias.
Path completion is the completion of the path to a file, relative or absolute.
Wildcard completion is a generalization of path completion, where an expression matches any number of files, using any supported syntax for file matching.
Variable completion is the completion of the name of a variable (environment variable or shell variable). Bash, zsh, and fish have completion for all variable names. PowerShell has completions for environment variable names, shell variable names and, from within user-defined functions, parameter names.
Command argument completion is the completion of a specific command's arguments. There are two types of arguments, named and positional: named arguments, often called options, are identified by their name or letter preceding a value, whereas positional arguments consist only of the value. Some shells allow completion of argument names, but few support completing values.
Bash, zsh and fish offer parameter name completion through a definition external to the command, distributed in a separate completion definition file. For command parameter name/value completions, these shells assume path/filename completion if no completion is defined for the command. Completion can be set up to suggest completions by calling a shell function. The fish shell additionally supports parsing of man pages to extract parameter information that can be used to improve completions/suggestions.
In PowerShell, all types of commands (cmdlets, functions, script files) inherently expose data about the names, types and valid value ranges/lists for each argument. This metadata is used by PowerShell to automatically support argument name and value completion for built-in commands/functions, user-defined commands/functions as well as for script files. Individual cmdlets can also define dynamic completion of argument values where the completion values are computed dynamically on the running system.
Command history
Users of a shell may find themselves typing something similar to what they have typed before. Support for command history means that a user can recall a previous command into the command-line editor and edit it before issuing the potentially modified command. Shells that support completion may also be able to directly complete the command from the command history given a partial/initial part of the previous command. Most modern shells support command history. Shells which support command history in general also support completion from history rather than just recalling commands from the history.
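As a rough illustration of the background-execution idea described above (a command launched so that the prompt stays free for further work), the following Python sketch uses the standard subprocess module; it is a conceptual stand-in for "sleep 2 &" in a POSIX shell or Start-Job in PowerShell, and assumes a Unix-like system where the sleep program exists.
import subprocess
# Launch a long-running command without waiting for it to finish.
proc = subprocess.Popen(["sleep", "2"])
print("Command is running in the background; the caller is free to do other work.")
# Later, the caller can check whether it finished, or wait for it explicitly.
if proc.poll() is None:
    print("Still running...")
proc.wait()
print("Background command finished with exit code", proc.returncode)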
In addition to the plain command text, PowerShell also records execution start and end times and execution status in the command history.
Mandatory argument prompt
Mandatory arguments/parameters are arguments/parameters which must be assigned a value upon invocation of the command, function or script file. A shell that can determine ahead of invocation that there are missing mandatory values can assist the interactive user by prompting for those values instead of letting the command fail. Having the shell prompt for missing values will allow the author of a script, command or function to mark a parameter as mandatory instead of creating script code to either prompt for the missing values (after determining that it is being run interactively) or fail with a message.
PowerShell allows commands, functions and scripts to define arguments/parameters as mandatory. The shell determines prior to invocation whether any mandatory arguments/parameters have not been bound, and will then prompt the user for the value(s) before actual invocation.
Automatic suggestions
Shells featuring automatic suggestions display optional command-line completions as the user types. The PowerShell and fish shells natively support this feature; pressing the key inserts the completion. Implementations of this feature can differ between shells; for example, PowerShell and zsh use an external module to provide completions, and fish derives its completions from the user's command history.
Directory history, stack or similar features
Shells may record a history of directories the user has been in and allow for fast switching to any recorded location. This is referred to as a "directory stack". The concept had been realized as early as 1978 in the release of the C shell (csh).
PowerShell allows multiple named stacks to be used. Locations (directories) can be pushed onto/popped from the current stack or a named stack. Any stack can become the current (default) stack. Unlike most other shells, PowerShell's location concept allows location stacks to hold file system locations as well as other location types such as Active Directory organizational units/groups, SQL Server databases/tables/objects, Internet Information Server applications/sites/virtual directories.
Command line interpreters 4DOS and its graphical successor Take Command Console also feature a directory stack.
Implicit directory change
A directory name can be used directly as a command which implicitly changes the current location to the directory. This must be distinguished from an unrelated load drive feature supported by Concurrent DOS, Multiuser DOS, System Manager and REAL/32, where the drive letter L: will be implicitly updated to point to the load path of a loaded application, thereby allowing applications to refer to files residing in their load directory under a standardized drive letter instead of under an absolute path.
Autocorrection
When a command line does not match a command or arguments directly, spell checking can automatically correct common typing mistakes (such as case sensitivity, missing letters). There are two approaches to this; the shell can either suggest probable corrections upon command invocation, or this can happen earlier as part of a completion or autosuggestion. The tcsh and zsh shells feature optional spell checking/correction upon command invocation. Fish does the autocorrection upon completion and autosuggestion.
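To illustrate the directory-stack behaviour described above (pushd/popd-style switching between recorded locations), here is a minimal Python sketch that keeps its own stack of directories; it is a conceptual model only, not how any particular shell implements the feature, and the example paths are placeholders.
import os
dir_stack = []  # previously visited directories, most recent last
def pushd(path):
    """Save the current directory on the stack, then change to path."""
    dir_stack.append(os.getcwd())
    os.chdir(path)
def popd():
    """Return to the most recently saved directory, if any."""
    if dir_stack:
        os.chdir(dir_stack.pop())
# Example usage (placeholder paths):
# pushd("/tmp")    # work somewhere else for a while
# popd()           # jump straight back to the starting directory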
The feature is therefore not in the way when typing out the whole command and pressing enter, whereas extensive use of the tab and right-arrow keys makes the shell mostly case insensitive.
The PSReadLine PowerShell module (which is shipped with version 5.0) provides the option to specify a CommandValidationHandler ScriptBlock which runs before submitting the command. This allows for custom correcting of commonly mistyped commands, and verification before actually running the command.
Progress indicator
A shell script (or job) can report progress of long running tasks to the interactive user. Unix/Linux systems may offer other tools that support progress indicators from scripts or as standalone commands, such as the program "pv". These are not integrated features of the shells, however.
PowerShell has a built-in command and API functions (to be used when authoring commands) for writing/updating a progress bar. Progress bar messages are sent separately from regular command output, and the progress bar is always displayed at the interactive user's console regardless of whether the progress messages originate from an interactive script, from a background job or from a remote session.
Colored directory listings
JP Software command-line processors provide user-configurable colorization of file and directory names in directory listings based on their file extension and/or attributes through an optionally defined environment variable. For the Unix/Linux shells, this is a feature of the command and the terminal.
Text highlighting
The command line processors in DOS Plus, Multiuser DOS, REAL/32 and in all versions of DR-DOS support a number of optional environment variables to define escape sequences allowing control of text highlighting, reversion or colorization for display or print purposes in commands like TYPE. All mentioned command line processors support %$ON% and %$OFF%. If defined, these sequences will be emitted before and after filenames. A typical sequence for would be in conjunction with ANSI.SYS, for an ASCII terminal or for an IBM or ESC/P printer. Likewise, typical sequences for would be , , , respectively. The variables %$HEADER% and %$FOOTER% are only supported by COMMAND.COM in DR-DOS 7.02 and higher to define sequences emitted before and after text blocks in order to control text highlighting, pagination or other formatting options. For the Unix/Linux shells, this is a feature of the terminal.
Syntax highlighting
A defining feature of the fish shell is built-in syntax highlighting. As the user types, text is colored to represent whether the input is a valid command or not (the executable exists and the user has permissions to run it), and valid file paths are underlined. An independent project offers syntax highlighting as an add-on to the Z Shell (zsh). This is not part of the shell, however.
PowerShell provides customizable syntax highlighting on the command line through the PSReadLine module. This module can be used with PowerShell v3.0+, and is bundled with v5.0 onwards. It is loaded by default in the command line host "powershell.exe" since v5.0. Take Command Console (TCC) offers syntax highlighting in the integrated environment.
Context sensitive help
4DOS, 4OS2, 4NT / Take Command Console and PowerShell (in PowerShell ISE) look up context-sensitive help information when is pressed. Zsh provides various forms of configurable context-sensitive help as part of its widget, command, or in the completion of options for some commands.
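The progress-indicator behaviour discussed above can be illustrated with a small Python sketch that redraws a one-line text progress bar; it is only a conceptual stand-in for shell facilities such as PowerShell's progress-bar command or the standalone "pv" tool, and the simulated work loop is a placeholder.
import sys
import time
def show_progress(done, total, width=30):
    """Redraw a simple one-line progress bar on the terminal."""
    filled = int(width * done / total)
    bar = "#" * filled + "-" * (width - filled)
    sys.stdout.write(f"\r[{bar}] {done}/{total}")
    sys.stdout.flush()
for step in range(1, 11):
    time.sleep(0.1)          # stand-in for one unit of real work
    show_progress(step, 10)
print()                      # move past the bar once the task is done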
The fish shell provides brief descriptions of a command's flags during tab completion.
Programming features
String processing and filename matching
Inter-process communication
Keystroke stacking
In anticipation of what a given running application may accept as keyboard input, the user of the shell instructs the shell to generate a sequence of simulated keystrokes, which the application will interpret as keyboard input from an interactive user. By sending keystroke sequences the user may be able to direct the application to perform actions that would be impossible to achieve through input redirection or would otherwise require an interactive user. This may be necessary, for example, if an application acts on keystrokes that cannot be redirected, distinguishes between normal and extended keys, flushes the queue before accepting new input on startup or under certain conditions, or does not read through standard input at all. Keystroke stacking typically also provides means to control the timing of simulated keys being sent or to delay new keys until the queue has been flushed. It also allows simulating keys which are not present on a keyboard (because the corresponding keys do not physically exist or because a different keyboard layout is being used) and which would therefore be impossible for a user to type.
Security features
Secure prompt
Some shell scripts need to query the user for sensitive information such as passwords, private digital keys, PIN codes or other confidential information. Sensitive input should not be echoed back to the screen/input device where it could be gleaned by unauthorized persons. Plaintext memory representation of sensitive information should also be avoided as it could allow the information to be compromised, e.g., through swap files, core dumps etc.
The shells bash, zsh and PowerShell offer this as a specific feature. Shells which do not offer this as a specific feature may still be able to turn off echoing through some other means. Shells executing on a Unix/Linux operating system can use the external command to switch off/on echoing of input characters. In addition to not echoing back the characters, PowerShell's option also encrypts the input character-by-character during the input process, ensuring that the string is never represented unencrypted in memory where it could be compromised through memory dumps, scanning, transcription etc.
Execute permission
Some operating systems define an execute permission which can be granted to users/groups for a file when the file system itself supports it. On Unix systems, the execute permission controls access to invoking the file as a program, and applies both to executables and scripts. As the permission is enforced in the program loader, neither the invoking program nor the invoked program needs to enforce the execute permission; this also goes for shells and other interpreter programs. The behaviour is mandated by the POSIX C library that is used for interfacing with the kernel. POSIX specifies that the exec family of functions shall fail with EACCES (permission denied) if the file denies execution permission (see ).
The execute permission only applies when the script is run directly. If a script is invoked as an argument to the interpreting shell, it will be executed regardless of whether the user holds the execute permission for that script. Although Windows also specifies an execute permission, none of the Windows-specific shells block script execution if the permission has not been granted.
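As a small illustration of the Unix execute permission discussed above, the following Python sketch checks and, if needed, grants the owner's execute bit on a file; the file name is a placeholder created just for the demonstration, and the chmod step corresponds to "chmod u+x" in a shell.
import os
import stat
from pathlib import Path
path = "example.sh"            # placeholder script path for the demo
Path(path).touch()             # make sure the placeholder file exists
# Ask the operating system whether the current user may execute the file.
if os.access(path, os.X_OK):
    print(path, "is executable")
else:
    print(path, "is not executable; adding the owner's execute bit")
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR)   # roughly "chmod u+x example.sh"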
Restricted shell subset
Several shells can be started or be configured to start in a mode where only a limited set of commands and actions is available to the user. While not a security boundary (the command accessing a resource is blocked rather than the resource), this is nevertheless typically used to restrict users' actions before logging in.
A restricted mode is part of the POSIX specification for shells, and most of the Linux/Unix shells support such a mode where several of the built-in commands are disabled and only external commands from a certain directory can be invoked. PowerShell supports restricted modes through session configuration files or session configurations. A session configuration file can define visible (available) cmdlets, aliases, functions, path providers and more.
Safe data subset
Scripts that invoke other scripts can be a security risk as they can potentially execute foreign code in the context of the user who launched the initial script. Scripts will usually be designed to exclusively include scripts from known safe locations; but in some instances, e.g. when offering the user a way to configure the environment or loading localized messages, the script may need to include other scripts/files. One way to address this risk is for the shell to offer a safe subset of commands which can be executed by an included script. PowerShell data sections can contain constants and expressions using a restricted subset of operators and commands. PowerShell data sections are used when, e.g., localized strings need to be read from an external source while protecting against unwanted side effects.
Notes
References
External links
Command shells
Shells
Comparison of command shells
Technology
3,025
9,071,235
https://en.wikipedia.org/wiki/COM%20Express
COM Express is a form factor for computer-on-modules (COMs), which are highly integrated and compact computers that can be used in design applications much like integrated circuit components. Each module integrates core CPU and memory functionality, the common I/O of a PC/AT, USB, audio, graphics (PEG), and Ethernet. All I/O signals are mapped to two high density, low profile connectors on the bottom side of the module.
COM Express employs a mezzanine-based approach. The COM modules plug into a baseboard that is typically customized to the application. Over time, the COM Express mezzanine modules can be upgraded to newer, backwards-compatible versions. COM Express is commonly used in industrial, military, aerospace, gaming, medical, transportation, Internet of things, and general computing embedded applications.
History
The COM Express standard was first released in 2005 by the PCI Industrial Computer Manufacturers Group (PICMG). It defined five module types, each implementing different pinout configurations and feature sets on one or two 220-pin connectors. It also defined 2 module sizes (later expanded to 4) to serve more applications while maintaining compatibility within each module type. COM Express is used in railway, industrial, and military applications. There are also efforts for a Rugged COM Express specification through VITA.
Types
There are 8 different pin outs defined in the specification. The most commonly used pin outs are Type 6 and Type 10. The latest pin-out added in revision 3.0 of the COM Express specification (available from www.picmg.org) is Type 7. The Type 7 provides up to four 10 GbE interfaces and up to 32 PCIe lanes, making COM Express 3.0 appropriate for data center, server, and high-bandwidth video applications. COM Express Rev 3.0 removed legacy Type 1, Type 2, Type 3, Type 4, and Type 5, recommending that new designs should use Type 6, 7 or 10.
Maximum available interfaces for the defined types:
(*1) Option on previously allocated SATA2 and SATA3 pins. Implementor specific.
(*2) DDI can be adapted to DisplayPort, HDMI, DVI or SDVO (legacy, no longer supported for types 6, 7 and 10) in the carrier board.
Legend: PEG - PCI Express Graphics. Legacy - not recommended for new designs.
Size
The specification defines 4 module sizes:
Mini:
Compact:
Basic:
Extended:
Specification
The COM Express specification is hosted by PICMG. It is not freely available but a paper copy may be purchased for $150 USD from the PICMG website. However, the COM Express Design Guide is free to download. The original revision 1.0 was released July 10, 2005. Revision 3.0 (PICMG COM.0 R3.0) was released in March 2017.
COM Express also specifies an API to control embedded functionalities such as a watchdog timer or I2C. This is a separate document which is freely available (EAPI 1.0). It also defines a carrier board EEPROM to hold configuration information. This is also a separate, freely available document (EeeP R1.0).
See also
ETX
XTX
Qseven
Smart Mobility Architecture (SMARC), another standard for computer-on-modules
COM-HPC (working group within PICMG)
References
External links
PICMG website
The Economics and use of COM Express in Embedded Applications
COM Express Carrier Design Guide - Guidelines for designing COM Express Carrier Boards
Purchase specification (scroll down to "PICMG COM.0 R3.0")
Freely available short form specification COM.0 R3.0
Freely available Embedded API Specification EAPI R1.0
Freely available Embedded EEPROM Specification EeeP R1.0
COM Express Plug-and-Play Initiative
COM Express: Scalability and flexibility for UAS sensor processing
COM-HPC preview
Motherboard form factors
Computer hardware standards
COM Express
Technology
806
75,711,077
https://en.wikipedia.org/wiki/Institut%20a%C3%A9rotechnique
The Institut aérotechnique (IAT) is a French public research laboratory, part of the Conservatoire national des arts et métiers, specializing in aerodynamic studies and located in Saint-Cyr-l'École (Yvelines). The institute was created on the initiative of Henri Deutsch de la Meurthe, also a founder of the Aéro-Club de France. Its inauguration took place on July 8, 1911. It currently has several wind tunnels, some of which specialize in the automotive, railway and aerospace sectors. In the field of aeronautics, the laboratory has a partnership with the Institut polytechnique des sciences avancées.
References
External links
Official website
Research institutes in France
Research institutes established in 1911
1911 establishments in France
Aerodynamics
Aerospace engineering organizations
Institut aérotechnique
Chemistry,Engineering
156
468,829
https://en.wikipedia.org/wiki/Thermal%20paste
Thermal paste (also called thermal compound, thermal grease, thermal interface material (TIM), thermal gel, heat paste, heat sink compound, heat sink paste or CPU grease) is a thermally conductive (but usually not electrically conductive) chemical compound, which is commonly used as an interface between heat sinks and heat sources such as high-power semiconductor devices. The main role of thermal paste is to eliminate air gaps or spaces (which act as thermal insulation) from the interface area in order to maximize heat transfer and dissipation. Thermal paste is an example of a thermal interface material.
As opposed to thermal adhesive, thermal paste does not add mechanical strength to the bond between heat source and heat sink. It has to be coupled with a fastener such as screws to hold the heat sink in place and to apply pressure, spreading and thinning the thermal paste.
Composition
Thermal paste consists of a polymerizable liquid matrix and large volume fractions of electrically insulating, but thermally conductive filler. Typical matrix materials are epoxies, silicones (silicone grease), urethanes, and acrylates; solvent-based systems, hot-melt adhesives, and pressure-sensitive adhesive tapes are also available. Aluminum oxide, boron nitride, zinc oxide, diamond and increasingly aluminum nitride are used as fillers for these types of adhesives. The filler loading can be as high as 70–80% by mass, and raises the thermal conductivity of the base matrix from 0.17–0.3 W/(m·K) (watts per meter-kelvin) up to about 4 W/(m·K), according to a 2008 paper.
Silver thermal compounds may have a conductivity of 3 to 8 W/(m·K) or more, and consist of micronized silver particles suspended in a silicone/ceramic medium. However, metal-based thermal paste can be electrically conductive and capacitive; if some flows onto the circuits, it can lead to malfunction and damage. The most effective (and most expensive) pastes consist almost entirely of liquid metal, usually a variation of the alloy galinstan, and have thermal conductivities in excess of 13 W/(m·K). These are difficult to apply evenly and have the greatest risk of causing malfunction due to spillage. Furthermore, these pastes contain gallium, which is highly corrosive to aluminium and thus cannot be used on aluminium heat sinks.
Uses
Thermal paste is used to improve the heat coupling between different components. A common application is to drain away waste heat generated by electrical resistance in semiconductor devices including power transistors, CPUs, GPUs, and LED COBs. Cooling these devices is essential because excess heat rapidly degrades their performance and can cause a runaway leading to catastrophic failure of the device, due to the negative temperature coefficient property of semiconductors.
Factory PCs and laptops (although seldom tablets or smartphones) typically incorporate thermal paste between the top of the CPU case and a heat sink for cooling. Thermal paste is sometimes also used between the CPU die and its integrated heat spreader, though solder is sometimes used instead. When a CPU heat spreader is coupled to the die via thermal paste, performance enthusiasts such as overclockers are able to, in a process known as "delidding", pry the heat spreader, or CPU "lid", from the die. This allows them to replace the thermal paste, which is usually of low quality, with a thermal paste having greater thermal conductivity. Generally, liquid metal thermal pastes are used in such instances.
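To put the conductivity figures quoted above in perspective, here is a rough illustrative calculation (not taken from the source) of the conductive thermal resistance of a thin paste layer, using the standard one-dimensional formula R = t / (k·A); the 25 µm bond-line thickness and 1 cm² contact area are assumed values chosen only for the example.
# Rough, illustrative numbers only.
thickness_m = 25e-6    # assumed bond-line thickness in metres (25 micrometres)
area_m2 = 1e-4         # assumed contact area in square metres (1 cm^2)
materials = [("unfilled matrix", 0.2), ("filled paste", 4.0), ("liquid metal", 13.0)]
for name, k in materials:
    r = thickness_m / (k * area_m2)   # conductive resistance R = t / (k*A), in K/W
    print(f"{name:15s} k = {k:5.1f} W/(m*K)  ->  R = {r:.3f} K/W")
Under these assumptions the filled paste gives roughly 0.06 K/W and the liquid-metal paste roughly 0.02 K/W across the joint, which is why higher-conductivity (and thinner) layers matter for high-power devices.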
Challenges
The consistency of thermal paste makes it susceptible to failure mechanisms distinct from some other thermal interface materials. A common one is pump-out, which is the loss of thermal paste from between the die and the heat sink due to their differing rates of thermal expansion and contraction. Over a large number of power cycles, thermal paste extrudes from between the die and heat sink, which eventually causes degradation of thermal performance inasmuch as there is less paste in place. Another issue with some compounds is that the polymer and filler matrix components can separate under high temperatures. The loss of polymeric material can result in poor wettability, leading to increased thermal resistance.
Health hazards
Zinc oxide emits toxic fumes that must not be inhaled and a particulate respirator is necessary for any use. The chemical is also highly toxic to aquatic organisms and may cause long-term negative effects to aquatic environments.
See also
Computer cooling
Hot-melt adhesive
Phase-change material
Thermally conductive pad
List of thermal conductivities
References
External links
Adhesives
Cooling technology
Computer hardware cooling
Conduction
Thermal paste
Physics,Chemistry
972