https://en.wikipedia.org/wiki/AAALAC%20International
AAALAC International is a private, nonprofit organization headquartered in Frederick, Maryland, that promotes the humane treatment of animals in science through voluntary accreditation and assessment programs. The accreditation program started in 1965, when some veterinarians and researchers organized the American Association for Accreditation of Laboratory Animal Care, or AAALAC. In 1996, AAALAC changed its name to the Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC International). The organization said the name change reflected the organization's growth in other countries and its commitment to enhancing life sciences and quality animal care around the world. Since 2016, the organization has been officially known only by its acronym, AAALAC International. There are currently about 1,000 organizations worldwide accredited through its program. Along with meeting all applicable local and national regulations, AAALAC-accredited institutions must also demonstrate that they are achieving the standards outlined in the Guide for the Care and Use of Laboratory Animals, a document published since 1996 by the National Research Council of the U.S. National Academy of Sciences. The Guide includes standards that go beyond what is required by law. See also Animal welfare Animal testing American Association for Laboratory Animal Science, a U.S. nonprofit organization Animal Welfare Act of 1966, a U.S. law Food Security Act of 1985, a U.S. law In vivo References Animal testing Animal welfare organizations based in the United States Research methods Laboratory animals Accreditation organizations
AAALAC International
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%20gauge
In particle physics, the Wess–Zumino gauge is a particular choice of a gauge transformation in a gauge theory with supersymmetry. In this gauge, the supersymmetrized gauge transformation is chosen in such a way that most components of the vector superfield vanish when it is expanded in terms of component fields, leaving only the usual physical ones. See also Supersymmetric gauge theory Supersymmetric quantum field theory Gauge theories
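Concretely, in one common set of conventions (e.g. those of Wess and Bagger; signs and numerical factors vary between textbooks), the vector superfield in Wess–Zumino gauge reduces to:

```latex
V_{\mathrm{WZ}}(x,\theta,\bar\theta)
  = -\,\theta \sigma^{\mu} \bar\theta \, A_{\mu}(x)
  \;+\; i\,\theta\theta\,\bar\theta\bar\lambda(x)
  \;-\; i\,\bar\theta\bar\theta\,\theta\lambda(x)
  \;+\; \tfrac{1}{2}\,\theta\theta\,\bar\theta\bar\theta\, D(x)
```

The lower components of the general superfield (a real scalar, a spinor, and a complex scalar) are gauged away, leaving the gauge field A_μ, the gaugino λ, and the auxiliary field D; a residual freedom of ordinary gauge transformations of A_μ remains.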
Wess–Zumino gauge
https://en.wikipedia.org/wiki/Elementary%20particle
In particle physics, an elementary particle or fundamental particle is a subatomic particle that is not composed of other particles. The Standard Model presently recognizes seventeen distinct particles—twelve fermions and five bosons. As a consequence of flavor and color combinations and antimatter, the fermions and bosons are known to have 48 and 13 variations, respectively. The 61 elementary particles embraced by the Standard Model include electrons and other leptons, quarks, and the fundamental bosons. Subatomic particles such as protons or neutrons, which contain two or more elementary particles, are known as composite particles. Ordinary matter is composed of atoms, themselves once thought to be indivisible elementary particles. The name atom comes from the Ancient Greek word ἄτομος (atomos), which means indivisible or uncuttable. Despite the theories about atoms that had existed for thousands of years, the factual existence of atoms remained controversial until 1905. In that year, Albert Einstein published his paper on Brownian motion, putting to rest theories that had regarded molecules as mathematical illusions. Einstein subsequently identified matter as ultimately composed of various concentrations of energy. Subatomic constituents of the atom were first identified toward the end of the 19th century, beginning with the electron, followed by the proton in 1919, the photon in the 1920s, and the neutron in 1932. By that time, the advent of quantum mechanics had radically altered the definition of a "particle" by putting forward an understanding in which they carried out a simultaneous existence as matter waves. Many theoretical elaborations upon, and beyond, the Standard Model have been made since its codification in the 1970s. These include notions of supersymmetry, which double the number of elementary particles by hypothesizing that each known particle has a far more massive "shadow" partner.
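The tallies quoted above (48 fermion variations, 13 boson variations, 61 in total) can be checked with simple bookkeeping:

```python
# Counting the Standard Model's elementary particles, matching the
# 48-fermion / 13-boson tallies quoted above.

# Fermions: 6 quark flavors x 3 colors, plus 6 leptons,
# doubled to include the antiparticles.
quark_states = 6 * 3                              # 18 colored quark states
lepton_states = 6                                 # 3 charged leptons + 3 neutrinos
fermions = 2 * (quark_states + lepton_states)     # 48

# Bosons: 8 gluons, the photon, W+, W-, Z, and the Higgs.
bosons = 8 + 1 + 3 + 1                            # 13

total = fermions + bosons
print(fermions, bosons, total)  # 48 13 61
```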
However, such superpartners, like an additional elementary boson that would mediate gravitation, remain undiscovered as of 2013. Overview All elementary particles are either bosons or fermions. These classes are distinguished by their quantum statistics: fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics. Their spin is differentiated via the spin–statistics theorem: it is half-integer for fermions, and integer for bosons. In the Standard Model, elementary particles are represented for predictive utility as point particles. Though extremely successful, the Standard Model is limited by its omission of gravitation and has some parameters arbitrarily added but unexplained. Cosmic abundance of elementary particles According to the current models of Big Bang nucleosynthesis, the primordial composition of visible matter of the universe should be about 75% hydrogen and 25% helium-4 (by mass). Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other common elementary particles (such as electrons, neutrinos, or weak bosons) are so light or so rare when compared to atomic nuclei, we can neglect their mass contribution to the observable universe's total mass. Therefore, one can conclude that most of the visible mass of the universe consists of protons and neutrons, which, like all baryons, in turn consist of up quarks and down quarks. Some estimates imply that there are roughly baryons (almost entirely protons and neutrons) in the observable universe. The number of protons in the observable universe is called the Eddington number. In terms of number of particles, some estimates imply that nearly all the matter, excluding dark matter, occurs in neutrinos, which constitute the majority of the roughly elementary particles of matter that exist in the visible universe.
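The 75% hydrogen / 25% helium-4 mass fractions above imply a definite proton-to-neutron ratio, which can be worked out as a back-of-envelope sketch (treating all nucleon masses as equal):

```python
# Nucleon bookkeeping implied by the primordial 75% H / 25% He-4 mass
# fractions quoted above (rough sketch: proton and neutron masses equal).

h_mass, he_mass = 0.75, 0.25

# Hydrogen-1 contributes 1 proton per unit mass.
# Helium-4 (mass ~ 4 nucleons) contributes 2 protons + 2 neutrons.
protons = h_mass * 1 + (he_mass / 4) * 2   # 0.875 per unit of visible mass
neutrons = (he_mass / 4) * 2               # 0.125 per unit of visible mass

print(protons / neutrons)  # 7.0 -- about seven protons per neutron
```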
Other estimates imply that roughly elementary particles exist in the visible universe (not including dark matter), mostly photons and other massless force carriers. Standard Model The Standard Model of particle physics contains 12 flavors of elementary fermions, plus their corresponding antiparticles, as well as elementary bosons that mediate the forces and the Higgs boson, which was reported on July 4, 2012, as having been likely detected by the two main experiments at the Large Hadron Collider (ATLAS and CMS). The Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, however, since it is not known if it is compatible with Einstein's general relativity. There may be hypothetical elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, and sparticles, supersymmetric partners of the ordinary particles. Fundamental fermions The 12 fundamental fermions are divided into 3 generations of 4 particles each. Half of the fermions are leptons, three of which have an electric charge of −1 e, called the electron (), the muon (), and the tau (); the other three leptons are neutrinos (, , ), which are the only elementary fermions with neither electric nor color charge. The remaining six particles are quarks (discussed below). Generations Mass The following table lists current measured masses and mass estimates for all the fermions, using the same scale of measure: millions of electron-volts relative to square of light speed (MeV/c2). For example, the most accurately known quark mass is of the top quark () at , estimated using the on-shell scheme. Estimates of the values of quark masses depend on the version of quantum chromodynamics used to describe quark interactions. Quarks are always confined in an envelope of gluons that confer vastly greater mass to the mesons and baryons where quarks occur, so values for quark masses cannot be measured directly. 
Since their masses are so small compared to the effective mass of the surrounding gluons, slight differences in the calculation make large differences in the masses. Antiparticles There are also 12 fundamental fermionic antiparticles that correspond to these 12 particles. For example, the antielectron (positron) is the electron's antiparticle and has an electric charge of +1 e. Quarks Isolated quarks and antiquarks have never been detected, a fact explained by confinement. Every quark carries one of three color charges of the strong interaction; antiquarks similarly carry anticolor. Color-charged particles interact via gluon exchange in the same way that charged particles interact via photon exchange. Gluons are themselves color-charged, however, resulting in an amplification of the strong force as color-charged particles are separated. Unlike the electromagnetic force, which diminishes as charged particles separate, color-charged particles feel increasing force. Nonetheless, color-charged particles may combine to form color neutral composite particles called hadrons. A quark may pair up with an antiquark: the quark has a color and the antiquark has the corresponding anticolor. The color and anticolor cancel out, forming a color neutral meson. Alternatively, three quarks can exist together, one quark being "red", another "blue", another "green". These three colored quarks together form a color-neutral baryon. Symmetrically, three antiquarks with the colors "antired", "antiblue" and "antigreen" can form a color-neutral antibaryon. Quarks also carry fractional electric charges, but, since they are confined within hadrons whose charges are all integral, fractional charges have never been isolated. Note that quarks have electric charges of either  e or  e, whereas antiquarks have corresponding electric charges of either  e or  e. 
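The color bookkeeping described above—matching color/anticolor pairs for mesons, one of each color for baryons, and the counting that yields eight gluons—can be sketched as follows (an illustration of the combinatorics only, not of the dynamics):

```python
# Color-charge bookkeeping for hadrons and gluons, as described above.
from itertools import product

colors = ["red", "green", "blue"]

# Mesons: a quark of one color paired with an antiquark of the matching
# anticolor, so the two cancel to a color-neutral state.
mesons = [(c, "anti" + c) for c in colors]

# Baryons: three quarks, one of each color, combining to color-neutral.
baryon = set(colors)

# Gluons carry a color-anticolor combination; of the 3 x 3 = 9 combinations,
# the fully color-neutral singlet does not occur, leaving 8 gluon states.
gluon_states = len(list(product(colors, ["anti" + c for c in colors]))) - 1

print(len(mesons), sorted(baryon), gluon_states)
```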
Evidence for the existence of quarks comes from deep inelastic scattering: firing electrons at nuclei to determine the distribution of charge within nucleons (which are baryons). If the charge is uniform, the electric field around the proton should be uniform and the electron should scatter elastically. Low-energy electrons do scatter in this way, but, above a particular energy, the protons deflect some electrons through large angles. The recoiling electron has much less energy and a jet of particles is emitted. This inelastic scattering suggests that the charge in the proton is not uniform but split among smaller charged particles: quarks. Fundamental bosons In the Standard Model, vector (spin-1) bosons (gluons, photons, and the W and Z bosons) mediate forces, whereas the Higgs boson (spin-0) is responsible for the intrinsic mass of particles. Bosons differ from fermions in that multiple bosons can occupy the same quantum state, whereas fermions cannot (the Pauli exclusion principle). Also, bosons can be either elementary, like photons, or composite, like mesons. Bosons have integer spin instead of half-integer spin. Gluons Gluons mediate the strong interaction, which binds quarks together and thereby forms hadrons, which are either baryons (three quarks) or mesons (one quark and one antiquark). Protons and neutrons are baryons, joined by gluons to form the atomic nucleus. Like quarks, gluons exhibit color and anticolor – unrelated to the concept of visual color, and rather a label for the particles' strong interactions – sometimes in combinations, altogether eight variations of gluons. Electroweak bosons There are three weak gauge bosons: W+, W−, and Z0; these mediate the weak interaction. The W bosons are known for their mediation in nuclear decay: the W− converts a neutron into a proton and then decays into an electron and an electron antineutrino. The Z0 does not convert particle flavor or charges, but rather changes momentum; it is the only mechanism for elastically scattering neutrinos.
The weak gauge bosons were discovered due to momentum change in electrons from neutrino-Z exchange. The massless photon mediates the electromagnetic interaction. These four gauge bosons form the electroweak interaction among elementary particles. Higgs boson Although the weak and electromagnetic forces appear quite different to us at everyday energies, the two forces are theorized to unify as a single electroweak force at high energies. This prediction was clearly confirmed by measurements of cross-sections for high-energy electron-proton scattering at the HERA collider at DESY. The differences at low energies are a consequence of the high masses of the W and Z bosons, which in turn are a consequence of the Higgs mechanism. Through the process of spontaneous symmetry breaking, the Higgs selects a special direction in electroweak space that causes three electroweak particles to become very heavy (the weak bosons) and one, the photon, to remain massless. On 4 July 2012, after many years of experimentally searching for evidence of its existence, the Higgs boson was announced to have been observed at CERN's Large Hadron Collider. Peter Higgs, who first posited the existence of the Higgs boson, was present at the announcement. The Higgs boson is believed to have a mass of approximately . The statistical significance of this discovery was reported as 5 sigma, which implies a certainty of roughly 99.99994%. In particle physics, this is the level of significance required to officially label experimental observations as a discovery. Research into the properties of the newly discovered particle continues. Graviton The graviton is a hypothetical elementary spin-2 particle proposed to mediate gravitation. While it remains undiscovered due to the difficulty inherent in its detection, it is sometimes included in tables of elementary particles.
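The 5 sigma significance quoted above for the Higgs discovery corresponds to the two-sided coverage of a normal distribution within five standard deviations, which can be checked directly:

```python
# Checking the "5 sigma ~ 99.99994%" figure quoted above. For a normal
# distribution, the probability mass within +/- n standard deviations of
# the mean is erf(n / sqrt(2)).
from math import erf, sqrt

sigma = 5
certainty = erf(sigma / sqrt(2))   # about 0.9999994, i.e. roughly 99.99994%
print(f"{certainty:.9f}")
```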
The conventional graviton is massless, although some models containing massive Kaluza–Klein gravitons exist. Beyond the Standard Model Although experimental evidence overwhelmingly confirms the predictions derived from the Standard Model, some of its parameters were added arbitrarily rather than determined by a particular explanation, and they remain mysterious; the hierarchy problem is one example. Theories beyond the Standard Model attempt to resolve these shortcomings. Grand unification One extension of the Standard Model attempts to combine the electroweak interaction with the strong interaction into a single 'grand unified theory' (GUT). Such a force would be spontaneously broken into the three forces by a Higgs-like mechanism. This breakdown is theorized to occur at high energies, making it difficult to observe unification in a laboratory. The most dramatic prediction of grand unification is the existence of X and Y bosons, which cause proton decay. However, the non-observation of proton decay at the Super-Kamiokande neutrino observatory rules out the simplest GUTs, including SU(5) and SO(10). Supersymmetry Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos, and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders would not be powerful enough to produce them. Some physicists believe that sparticles will be detected by the Large Hadron Collider at CERN.
String theory String theory is a model of physics whereby all "particles" that make up matter are composed of strings (measuring at the Planck length) that exist in an 11-dimensional (according to M-theory, the leading version) or 12-dimensional (according to F-theory) universe. These strings vibrate at different frequencies that determine mass, electric charge, color charge, and spin. A "string" can be open (a line) or closed in a loop (a one-dimensional sphere, that is, a circle). As a string moves through space it sweeps out something called a world sheet. String theory predicts 1- to 10-branes (a 1-brane being a string and a 10-brane being a 10-dimensional object) that prevent tears in the "fabric" of space using the uncertainty principle (e.g., the electron orbiting a hydrogen atom has the probability, albeit small, that it could be anywhere else in the universe at any given moment). String theory proposes that our universe is merely a 4-brane, inside which exist the three space dimensions and the one time dimension that we observe. The remaining 7 theoretical dimensions either are very tiny and curled up (and too small to be macroscopically accessible) or simply do not/cannot exist in our universe (because they exist in a grander scheme called the "multiverse" outside our known universe). Some predictions of the string theory include existence of extremely massive counterparts of ordinary particles due to vibrational excitations of the fundamental string and existence of a massless spin-2 particle behaving like the graviton. Technicolor Technicolor theories try to modify the Standard Model in a minimal way by introducing a new QCD-like interaction. This means one adds a new theory of so-called Techniquarks, interacting via so called Technigluons. The main idea is that the Higgs boson is not an elementary particle but a bound state of these objects. 
Preon theory According to preon theory there are one or more orders of particles more fundamental than those (or most of those) found in the Standard Model. The most fundamental of these are normally called preons, which is derived from "pre-quarks". In essence, preon theory tries to do for the Standard Model what the Standard Model did for the particle zoo that came before it. Most models assume that almost everything in the Standard Model can be explained in terms of three to six more fundamental particles and the rules that govern their interactions. Interest in preons has waned since the simplest models were experimentally ruled out in the 1980s. Acceleron theory Accelerons are the hypothetical subatomic particles that integrally link the newfound mass of the neutrino to the dark energy conjectured to be accelerating the expansion of the universe. In this theory, neutrinos are influenced by a new force resulting from their interactions with accelerons, leading to dark energy. Dark energy results as the universe tries to pull neutrinos apart. Accelerons are thought to interact with matter more infrequently than they do with neutrinos. See also Asymptotic freedom List of particles Physical ontology Quantum field theory Quantum gravity Quantum triviality UV fixed point Notes Further reading General readers Textbooks An undergraduate text for those not majoring in physics. External links The most important address about the current experimental and theoretical knowledge about elementary particle physics is the Particle Data Group, where different international institutions collect all experimental data and give short reviews over the contemporary theoretical understanding. 
Other pages are: particleadventure.org, a well-made introduction also for non-physicists; CERNCourier: Season of Higgs and melodrama; Interactions.org, particle physics news; Symmetry Magazine, a joint Fermilab/SLAC publication; Elementary Particles made thinkable, an interactive visualisation allowing physical properties to be compared Quantum mechanics Quantum field theory Subatomic particles
Elementary particle
https://en.wikipedia.org/wiki/Granary
A granary, also known as a grain house and historically as a granarium in Latin, is a post-harvest storage building primarily for grains or seeds. Granaries are typically built above the ground to prevent spoilage and protect the stored grains or seeds from rodents, pests, floods, and adverse weather conditions. They also assist in drying the grains to prevent mold growth. Modern granaries may incorporate advanced ventilation and temperature control systems to preserve the quality of the stored grains. Early origins From ancient times grain has been stored in bulk. The oldest granaries yet found date back to 9500 BC and are located in the Pre-Pottery Neolithic A settlements in the Jordan Valley. The earliest granaries stood in open spaces between other buildings. However, beginning around 8500 BC, they were moved inside houses, and by 7500 BC storage occurred in special rooms. The first granaries measured 3 × 3 m on the outside and had suspended floors that protected the grain from rodents and insects and provided air circulation. These granaries are followed by those in Mehrgarh in the Indus Valley from 6000 BC. The ancient Egyptians made a practice of preserving grain in years of plenty against years of scarcity. Because the climate of Egypt is very dry, grain could be stored in pits for a long time without discernible loss of quality. Historically, a silo was a pit for storing grain. It is distinct from a granary, which is an above-ground structure. East Asia Simple storage granaries raised on four or more posts appeared in the Yangshao culture in China and after the onset of intensive agriculture in the Korean peninsula during the Mumun pottery period (c. 1000 BC) as well as in the Japanese archipelago during the Final Jōmon/Early Yayoi periods (c. 800 BC). In the archaeological vernacular of Northeast Asia, these features are lumped with those that may have also functioned as residences and together are called 'raised floor buildings'.
China built an elaborate granary system designed to minimize famine deaths. The system was destroyed in the Taiping Rebellion of the 1850s. Southeast Asia In the vernacular architecture of the Indonesian archipelago, granaries are made of wood and bamboo, and most of them are built raised on four or more posts to avoid rodents and pests. Examples of Indonesian granary styles are the Sundanese leuit and the Minang rangkiang. Great Britain In the South Hams in southwest Great Britain, small granaries were built on mushroom-shaped stumps called staddle stones. They were built of timber-frame construction and often had slate roofs. Larger ones were similar to linhays but with the upper floor enclosed. Access to the first floor was usually via a stone staircase on the outside wall. Towards the close of the 19th century, warehouses specially intended for holding grain began to multiply in Great Britain. There are climatic difficulties in the way of storing grain in Great Britain on a large scale, but these difficulties have been largely overcome. Moisture control Grain must be kept away from moisture for as long as possible to preserve it in good condition and prevent mold growth. Newly harvested grain brought into a granary tends to contain excess moisture, which encourages mold growth leading to fermentation and heating, both of which are undesirable and affect quality. Fermentation generally spoils grain and may cause chemical changes that create poisonous mycotoxins. One traditional remedy is to spread the grain in thin layers on a floor, where it is turned to aerate it thoroughly. Once the grain is sufficiently dry it can be transferred to a granary for storage. Today, this can be done using a mechanical grain auger to move grain from one granary to another. In modern silos, grain is typically force-aerated in situ or circulated through external grain drying equipment.
Modern Modern grain farming operations often use manufactured steel granaries to store grain on-site until it can be trucked to major storage facilities in anticipation of shipping. The large mechanized facilities, particularly seen in Russia and North America, are known as grain elevators. Examples See also Hambar Hórreo Horreum Raccard Storage silo Corn crib Groote Schuur, the stately South African home that was originally a granary Rice barn Treppenspeicher Ghorfa Parish granary References 10th-millennium BC establishments Food storage containers Grain production Rooms Vernacular architecture
Granary
https://en.wikipedia.org/wiki/Walther%20recursion
In computer programming, Walther recursion (named after Christoph Walther) is a method of analysing recursive functions that can determine whether a function definitely terminates, given finite inputs. It allows a more natural style of expressing computation than simply using primitive recursive functions. Since the halting problem cannot be solved in general, there must still be programs that terminate but which Walther recursion cannot prove to terminate. Walther recursion may be used in total functional languages in order to allow a more liberal style of recursion than plain primitive recursion while still ensuring termination. See also BlooP and FlooP Termination analysis Total Turing machine References Recursion
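As an illustration (in Python rather than Walther's own formalism), consider integer division by repeated subtraction. This is not primitive recursion on the first argument, but a Walther-style termination analysis can certify it: whenever the recursive call is reached, the first argument strictly decreases, so the recursion must eventually hit the base case.

```python
# Integer division by repeated subtraction: an example of a function whose
# termination can be established by a Walther-style measure argument.
def quotient(n: int, d: int) -> int:
    if d <= 0:
        raise ValueError("divisor must be positive")
    if n < d:
        return 0                       # base case: the measure n has bottomed out
    return 1 + quotient(n - d, d)      # n - d < n, so the measure strictly decreases

print(quotient(17, 5))  # 3
```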
Walther recursion
https://en.wikipedia.org/wiki/Guido%20Bargellini
Guido Bargellini (1879–1963) was an Italian organic chemist. He specialized in natural product chemistry, in particular flavonoid dyes and coumarins, and the compound santonin. He was admitted to the Accademia dei Lincei in 1946. The Bargellini reaction is named for him. References Entry at treccani.it (in Italian) 1879 births 1963 deaths Italian chemists Organic chemists People from Roccastrada
Guido Bargellini
https://en.wikipedia.org/wiki/AquaMaps
AquaMaps is a collaborative project with the aim of producing computer-generated (and ultimately, expert-reviewed) predicted global distribution maps for marine species on a 0.5 × 0.5 degree grid of the oceans. The maps are based on data available through online species databases such as FishBase and SeaLifeBase and on species occurrence records from OBIS or GBIF, and use an environmental envelope model (see niche modelling) in conjunction with expert input. The underlying model represents a modified version of the relative environmental suitability (RES) model developed by Kristin Kaschner to generate global predictions of marine mammal occurrences. According to the AquaMaps website in August 2013, the project held standardized distribution maps for over 17,300 species of fishes, marine mammals and invertebrates. The project is also expanding to incorporate freshwater species, with more than 600 biodiversity maps for freshwater fishes of the Americas available as of November 2009. AquaMaps predictions have been validated successfully for a number of species using independent data sets, and the model was shown to perform equally well or better than other standard species distribution models when faced with the currently existing suboptimal input data sets. In addition to displaying individual maps per species, AquaMaps provides tools to generate species richness maps by higher taxon, plus a spatial search for all species overlapping a specified grid square. There is also the facility to create custom maps for any species via the web by modifying the input parameters and re-running the map-generating algorithm in real time, and a variety of other tools, including the investigation of effects of climate change on species distributions (see the relevant section of the AquaMaps search page).
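The flavor of an environmental envelope model can be sketched as follows. This is a simplified illustration only: the function, variable names, and threshold values here are hypothetical, not AquaMaps' actual parameters, though trapezoidal suitability curves of this general shape are characteristic of RES-style models.

```python
# A minimal sketch of a trapezoidal environmental-envelope suitability score.
# All parameter names and values are illustrative, not AquaMaps' actual ones.
def suitability(value, abs_min, pref_min, pref_max, abs_max):
    """Return 1.0 inside the preferred range, tapering linearly to 0 at the
    absolute tolerance limits, and 0.0 outside those limits."""
    if value <= abs_min or value >= abs_max:
        return 0.0
    if pref_min <= value <= pref_max:
        return 1.0
    if value < pref_min:
        return (value - abs_min) / (pref_min - abs_min)
    return (abs_max - value) / (abs_max - pref_max)

# Overall suitability of a grid cell is combined across environmental
# variables, e.g. by multiplication (hypothetical species and values):
sst = suitability(18.0, 5.0, 10.0, 25.0, 30.0)     # within the preferred range
depth = suitability(80.0, 0.0, 20.0, 60.0, 200.0)  # in the upper taper
print(sst * depth)
```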
Coordination The project is coordinated by Dr Rainer Froese of IFM-GEOMAR and involves contributions from other research institutes, including the Evolutionary Biology and Ecology Lab, Albert-Ludwigs-University Freiburg; the University of British Columbia (UBC); the Swedish Museum of Natural History (NRM – Naturhistoriska Riksmuseet); the WorldFish Center in Malaysia; and CSIRO Marine and Atmospheric Research in Australia. The creation of AquaMaps is supported by MARA, the Pew Fellows Program in Marine Conservation, INCOFISH, the Sea Around Us Project, Biogeoinformatics of Hexacorals, FishBase and SeaLifeBase. Research use A multi-author study by E. Sala et al. utilizing AquaMaps modelled data for marine fishes and invertebrates, entitled "Protecting the global ocean for biodiversity, food and climate", was published in the journal Nature in 2021. See also Environmental niche modelling Biogeography Biodiversity informatics Marine biology C-squares – global grid system utilized by AquaMaps for data storage and map creation References Further reading External links AquaMaps home page Freshwater Biodiversity AquaMaps AquaMaps entry in the D4Science Virtual Research Environment, released March 2009 AquaMaps Virtual Research Environment in the iMarine Infrastructure Ecology
AquaMaps
https://en.wikipedia.org/wiki/New%20product%20development
New product development (NPD) or product development in business and engineering covers the complete process of launching a new product to the market. Product development also includes the renewal of an existing product and introducing a product into a new market. A central aspect of NPD is product design. New product development is the realization of a market opportunity by making a product available for purchase. The products developed by a commercial organisation provide the means to generate income. Many technology-intensive organisations exploit technological innovation in a rapidly changing consumer market. A product can be a tangible asset or intangible. A service or user experience is intangible. In law, services and other processes are sometimes distinguished from "products". NPD requires an understanding of customer needs and wants, the competitive environment, and the nature of the market. Cost, time, and quality are the main variables that drive customer needs. Aiming at these three variables, innovative companies develop continuous practices and strategies to better satisfy customer requirements and to increase their own market share by regularly developing new products. There are many uncertainties and challenges which companies must face throughout the process. Process structure The product development process typically consists of several activities that firms employ in the complex process of delivering new products to the market. A process management approach is used to provide a structure. Product development often overlaps much with the engineering design process, particularly if the new product being developed involves application of math and/or science. Every new product will pass through a series of stages/phases, including ideation among other aspects of design, as well as manufacturing and market introduction. In highly complex engineered products (e.g.
aircraft, automotive, machinery), the NPD process can be likewise complex regarding management of personnel, milestones, and deliverables. Such projects typically use an integrated product team approach. The process for managing large-scale complex engineering products is much slower (often 10-plus years) than that deployed for many types of consumer goods. The development process is articulated and broken down in many different ways, many of which often include the following phases/stages: PHASE 1: Fuzzy front-end (FFE) is the set of activities employed before the more formal and well-defined requirements specification is completed. Requirements speak to what the product should do or have, at varying degrees of specificity, in order to meet the perceived market or business need. The fuzzy front end (FFE) is the messy "getting started" period of new product engineering development processes. It is also referred to as the "Front End of Innovation", or "Idea Management". It is in the front end where the organization formulates a concept of the product to be developed and decides whether or not to invest resources in the further development of an idea. It is the phase between first consideration of an opportunity and when it is judged ready to enter the structured development process (Kim and Wilemon, 2007; Koen et al., 2001). It includes all activities from the search for new opportunities through the formation of a germ of an idea to the development of a precise concept. The Fuzzy Front End phase ends when an organization approves and begins formal development of the concept. Although the fuzzy front end may not be an expensive part of product development, it can consume 50% of development time (see Chapter 3 of the Smith and Reinertsen reference below), and it is where major commitments are typically made involving time, money, and the product's nature, thus setting the course for the entire project and final end product.
Consequently, this phase should be considered as an essential part of development rather than something that happens "before development", and its cycle time should be included in the total development cycle time. Koen et al. (2001) distinguish five different front-end elements (not necessarily in a particular order): Opportunity Identification Opportunity Analysis Idea Genesis Idea Selection Idea and Technology Development The first element is opportunity identification. In this element, large or incremental business and technological opportunities are identified in a more or less structured way. Using the guidelines established here, resources will eventually be allocated to new projects, which then leads to a structured NPPD (New Product & Process Development) strategy. The second element is opportunity analysis. It is done to translate the identified opportunities into implications for the business- and technology-specific context of the company. Here extensive efforts may be made to align ideas to target customer groups and to carry out market studies and/or technical trials and research. The third element is idea genesis, which is described as an evolutionary and iterative process progressing from the birth to the maturation of the opportunity into a tangible idea. Idea genesis can take place internally or be driven by outside inputs, e.g. a supplier offering a new material/technology or a customer with an unusual request. The fourth element is idea selection. Its purpose is to choose whether to pursue an idea by analyzing its potential business value. The fifth element is idea and technology development. During this part of the front end, the business case is developed based on estimates of the total available market, customer needs, investment requirements, competition analysis and project uncertainty. Some organizations consider this to be the first stage of the NPPD process (i.e., Stage 0). 
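The five front-end elements above can be sketched as a simple pipeline. This is purely illustrative: Koen et al. stress that the elements need not occur in a fixed order, and the `evaluate` callback and early-exit logic here are hypothetical additions, not part of the framework.

```python
# Illustrative sketch of Koen et al.'s five front-end elements.
# The element names come from the text; treating them as a linear
# pipeline with a pass/fail judgment at each step is a simplification.
FRONT_END_ELEMENTS = [
    "Opportunity Identification",
    "Opportunity Analysis",
    "Idea Genesis",
    "Idea Selection",
    "Idea and Technology Development",
]

def run_front_end(idea, evaluate):
    """Advance an idea through the front-end elements.

    `evaluate(element, idea)` is a caller-supplied judgment returning
    True to continue; otherwise the idea exits the fuzzy front end early.
    Returns the element where the idea stalled, or "Formal NPD" if it
    clears the whole front end.
    """
    for element in FRONT_END_ELEMENTS:
        if not evaluate(element, idea):
            return element
    return "Formal NPD"  # approved for structured development

# Example: an idea that clears analysis but stalls at selection
outcome = run_front_end("new sensor idea",
                        lambda elem, idea: elem != "Idea Selection")
```

Here `outcome` is `"Idea Selection"`, marking where the hypothetical gatekeeper stopped the idea before formal development.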
A universally acceptable definition for Fuzzy Front End or a dominant framework has not been developed so far. In a glossary by the Product Development and Management Association, it is mentioned that the fuzzy front end generally consists of three tasks: strategic planning, idea generation, and pre-technical evaluation. These activities are often chaotic, unpredictable, and unstructured. In comparison, the subsequent new product development process is typically structured, predictable, and formal. The term fuzzy front end was first popularized by Smith and Reinertsen (1991). R.G. Cooper (1988) describes the early stages of NPPD as a four-step process in which ideas are generated (I), subjected to a preliminary technical and market assessment (II) and merged into coherent product concepts (III), which are finally judged for their fit with existing product strategies and portfolios (IV). PHASE 2: Product design is the development of both the high-level and detailed-level design of the product, turning the "what" of the requirements into a specific "how" by which this particular product will meet those requirements. This typically has the most overlap with the engineering design process, but can also include industrial design and even purely aesthetic aspects of design. On the marketing and planning side, this phase ends at the pre-commercialization analysis stage. PHASE 3: Product implementation often refers to later stages of detailed engineering design (e.g. refining mechanical or electrical hardware, or software, or goods or other product forms), as well as the test processes that may be used to validate that the prototype actually meets all design specifications that were established. PHASE 4: The fuzzy back-end or commercialization phase represents the action steps where production and market launch occur. The front-end marketing phases have been very well researched, with valuable models proposed. Peter Koen et al. 
provides a five-step front-end activity called front-end innovation: opportunity identification, opportunity analysis, idea genesis, idea selection, and idea and technology development. He also includes an engine in the middle of the five front-end stages and the possible outside barriers that can influence the process outcome. The engine represents the management driving the activities described. The front end of innovation is the greatest area of weakness in the NPD process. This is mainly because the FFE is often chaotic, unpredictable and unstructured. Engineering design is the process whereby a technical solution is developed iteratively to solve a given problem. The design stage is very important because at this stage most of the product life cycle costs are engaged. Previous research shows that 70–80% of the final product quality and 70% of the product's entire life-cycle cost are determined in the product design phase; therefore the design-manufacturing interface represents the greatest opportunity for cost reduction. Design projects last from a few weeks to three years, with an average of one year. The design and commercialization phases usually begin collaborating very early. When the concept design is finished it will be sent to the manufacturing plant for prototyping, developing a Concurrent Engineering approach by implementing practices such as QFD, DFM/DFA and more. The output of the design (engineering) is a set of product and process specifications – mostly in the form of drawings – and the output of manufacturing is the product ready for sale. Basically, the design team will develop drawings with technical specifications representing the future product, and will send them to the manufacturing plant to be executed. Solving product/process fit problems is a high priority in design information communication because 90% of the development effort must be scrapped if any changes are made after the release to manufacturing. 
Conceptual models Conceptual models have been designed in order to facilitate a smooth product development process. Booz, Allen and Hamilton Model: One of the first developed models that companies still use in the NPD process is the Booz, Allen and Hamilton (BAH) Model, published in 1982. This is the best-known model, as it underlies the NPD systems put forward later and represents the foundation of the models that have been developed afterwards. Significant work has been conducted in order to propose better models, but in fact these models can be easily linked to the BAH model. The seven steps of the BAH model are: new product strategy, idea generation, screening and evaluation, business analysis, development, testing, and commercialization. Exploratory product development model (ExPD). Exploratory product development, which often goes by the acronym ExPD, is an emerging approach to new product development. Consultants Mary Drotar and Kathy Morrissey first introduced ExPD at the 2015 Product Development and Management Association annual meeting and later outlined their approach in the Product Development and Management Association's magazine Visions. In 2015, Drotar and Morrissey's firm Strategy2Market received the trademark on the term "Exploratory PD." Rather than going through a set of discrete phases, like the phase-gate process, this exploratory product development process allows organizations to adapt to a landscape of shifting market circumstances and uncertainty by using a more flexible and adaptable product development process for both hardware and software. Where the traditional phase-gate approach works best in a stable market environment, ExPD is more suitable for product development in markets that are unstable and less predictable. Unstable and unpredictable markets cause uncertainty and risk in product development. 
Many factors contribute to the outcome of a project, and ExPD works on the assumption that the factors the product team does not know enough about, or is unaware of, are the ones that create uncertainty and risk. The primary goal of ExPD is to reduce uncertainty and risk by reducing the unknown. When organizations adapt quickly to the changing environment (market, technology, regulations, globalization, etc.), they reduce uncertainty and risk, which leads to product success. ExPD is described as a two-pronged, integrated systems approach. Drotar and Morrissey state that product development is complex and needs to be managed as a system, integrating essential elements: strategy, portfolio management, organization/teams/culture, metrics, market/customer understanding, and process. Drotar and Morrissey have published two books on ExPD. The first, Exploratory Product Development: Executive Version: Adaptable Product Development in a Changing World, was published as an e-book on December 3, 2018. On September 8, 2022, Drotar and Morrissey published their second book, "Learn & Adapt: ExPD An Adaptive Product Development Process for Rapid Innovation and Risk Reduction", which also highlights their process. The book has three sections: Overview of ExPD, How to Do It, and Adaptive Practices that Support ExPD. According to Kirkus, "the (approach the) authors advocate is outwardly focused and premised on being adaptable enough to develop new competencies and create new models as complex situations evolve." Kirkus summarizes the text as "complex and visually stimulating; a serious blueprint for serious strategists." IDEO approach. The concept adopted by IDEO, a design and consulting firm, is one of the most researched processes in regard to new product development and is a five-step procedure. 
These steps are listed in chronological order: Understand and observe the market, the client, the technology, and the limitations of the problem; Synthesize the information collected at the first step; Visualise new customers using the product; Prototype, evaluate and improve the concept; Implementation of design changes, which are associated with more technologically advanced procedures and therefore will require more time. Lean Start-up approach. Lean startup is a methodology for developing businesses and products that aims to shorten product development cycles and rapidly discover if a proposed business model is viable; this is achieved by adopting a combination of business-hypothesis-driven experimentation, iterative product releases, and validated learning. Lean startup emphasizes customer feedback over intuition and flexibility over planning. This methodology enables recovery from failures more often than traditional ways of product development. Stage-gate model. A pioneer of NPD research in the consumer goods sector is Robert G. Cooper. Over the last two decades he conducted significant work in the area of NPD. The Stage-Gate model developed in the 1980s was proposed as a new tool for managing new product development processes. This was mainly applied to the consumer goods industry. The 2010 APQC benchmarking study reveals that 88% of U.S. businesses employ a stage-gate system to manage new products, from idea to launch. In turn, the companies that adopt this system are reported to receive benefits such as improved teamwork, improved success rates, earlier detection of failure, a better launch, and even shorter cycle times – reduced by about 30%. These findings highlight the importance of the stage-gate model in the area of new product development. 
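The idea-to-launch gating that a stage-gate system applies can be sketched in a few lines. The stage names and the numeric scoring threshold below are hypothetical placeholders for illustration, not Cooper's actual criteria:

```python
# Minimal Stage-Gate-style sketch: a project must pass a Go/No-Go gate
# before entering each stage. Stage names and the 0.5 threshold are
# illustrative only.
STAGES = ["Scoping", "Business Case", "Development", "Testing", "Launch"]

def stage_gate(gate_scores, threshold=0.5):
    """gate_scores maps stage name -> gatekeeper score in [0, 1].
    Returns the list of stages the project entered before a No-Go."""
    passed = []
    for stage in STAGES:
        if gate_scores.get(stage, 0.0) < threshold:
            break  # No-Go: the project stops at this gate
        passed.append(stage)
    return passed

# A project with a weak development-stage score is stopped at that gate
scores = {"Scoping": 0.9, "Business Case": 0.7, "Development": 0.4}
result = stage_gate(scores)  # ["Scoping", "Business Case"]
```

The point of the structure is the early-kill behaviour: weak projects are stopped at a gate before consuming development resources, which is one source of the shorter cycle times reported above.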
In the Stage-Gate model of NPD, predevelopment activities are summarised in phases zero and one, with respect to the earlier definition of predevelopment activities: Preliminary technical assessment Source-of-supply assessment: suppliers and partners or alliances Market research: market size and segmentation analysis, VoC (voice of the customer) research Product idea testing Customer value assessment Product definition Business and financial analysis These activities yield essential information to make a Go/No-Go to Development decision. These decisions represent the Gates in the Stage-Gate model. Management The following are types of new product development management structures: Customer-centric product development Customer-centric new product development focuses on finding new ways to solve customer problems and create more customer-satisfying experiences. Companies often rely on technology, but real success comes from understanding customer needs and values. The most successful companies are the ones that differentiate themselves from others, solve major customer problems, offer a compelling customer value proposition, and engage customers directly and systematically. Systematic product development Systematic new product development focuses on creating a process that allows for the collection, review, and evaluation of new product ideas. Having a way in which employees, suppliers, distributors, and dealers become involved in finding and developing new products is important to a company's success. It is also important for companies to have a process in place for monitoring the competition and their products so that they can stay ahead of the curve. Innovation management In order to successfully manage the new product development process, companies must have an innovation management system in place. This system helps to ensure that all aspects of new product development are taken into account and that the company is able to track and assess the progress of new products. 
The innovation management system should also help to foster a culture of innovation within the company, which can help to increase the chances of success for new products. Marketing writers Hyman and Wilkins argue that a company's rate of product innovation should fit between the extremes of being so rapid that "its core range decays" and so slow that its product range "become[s] obsolete". Innovation manager An innovation manager is a senior person appointed to be responsible for implementing and managing the innovation management system. They are also responsible for ensuring that all aspects of new product development are taken into account and that the company is able to track and assess the progress of new products. Cross-functional innovation management committee A cross-functional innovation management committee is a team of individuals from different company departments, including marketing, engineering, design, manufacturing, and research and development, who are responsible for overseeing and managing the new product development process. This committee helps to ensure that all aspects of new product development are taken into account and that the company is able to track and assess the progress of new products. Companies may get a better overall picture of new product development by putting together a cross-functional team, which can help generate fresh ideas and give assistance in evaluating them. Use in turbulent times In difficult economic times, it is even more important for companies to focus on innovation and new product development. Oftentimes, such situations result in a short-sighted focus on cost-cutting and a reduction in spending on new products. However, companies that are able to innovate and create new products will be better positioned for the future. Although counter-intuitive, tough times may even call for a greater emphasis on new product development. 
This is because companies need to find ways to meet the changing needs and tastes of their customers. Innovation can help a company become more competitive and better positioned for the future. In addition, companies can use virtual product development to help reduce costs. Virtual product development uses collaboration technology to remove the need for co-located teams, which can result in significant cost savings, such as a reduction in the G&A (general & administrative) overhead costs of consulting firms. Another way to reduce the cost of new product development is through the use of 24-hour development cycles. This approach allows companies to develop products more quickly and at a lower cost. By using a 24-hour cycle, companies can shorten the time it takes to get a product to market, giving them a competitive advantage that can be extremely useful when there is a sudden change in market conditions or customer needs. By using a variety of methods, such as virtual product development and 24-hour development cycles, companies can reduce the cost of new product development and improve their chances of success. 
Roles There are many different roles in a product development team. Related fields Brand management Engineering Industrial design Marketing Product design Product management See also Choice modelling Commercialization Conceptual economy End user Engineering information management Ideation (creative process) Industrial design Innovation Market penetration Open innovation Packaging Pro-innovation bias Product design Product lifecycle Requirements management Social design Soft launch References External links Beginnings Design Industrial design Systems engineering
New product development
Physics,Engineering
3,885
1,013,539
https://en.wikipedia.org/wiki/BMW%20iDrive
iDrive is an in-car communications and entertainment system, used to control most secondary vehicle systems in late-model BMW cars. It was launched in 2001, first appearing in the E65 7 Series. The system unifies an array of functions under a single control architecture consisting of an LCD panel mounted on the dashboard and a control knob mounted on the center console. iDrive introduced the first multiplexed MOST Bus/Byteflight optical fiber data buses with a very high bit rate in a production vehicle. These are used for high-speed applications such as controlling the television, DVD, or driver assistance systems like adaptive cruise control, infrared night vision or head-up display. iDrive allows the driver (and, in some models, front-seat passengers) to control the climate (air conditioner and heater), audio system (radio and CD player), navigation system, and communication system. iDrive is also used in modern Rolls-Royce models, as Rolls-Royce is owned by BMW, and in the Toyota Supra (2019 onwards), which is a collaboration between BMW and Toyota. BMW also owns the Mini brand, and a pared-down version of iDrive is available on those cars, branded as Connected. iDrive Generations iDrive (1st Gen) An early prototype iDrive (called the Intuitive Interaction Concept) was featured on the BMW Z9 concept in 1999. The production version debuted in September 2001 in the BMW 7 Series (E65) and was built on the VxWorks kernel while the navigation computer used Microsoft Windows CE for Automotive; this can be seen when the system reboots or restarts after a software crash, displaying a "Windows CE" logo. The first generation of iDrive controllers in the 7 Series was equipped with only a rotary knob. The GPS computer ("NAV01", located in the trunk) was only capable of reading map CDs. In October 2003, a menu and a customizable button were added to the controller. 
The new GPS computer ("NAV02") was updated to read DVDs, featured a faster processor and the ability to display the map in bird's-eye view ("perspective"). In April 2005, the iDrive controller was changed again, the turn knob having a new leather top. The last hardware update of the GPS unit ("NAV03") received a faster processor again. The map display is antialiased. The 8.8" wide-screen display was updated, having a brighter screen and the ability to control an MP3-capable 6-CD changer or a BMW iPod Interface. Possible options include a TV tuner, DVD changer, BMW Night Vision, side view camera and a rear view camera. iDrive Business (M-ASK) M-ASK stands for MMI Audio System controller and is manufactured by Becker. This is a limited version of the iDrive computer with a small 6.6" display and is only found on the 5 and 6 Series and the X5 or X6, without the navigation option. In addition, it can be ordered as an option in Europe on the 1 Series, 3 Series and 5 Series as "Business navigation", which has basic navigation abilities. Early versions of the Business navigation could only display directional arrows, but the latest version can also display 2D maps. iDrive Business Navigation uses a different map DVD than iDrive Professional Navigation. In addition, as only one optical drive is available, one cannot use both navigation and listen to a CD simultaneously. When iDrive Professional is ordered, the M-ASK system is replaced by iDrive CCC Professional with a dual slot dash mounted drive computer and larger 8.8" display. iDrive Business is available on the following cars: iDrive Business Navigation (optional) 1 Series E81/E82/E87/E88 3 Series E90/E91/E92/E93 5 Series E60/E61 iDrive Business (default when navigation is not ordered) 6 Series E63/E64 X5 E70 X6 E71 The above list can vary depending on the region. iDrive Professional Navigation (CCC) [iDrive 2.0] It debuted in 2003 with the E60/E61 5 Series and is based on Wind River VxWorks, a real-time operating system. 
CCC stands for Car Communication Computer and uses a larger 8.8" wide-screen display. It was available on the following cars as an option: 1-Series E81/E82/E87/E88 - 06/2004 – 09/2008 3-Series E90/E91/E92/E93 - 03/2005 – 09/2008 5-Series E60/E61 - 12/2003 – 11/2008 6-Series E63/E64 - 12/2003 – 11/2008 X5 E70 - 03/2007 – 10/2009 X6 E71 - 05/2008 – 10/2009 CCC-based systems use a map DVD from Navteq in a dedicated DVD drive. CCC - Update 1 This is a minor update to iDrive Professional that debuted in March 2007. It adds additional programmable buttons in the dashboard to directly access frequent functions and it removes the haptic feedback from the iDrive controller. It is available on the following cars as an option: 1 Series E81/E82/E87/E88 manufactured between March 2007 and September 2008 3 Series E90/E91/E92/E93 manufactured between March 2007 and August 2008 5 Series E60/E61 manufactured between March 2007 and August 2008 6 Series E63/E64 manufactured between March 2007 and August 2008 X5 E70 manufactured until MY2010 X6 E71 CCC - Update 2 This is a minor update that debuted in September 2008 for Model Year 2009 cars equipped with iDrive Professional that did not get the new CIC-based system. These cars get the new iDrive controller that is also used on cars with CIC. The actual iDrive computer (CCC) remains the same. This update is available on the following cars: 5 Series E60/E61 manufactured in September 2008 to February 2009 (to October 2008 for European production) 6 Series E63/E64 manufactured in September 2008 to February 2009 (to October 2008 for European production) iDrive Professional Navigation (CIC) [iDrive 3.0] It debuted in September 2008 with the F01/F02 7 Series. CIC stands for Car Information Computer and is manufactured by Becker, utilizing the QNX operating system. 
It is available on the following cars as an option: 1-Series E81/E82/E87/E88 - 09/2008 – 08/2013 1-Series F20/F21 - 09/2011 – 03/2013 3-Series E90/E91/E92/E93 - 09/2008 – 10/2013 3-Series F30/F31/F34/F80 - 02/2012 – 11/2012 5-Series E60/E61 - 11/2008 – 05/2010 5-Series F07 - 10/2009 – 07/2012 5-Series F10 - 03/2010 – 09/2012 5-Series F11 - 09/2010 – 09/2012 6-Series E63/E64 - 11/2008 – 07/2010 6-Series F06 - 03/2012 – 03/2013 6-Series F12/F13 - 12/2010 – 03/2013 7-Series F01/F02/F03 - 11/2008 – 07/2013 7-Series F04 - 11/2008 – 06/2015 X1 E84 - 10/2009 – 06/2015 X3 F25 - 10/2010 – 04/2013 X5 E70 - 10/2009 – 06/2013 X6 E71 - 10/2009 – 08/2014 Z4 E89 - 02/2009 – 08/2016 The CIC system is a major update to iDrive, replacing the display, computer and the controller. The display is of a higher resolution, and is generally more responsive than CCC, to address one of the common complaints of iDrive. Internet access is also supported. CIC-based systems use maps from TeleAtlas that are installed on an internal 2.5" 80 GB Hard Disk Drive (HDD). This HDD can also store up to 8 GB of music files for playback. For facilitating the uploading of music files to the HDD, a USB port is provided in the glove box. Following 2009 LCI production, all CIC-based iDrive systems support DVD video. This, however, is only operational when the vehicle is in the "Park" position for automatic transmissions, or while the parking brake is set for vehicles that have a manual transmission. DVD audio will continue to play while driving. iDrive Professional NBT (Next Big Thing) [iDrive 4.0] BMW introduced a further update to the iDrive Professional System in early 2012, calling it the "Next Big Thing" (NBT). 
It was introduced in current generation cars as an option, including: 1-Series F20/F21 - 03/2013 – 03/2015 2-Series F22 - 11/2013 – 03/2015 3-Series F30/F31 - 11/2012 – 07/2015 3-Series F34 - 03/2013 – 07/2015 3-Series F80 - 03/2014 – 07/2015 4-Series F32 - 07/2013 – 07/2015 4-Series F33 - 11/2013 – 07/2015 4-Series F36 - 03/2014 – 07/2015 5-Series F07 - 07/2012 – 2016 5-Series F10/F11/F18 - 09/2012 – 2016 6-Series F06/F12/F13 - 03/2013 – 2016 7-Series F01/F02/F03 - 07/2012 – 06/2015 X3 F25 - 04/2013 – 08/2017 X4 F26 - 04/2014 – 08/2017 X5 F15 - 08/2014 – 07/2016 X5 F85 - 12/2014 – 07/2016 X6 F16 - 08/2014 – 07/2016 X6 F86 - 12/2014 – 07/2016 i3 - 09/2013 – 09/2017 i8 - 04/2014 – 09/2017 The update includes extensive hardware and software changes including cosmetic enhancements, faster processor, more memory, detailed 3D maps and improved routing. In addition, the capacity of the internal HDD has been increased from 10GB to 20GB. NBT also introduced a redesigned iDrive controller with optional handwriting recognition capabilities and gesture controls. This was achieved through a capacitive touch pad on top of the iDrive controller. NBT also removed the need for a separate COMBOX module for A2DP and USB media as those functions were integrated directly into the NBT Head Unit. BMW Online was also replaced with the newly introduced Connected Drive, which relied on a hardware TCB module with a built-in SIM card for mobile connectivity. iDrive Professional NBT EVO [iDrive 5.0/6.0] NBT EVO (Evolution) was released starting in 2016 and represented the first major change in the operational logic of iDrive since being introduced in 2001. The familiar vertical list of text menus was replaced by a horizontal set of dynamic tiles, each able to show real time information. This update saw major updates to the iDrive hardware, including the ability to interact with the system via touch screen for the first time. 
BMW's Connected Drive services were further enhanced with this upgrade to the iDrive system, and the TCB module was replaced with a newer, faster ATM module. NBT EVO also introduced basic gesture controls as an optional extra on select BMW models. Three Interface options exist for NBT EVO. ID4 looks like CIC NBT while ID5 and ID6 feature the new horizontal tile interface. NBT EVO was available on the following BMW Models: 1-Series F20/F21 - 03/2015 – 2019 2-Series F22 - 03/2015 – 2021 2-Series F23 - 11/2014 – 2021 3-Series F30/F31/F34/F80 - 07/2015 – 2018 3-Series G20 - 2019 - 2022 (base system, when no Live Cockpit Plus or Professional is equipped) 4-Series F32/F33/F36 - 07/2015 – 2019 5-Series G30 - 10/2016 – 2019 6-Series F06/F12/F13 - 03/2013 – 2018 6-Series G32 - 07/2017 – 2018 7-Series G12 - 07/2015 – 2019 X1 F48 - 06/2015 – 06/2022 X2 F39 - 11/2017 – 10/2022 X3 F25 - 03/2016 – 2017 X3 G01 - 11/2017 – present X4 F26 - 03/2016 – 2018 X5 F15/F85 - 07/2016 – 2018 X6 F16/F86 - 07/2016 – 2019 i3 (ID6.0) 09/2018–07/2022 i8 (ID6.0) 09/2018- 2020 BMW Live Cockpit series [iDrive 7.0] iDrive consists of the MGU hardware (Media Graphics Unit) running the 7th generation of iDrive called BMW Operating System 7.0. Two Live Cockpit configurations are available: Live Cockpit Plus and Live Cockpit Professional. The Live Cockpit Plus system uses a hybrid analog/digital instrument cluster with a 5.7-inch Driver's Information Display and a 8.8-inch main display. In the Live Cockpit Professional System these are upgraded to a 12.3-inch digital instrument cluster and a 10.25-inch main display. 
iDrive 7.0 is available on the following BMW models: BMW 1 Series (F40) BMW 2 Series (F44) BMW 3 Series (G20) BMW 4 Series (G22) BMW 5 Series (G30) BMW 6 Series (G32) BMW 7 Series (G11) BMW 8 Series (G15) BMW X3 (G01) BMW iX3 (G08) BMW X4 (G02) BMW X5 (G05) BMW X6 (G06) BMW X7 (G07) BMW Z4 (G29) BMW Curved Display [iDrive 8.0] BMW unveiled the 8th generation of iDrive in 2021 with BMW Curved Display. iDrive 8.0 is available on the following BMW models: BMW 1 Series (F70) BMW 2 Series Active Tourer (U06) BMW 2 Series Coupé (G42) (after summer 2022) BMW 3 Series (G20 facelift) BMW 5 Series (G60) BMW 7 Series (G70) BMW iX1 BMW i4 BMW iX BMW XM BMW X1 (U11) BMW X2 (U10) BMW X3 (G45) BMW X5 (G05 facelift) BMW X6 (G06 facelift) BMW X7 (G07 facelift) BMW is reportedly releasing iDrive 9 on a new infotainment head unit based on Android OS starting in March 2023. Rationale The design rationale of iDrive is to replace an array of controls for the above systems with an all-in-one unit. The controls necessary for vehicle control and safety, such as the headlights and turn signals, are still located in the immediate vicinity of the steering column. Since, in the rationale of the designers, the air conditioning, car audio, navigation and communication controls are not used equally often, they have been moved into a central location. The iDrive M-ASK and CCC systems were based around the points of a compass (north, south, east, west) with each direction corresponding with a specific area. These areas are also colour-coded providing identification as to which part of the system is currently being viewed. North (blue) for communication East (green) for navigation (In some models without navigation, this option is replaced by the On Board Computer) South (brown) for entertainment West (red) for climate control Starting in 2007, iDrive added programmable buttons (6 USA/Japan, 8 in Europe) to the dashboard, breaking tradition of having the entire system operated via the control knob. 
Each button can be programmed to instantly access any feature within iDrive (such as a particular navigation route, or one's favorite radio station). In addition, a dedicated AM/FM button, and a Mode button (to switch between entertainment sources) were added for North American-market vehicles. Older versions of iDrive used a widescreen display that was split into a 2/3 main window, and 1/3 "Assistance Window". This allowed the driver to use a function or menu, while simultaneously maintaining secondary information. For example, if the driver was not in the Navigation menu, they could still see a map on the assistance window. Other information that could be displayed included navigation route directions and a trip computer. Controversy iDrive caused significant controversy among users, the automotive media, and critics when it was first introduced. Many reviewers of BMW vehicles in automobile magazines disapproved of the system. Criticisms of iDrive included its steep learning curve and its tendency to cause the driver to look away from the road too much. Most users report that they adapt to the system after about one year of practice, and the advent of voice controls has reduced the learning curve greatly. A new iDrive system (CIC) was introduced in September 2008 to address most of the complaints. iDrive NBT, introduced in 2012, brought further improvements. References External links Third Generation BMW iDrive in the F01/F02 BMW 7 Series Operation Video Advanced driver assistance systems Automotive technology tradenames BMW Human–computer interaction In-car entertainment Rolls-Royce Vehicle telematics
BMW iDrive
Engineering
3,814
19,800,483
https://en.wikipedia.org/wiki/FEKO
Feko is a computational electromagnetics software product developed by Altair Engineering. The name is derived from the German acronym "Feldberechnung für Körper mit beliebiger Oberfläche", which can be translated as "field calculations involving bodies of arbitrary shape". It is a general purpose 3D electromagnetic (EM) simulator. Feko originated in 1991 from research activities of Dr. Ulrich Jakobus at the University of Stuttgart, Germany. Cooperation between Dr. Jakobus and EM Software & Systems (EMSS) resulted in the commercialisation of FEKO in 1997. In June 2014, Altair Engineering acquired 100% of EMSS-S.A. and its international distributor offices in the United States, Germany and China, leading to the addition of FEKO to the Altair Hyperworks suite of engineering simulation software. The software is based on the Method of Moments (MoM) integral formulation of Maxwell's equations and pioneered the commercial implementation of various hybrid methods such as: Finite Element Method (FEM) / MoM where a FEM region is bounded with an integral equation based boundary condition to ensure full coupling between the FEM and MoM solution areas of the problem. MoM / Physical Optics (PO) where computationally expensive MoM current elements are used to excite computationally inexpensive PO elements, inducing currents on the PO elements. Special features in the FEKO implementation of the MoM/PO hybrid include the analysis of dielectric or magnetically coated metallic surfaces. MoM / Geometrical Optics (GO) where rays are launched from radiating MoM elements. MoM / Uniform Theory of Diffraction (UTD) where computationally expensive MoM current elements are used to excite canonical UTD shapes (plates, cylinders) with ray-based principles of which the computational cost is independent of wavelength. A Finite Difference Time Domain (FDTD) solver was added in May 2014 with the release of FEKO Suite 7.0. 
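As a flavour of the Method of Moments on which Feko's solver is built, the classic textbook electrostatic example solves for the charge on a thin wire held at a fixed potential by discretising it into segments (pulse basis functions, point matching) and solving the resulting linear system. This is a generic MoM illustration, not Feko's actual formulation; the wire dimensions, segment count, and self-term approximation are standard textbook choices.

```python
# Textbook Method of Moments illustration: electrostatic charge on a thin
# straight wire held at V volts, using pulse basis functions and point
# matching. A generic MoM sketch, not Feko's implementation.
import numpy as np

def wire_charge(L=1.0, a=0.001, N=40, V=1.0):
    """Return (match points, per-segment charge) for a wire of length L,
    radius a, split into N segments and held at potential V."""
    h = L / N                                # segment length
    x = (np.arange(N) + 0.5) * h             # match points at segment centers
    Z = np.empty((N, N))                     # potential coefficient matrix
    for m in range(N):
        for n in range(N):
            if m == n:
                # standard thin-wire self-term approximation (valid for h >> a)
                Z[m, n] = 2.0 * np.log(h / a)
            else:
                Z[m, n] = h / abs(x[m] - x[n])
    eps0 = 8.854e-12
    # Enforce potential V at every match point: Z * sigma = 4*pi*eps0*V
    sigma = np.linalg.solve(Z, np.full(N, 4.0 * np.pi * eps0 * V))
    return x, sigma * h                      # charge per segment

x, q = wire_charge()
total_charge = q.sum()   # total charge = capacitance * V
```

The computed charge distribution is symmetric and peaks at the wire ends, and the total charge divided by the voltage gives the wire's capacitance; MoM codes such as Feko scale this integral-equation approach up to currents on full 3D surfaces.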
FEKO
Mathematics
408
1,933,935
https://en.wikipedia.org/wiki/Fetish%20art
Fetish art is art that depicts people in fetishistic situations such as S&M, domination/submission, bondage, transvestism and the like, sometimes in combination. It may simply depict a person dressed in fetish clothing, which could include undergarments, stockings, high heels, corsets, or boots. A common fetish theme is a woman dressed as a dominatrix. History Many of the 'classic' 1940s, 1950s and 1960s-era fetish artists such as Eric Stanton and Gene Bilbrew began their careers at Irving Klaw's Movie Star News company (later Nutrix), creating drawings for episodic illustrated bondage stories. In 1946 fetish artist John Coutts (a.k.a. John Willie) founded Bizarre magazine. Bizarre was first published in Canada, then printed in the U.S., and was the inspiration for a number of new fetish magazines such as Bizarre Life. In 1957 English engineer John Sutcliffe founded Atomage magazine, which featured images of the rubber clothing he had made. Sutcliffe's work was an inspiration for Diana Rigg's leather-catsuit-wearing character in The Avengers, a TV show that "opened the floodgates for fetish-SM images". In the 1970s and 1980s, fetish artists such as Robert Bishop were published extensively in bondage magazines. In more recent years, the annual SIGNY awards have been awarded to the bondage artists voted the best of that year. Many artists working in the mainstream comic book industry have included fetishistic imagery in their work, usually as a shock tactic or to denote villainy or corruption. The boost that depictions of beautiful women in tight fetish outfits give to the sales of comics to a mostly teenage male comic-buying audience may also be a factor. In 1950s America, comics with bondage or fetish themes began appearing. Around the same time, fetish artists influenced the cartoons of George Petty, Alberto Vargas and others, which featured in magazines like Playboy and Esquire.
One example of fetish imagery in comics is the catsuit-wearing, whip-wielding Catwoman, who has been called "an icon of fetish art". Many S&M, leather and fetish artists have produced images depicting urine fetishism ("watersports"), including Domino, Touko Laaksonen ("Tom of Finland"), MATT, and Bill Schmeling ("The Hun"). Mainstream fine artists such as Allen Jones have included strong fetish elements in their work. Hajime Sorayama's Shunga- and Shibari-style works are an example of fetish art that has crossed over to mainstream collectors. Taschen books have featured Sorayama, whom fellow artists call a cross between Norman Rockwell and Salvador Dalí, or an imaginative modern-day Vargas. Sorayama's diverse robotic illustrations are in the permanent collections of the New York City Museum of Modern Art (MoMA) and the Smithsonian Institution, and his fetish art is in the private collection of the World Erotic Art Museum Miami. The works of contemporary fetish artists such as Roberto Baldazzini and Michael Manning are published by companies such as NBM Publishing and Taschen. See also Erotic art Robert Bishop Charles Guyette Eric Stanton Gene Bilbrew Jeff Gord Irving Klaw John Willie List of fetish artists Damsel in distress Fetish model Photomanipulation Yiff References Visual arts genres Erotic art Fetish subculture
Fetish art
Biology
726
4,016,398
https://en.wikipedia.org/wiki/Plant%20taxonomy
Plant taxonomy is the science that finds, identifies, describes, classifies, and names plants. It is one of the main branches of taxonomy (the science that finds, describes, classifies, and names living things). Plant taxonomy is closely allied to plant systematics, and there is no sharp boundary between the two. In practice, "plant systematics" involves relationships between plants and their evolution, especially at the higher levels, whereas "plant taxonomy" deals with the actual handling of plant specimens. The precise relationship between taxonomy and systematics, however, has changed along with the goals and methods employed. Plant taxonomy is well known for being turbulent, and traditionally not having any close agreement on circumscription and placement of taxa. See the list of systems of plant taxonomy. Background Classification systems serve the purpose of grouping organisms by characteristics common to each group. Plants are distinguished from animals by various traits: they have cell walls made of cellulose, polyploidy, and they exhibit sedentary growth. Where animals have to eat organic molecules, plants are able to change energy from light into organic energy by the process of photosynthesis. The basic unit of classification is species, a group able to breed amongst themselves and bearing mutual resemblance, a broader classification is the genus. Several genera make up a family, and several families an order. History of classification The botanical term angiosperm, or flowering plant, comes from the Greek (; 'bottle, vessel') and (; 'seed'); in 1690, the term Angiospermae was coined by Paul Hermann, albeit in reference to only a small subset of the species that are known as angiosperms, today. 
Hermann's Angiospermae included only flowering plants possessing seeds enclosed in capsules, distinguished from his Gymnospermae, which were flowering plants with achenial or schizo-carpic fruits (the whole fruit, or each of its pieces, being here regarded as a seed and naked). The terms Angiospermae and Gymnospermae were used by Carl Linnaeus in the same sense, albeit with restricted application, in the names of the orders of his class Didynamia. The terms angiosperm and gymnosperm fundamentally changed meaning in 1827, when Robert Brown determined the existence of truly naked ovules in the Cycadeae and Coniferae. The term gymnosperm was, from then on, applied to seed plants with naked ovules, and the term angiosperm to seed plants with enclosed ovules. However, for many years after Brown's discovery, the primary division of the seed plants was seen as between monocots and dicots, with gymnosperms as a small subset of the dicots. In 1851, Hofmeister discovered the changes occurring in the embryo-sac of flowering plants, and determined the correct relationships of these to the Cryptogamia. This fixed the position of Gymnosperms as a class distinct from Dicotyledons, and the term Angiosperm then gradually came to be accepted as the suitable designation for the whole of the flowering plants (other than Gymnosperms), including the classes of Dicotyledons and Monocotyledons. This is the sense in which the term is used today. In most taxonomies, the flowering plants are treated as a coherent group; the most popular descriptive name has been Angiospermae, with Anthophyta (lit. 'flower-plants') a second choice (both unranked). The Wettstein system and Engler system treated them as a subdivision (Angiospermae). The Reveal system also treated them as a subdivision (Magnoliophytina), but later split it to Magnoliopsida, Liliopsida, and Rosopsida. The Takhtajan system and Cronquist system treat them as a division (Magnoliophyta).
The Dahlgren system and Thorne system (1992) treat them as a class (Magnoliopsida). The APG system of 1998, and the later 2003 and 2009 revisions, treat the flowering plants as an unranked clade without a formal Latin name (angiosperms). A formal classification was published alongside the 2009 revision in which the flowering plants rank as a subclass (Magnoliidae). The internal classification of this group has undergone considerable revision. The Cronquist system, proposed by Arthur Cronquist in 1968 and published in its full form in 1981, is still widely used but is no longer believed to accurately reflect phylogeny. A consensus about how the flowering plants should be arranged has recently begun to emerge through the work of the Angiosperm Phylogeny Group (APG), which published an influential reclassification of the angiosperms in 1998. Updates incorporating more recent research were published as the APG II system in 2003, the APG III system in 2009, and the APG IV system in 2016. Traditionally, the flowering plants are divided into two groups: Dicotyledoneae or Magnoliopsida, and Monocotyledoneae or Liliopsida, to which the Cronquist system ascribes the classes Magnoliopsida (from "Magnoliaceae") and Liliopsida (from "Liliaceae"). Other descriptive names allowed by Article 16 of the ICBN include Dicotyledones or Dicotyledoneae, and Monocotyledones or Monocotyledoneae, which have a long history of use. In plain English, their members may be called "dicotyledons" ("dicots") and "monocotyledons" ("monocots"). The Latin behind these names refers to the observation that the dicots most often have two cotyledons, or embryonic leaves, within each seed. The monocots usually have only one, but the rule is not absolute either way. From a broad diagnostic point of view, the number of cotyledons is neither a particularly handy nor a reliable character.
Recent studies, as per the APG, show that the monocots form a monophyletic group (a clade), but that the dicots are paraphyletic; nevertheless, the majority of dicot species fall into a clade with the eudicots (or tricolpates), with most of the remaining going into another major clade with the magnoliids (containing about 9,000 species). The remainder includes a paraphyletic grouping of early-branching taxa known collectively as the basal angiosperms, plus the families Ceratophyllaceae and Chloranthaceae. Plantae, the Plant Kingdom The plant kingdom is traditionally divided according to the following: Identification, classification and description of plants Three goals of plant taxonomy are the identification, classification and description of plants. The distinction between these three goals is important and often overlooked. Plant identification is a determination of the identity of an unknown plant by comparison with previously collected specimens or with the aid of books or identification manuals. The process of identification connects the specimen with a published name. Once a plant specimen has been identified, its name and properties are known. Plant classification is the placing of known plants into groups or categories to show some relationship. Scientific classification follows a system of rules that standardizes the results, and groups successive categories into a hierarchy. For example, the family to which the lilies belong is classified as follows: Kingdom: Plantae Division: Magnoliophyta Class: Liliopsida Order: Liliales Family: Liliaceae The classification of plants results in an organized system for the naming and cataloging of future specimens, and ideally reflects scientific ideas about inter-relationships between plants. The set of rules and recommendations for formal botanical nomenclature, including plants, is governed by the International Code of Nomenclature for algae, fungi, and plants abbreviated as ICN. 
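The ranked hierarchy in the lily example above can be represented as a simple ordered mapping. This is an illustrative sketch only: the rank names and taxon names come from the article's example, while the helper function and its name are made up for demonstration.

```python
from collections import OrderedDict

# The article's example classification of the family to which lilies belong.
lily_classification = OrderedDict([
    ("Kingdom", "Plantae"),
    ("Division", "Magnoliophyta"),
    ("Class", "Liliopsida"),
    ("Order", "Liliales"),
    ("Family", "Liliaceae"),
])

def full_classification(taxon):
    """Render the hierarchy from the most to the least inclusive rank."""
    return " > ".join(f"{rank}: {name}" for rank, name in taxon.items())

print(full_classification(lily_classification))
# Kingdom: Plantae > Division: Magnoliophyta > Class: Liliopsida > Order: Liliales > Family: Liliaceae
```

The ordered structure mirrors the defining property of scientific classification described above: successive categories are grouped into a strict hierarchy.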
Plant description is a formal description of a newly discovered species, usually in the form of a scientific paper using ICN guidelines. The names of these plants are then registered on the International Plant Names Index along with all other validly published names. Classification systems These include; APG system (angiosperm phylogeny group) APG II system (angiosperm phylogeny group II) APG III system (angiosperm phylogeny group III) APG IV system (angiosperm phylogeny group IV) Bessey system (a system of plant taxonomy) Cronquist system (taxonomic classification of flowering plants) Melchior system Online databases Ecocrop EPPO Code GRIN See Category: Online botany databases See also American Society of Plant Taxonomists Biophysical environment Botanical nomenclature Citrus taxonomy Environmental protection Herbarium History of plant systematics International Association for Plant Taxonomy Taxonomy of cultivated plants References Sources External links Plant systematics Tracking Plant Taxonomy Updates discussion group on Facebook
Plant taxonomy
Biology
1,871
325,260
https://en.wikipedia.org/wiki/Figurate%20number
The term figurate number is used by different writers for members of different sets of numbers, generalizing from triangular numbers to different shapes (polygonal numbers) and different dimensions (polyhedral numbers). The term can mean a polygonal number; a number represented as a discrete r-dimensional regular geometric pattern of r-dimensional balls, such as a polygonal number (for r = 2) or a polyhedral number (for r = 3); or a member of the subset of the sets above containing only triangular numbers, pyramidal numbers, and their analogs in other dimensions. Terminology Some kinds of figurate number were discussed in the 16th and 17th centuries under the name "figural number". In historical works about Greek mathematics the preferred term used to be figured number. In a use going back to Jacob Bernoulli's Ars Conjectandi, the term figurate number is used for triangular numbers made up of successive integers, tetrahedral numbers made up of successive triangular numbers, etc. These turn out to be the binomial coefficients. In this usage the square numbers (4, 9, 16, 25, ...) would not be considered figurate numbers when viewed as arranged in a square. A number of other sources use the term figurate number as synonymous for the polygonal numbers, either just the usual kind or both those and the centered polygonal numbers. History The mathematical study of figurate numbers is said to have originated with Pythagoras, possibly based on Babylonian or Egyptian precursors. Generating whichever class of figurate numbers the Pythagoreans studied using gnomons is also attributed to Pythagoras. Unfortunately, there is no trustworthy source for these claims, because all surviving writings about the Pythagoreans are from centuries later. Speusippus is the earliest source to expose the view that ten, as the fourth triangular number, was in fact the tetractys, supposed to be of great importance for Pythagoreanism. Figurate numbers were a concern of the Pythagorean worldview.
It was well understood that some numbers could have many figurations, e.g. 36 is both a square and a triangle and also various rectangles. The modern study of figurate numbers goes back to Pierre de Fermat, specifically the Fermat polygonal number theorem. Later, it became a significant topic for Euler, who gave an explicit formula for all triangular numbers that are also perfect squares, among many other discoveries relating to figurate numbers. Figurate numbers have played a significant role in modern recreational mathematics. In research mathematics, figurate numbers are studied by way of the Ehrhart polynomials, polynomials that count the number of integer points in a polygon or polyhedron when it is expanded by a given factor. Triangular numbers and their analogs in higher dimensions The triangular numbers for n = 1, 2, 3, ... are the result of the juxtaposition of the linear numbers (linear gnomons) for n = 1, 2, 3, ...: these are the binomial coefficients C(n + 1, 2). This is the case r = 2 of the fact that the rth diagonal of Pascal's triangle consists of the figurate numbers for the r-dimensional analogs of triangles (r-dimensional simplices). The simplicial polytopic numbers for r = 1, 2, 3, 4, ... are: P_1(n) = n (linear numbers), P_2(n) = n(n + 1)/2 (triangular numbers), P_3(n) = n(n + 1)(n + 2)/6 (tetrahedral numbers), P_4(n) = n(n + 1)(n + 2)(n + 3)/24 (pentachoric numbers, pentatopic numbers, 4-simplex numbers), P_r(n) = C(n + r − 1, r) (r-topic numbers, r-simplex numbers). The terms square number and cubic number derive from their geometric representation as a square or cube. The difference of two positive triangular numbers is a trapezoidal number. Gnomon The gnomon is the piece added to a figurate number to transform it to the next larger one. For example, the gnomon of the square number n² is the odd number, of the general form 2n + 1, n = 1, 2, 3, .... An 8 × 8 square can be built up from such successive gnomons. To transform from the n-square (the square of size n) to the (n + 1)-square, one adjoins 2n + 1 elements: one to the end of each row (n elements), one to the end of each column (n elements), and a single one to the corner.
For example, when transforming the 7-square to the 8-square, we add 15 elements; these adjunctions form the outermost gnomon of the 8-square. This gnomonic technique also provides a mathematical proof that the sum of the first n odd numbers is n²; the 8 × 8 square illustrates 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 8². There is a similar gnomon with centered hexagonal numbers adding up to make cubes of each integer number. Notes References Integer sequences
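The identities discussed above are easy to verify directly: the r-topic numbers are binomial coefficients, and summing successive odd gnomons reproduces the square numbers. A short sketch (the function name is illustrative):

```python
from math import comb

# r-topic (r-simplex) numbers as binomial coefficients: P_r(n) = C(n + r - 1, r).
# r = 1 gives the linear numbers, r = 2 the triangular numbers,
# r = 3 the tetrahedral numbers, and so on.
def simplex_number(r, n):
    return comb(n + r - 1, r)

triangular = [simplex_number(2, n) for n in range(1, 9)]
print(triangular)  # [1, 3, 6, 10, 15, 21, 28, 36]

tetrahedral = [simplex_number(3, n) for n in range(1, 5)]
print(tetrahedral)  # [1, 4, 10, 20]

# Gnomon check: the sum of the first n odd numbers (the successive square
# gnomons 1, 3, 5, ..., 2n - 1) is n^2.
for n in range(1, 20):
    assert sum(2 * k + 1 for k in range(n)) == n ** 2
```

Note that 36 appears as the eighth triangular number as well as 6², matching the article's remark that a number can have several figurations.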
Figurate number
Mathematics
962
159,669
https://en.wikipedia.org/wiki/Chinese%20astrology
Chinese astrology is based on traditional Chinese astronomy and the Chinese calendar. Chinese astrology flourished during the Han dynasty (2nd century BC to 2nd century AD). Chinese astrology has a close relation with Chinese philosophy (theory of the three harmonies: heaven, earth, and human), and uses the principles of yin and yang, wuxing (five phases), the ten Heavenly Stems, the twelve Earthly Branches, the lunisolar calendar (moon calendar and sun calendar), and the time calculation after year, month, day, and shichen (, double hour). These concepts are not readily found or familiar in Western astrology or culture. History and background Chinese astrology was elaborated during the Zhou dynasty (1046–256 BC) and flourished during the Han dynasty (2nd century BC to 2nd century AD). During the Han period, the familiar elements of traditional Chinese culture—the yin-yang philosophy, the theory and technology of the five elements (Wuxing), the concepts of heaven and earth, and Taoist, Buddhist and Confucian morality—were brought together to formalize the philosophical principles of Chinese medicine and divination, astrology and alchemy. The five classical planets are associated with the wuxing: Venus—Metal (White Tiger) Jupiter—Wood (Azure Dragon) Mercury—Water (Black Tortoise) Mars—Fire (Vermilion Bird) (may be associated with the phoenix which was also an imperial symbol along with the Dragon) Saturn—Earth (Yellow Dragon) According to Chinese astrology, a person's fate can be determined by the position of the major planets at the person's birth along with the positions of the Sun, Moon, comets, the person's time of birth, and zodiac sign. The system of the twelve-year cycle of animal signs was built from observations of the orbit of Jupiter (the Year Star; ). Following the orbit of Jupiter around the Sun, Chinese astronomers divided the celestial circle into 12 sections, and rounded it to 12 years (from 11.86). 
Jupiter is associated with the constellation Sheti (- Boötes) and is sometimes called Sheti. A system of computing one's predestined fate is based on birthday, birth season, and birth hour, known as zi wei dou shu (), or Purple Star Astrology, is still used regularly in modern-day Chinese astrology to divine one's fortune. The 28 Chinese constellations, Xiu (), are quite different from Western constellations. For example, the Big Bear (Ursa Major) is known as Dou (); the belt of Orion is known as Shen (), or the "Happiness, Fortune, Longevity" trio of demigods. The seven northern constellations are referred to as Xuan Wu (). Xuan Wu is also known as the spirit of the northern sky or the spirit of water in Taoist belief. In addition to astrological readings of the heavenly bodies, the stars in the sky form the basis of many fairy tales. For example, the Summer Triangle is the trio of the cowherd (Altair), the weaving maiden fairy (Vega), and the "tai bai" fairy (Deneb). The two forbidden lovers were separated by the silvery river (the Milky Way). Each year on the seventh day of the seventh month in the Chinese calendar, the birds form a bridge across the Milky Way. The cowherd carries their two sons (the two stars on each side of Altair) across the bridge to reunite with their fairy mother. The tai bai fairy acts as the chaperone of these two immortal lovers. Chinese zodiac Chinese astrology has a close relation with Chinese philosophy. The core values and concepts of Chinese philosophy originate from Taoism. Table of the sixty-year calendar The following table shows the 60-year cycle matched up to the Western calendar for the years 1924–2043 (see sexagenary cycle article for years 1924–1983). This is only applied to Chinese Lunar calendar. The sexagenary cycle begins at lichun. Each of the Chinese lunar years are associated with a combination of the ten Heavenly Stems () and the twelve Earthly Branches () which make up the 60 Stem-Branches () in a sexagenary cycle. 
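The sexagenary pairing of the ten Heavenly Stems with the twelve Earthly Branches described above can be computed arithmetically. This sketch uses the common mapping from Gregorian years (with 1984, a Jia-Zi year, as a cycle start) and pinyin names; it deliberately ignores the article's caveat that the cycle properly begins at lichun, so dates early in a Gregorian year actually belong to the previous pair.

```python
# Ten Heavenly Stems and twelve Earthly Branches (pinyin), plus the
# zodiac animal conventionally attached to each branch.
STEMS = ["Jia", "Yi", "Bing", "Ding", "Wu", "Ji", "Geng", "Xin", "Ren", "Gui"]
BRANCHES = ["Zi", "Chou", "Yin", "Mao", "Chen", "Si", "Wu", "Wei",
            "Shen", "You", "Xu", "Hai"]
ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake", "Horse",
           "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def stem_branch(year):
    """Stem-branch pair for a Gregorian year; 1984 = Jia-Zi (cycle start).

    Approximate: the sexagenary year really begins at lichun, not January 1.
    """
    offset = (year - 1984) % 60  # lcm(10, 12) = 60, hence the 60-year cycle
    return f"{STEMS[offset % 10]}-{BRANCHES[offset % 12]} ({ANIMALS[offset % 12]})"

print(stem_branch(1984))  # Jia-Zi (Rat)
print(stem_branch(2024))  # Jia-Chen (Dragon)
```

Because 60 is the least common multiple of 10 and 12, only 60 of the 120 conceivable stem-branch combinations occur, which is exactly the cycle shown in the article's table.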
Wuxing Although it is usually translated as 'element', the Chinese word xing literally means something like 'changing states of being', 'permutations' or 'metamorphoses of being'. In fact, Sinologists cannot agree on one single translation. The Chinese notion of 'element' is therefore quite different from the Western one. In Western, Indian Vedic, and Japanese godai thought, the elements were seen as the basic building blocks of matter, static or stationary. The Chinese 'elements', by contrast, were seen as ever changing; a more literal translation of wuxing is simply 'the five changes', and in traditional Chinese medicine they are commonly referred to as phases. Things seen as associated with each xing are listed below. Wood () The East () Springtime () Azure Dragon () The Planet Jupiter () The Color Green () Liver () and Gall bladder () Fire () The South () Summer () Vermilion Bird/Vermilion Phoenix () The Planet Mars () The Color Red () Circulatory system, Heart () and Small intestine () Earth () Center () Change of seasons (the last month of the season) The Yellow Dragon () The Planet Saturn () The Color Yellow () Digestive system, Spleen () and Stomach () Metal () The West () Autumn () White Tiger () The Planet Venus () The Color White () Respiratory system, Lung () and Large intestine () Water () The North () Winter () Black Tortoise () The Planet Mercury () The Color Black/Blue () Skeleton (), Urinary bladder and Kidney () Wuxing generating cycle ( sheng) (Inter-promoting, begetting, engendering, mothering or enhancing cycle) Generating: Wood fuels Fire to burn; Fire creates Earth (ash); Earth produces minerals, Metal; Metal creates Water from condensation; Water nourishes Wood to grow. Wuxing regulating cycle ( kè) The regulating cycle is important to create restraints in the whole system.
For example, if Fire was allowed to burn out of control, it would be devastating and destructive as we see in nature in the form of bush fires or internally as high fevers, (Destructing, overcoming or inter-restraining or weakening cycle). Fire makes Metal flexible; Metal adds the minerals to Wood for there to be strong upward growth; Wood draws water from the Earth to create stability for building; Earth gives Water direction, like the banks of a river; Water controls Fire by cooling its heat. See also Chinese calendar correspondence table Chinese spiritual world concepts Chinese fortune telling Chinese zodiac Da Liu Ren Dunhuang Star Chart Feng shui Four Pillars of Destiny Qimen Dunjia Symbolic stars Synoptical astrology Tai Sui Tai Yi Shen Shu Traditional Chinese star names References Further reading Taoist divination Astrology Astrology by tradition Divination astrology astrology
Chinese astrology
Astronomy
1,491
53,434,474
https://en.wikipedia.org/wiki/Vertical%20Offshore%20Reference%20Frames
Vertical Offshore Reference Frames (VORF) is a set of high resolution surface models, published and maintained by the UK Hydrographic Office, which together define a vertical datum for hydrographic surveying and charting in the United Kingdom and Ireland. Tidal surface models The following tidal and sea level surfaces are included: Chart Datum (CD) Lowest Astronomical Tide (LAT) Mean Sea Level (MSL) Mean Low Water Springs (MLWS) Mean High Water Springs (MHWS) Highest Astronomical Tide (HAT) Land datums including Ordnance Datums Newlyn, Belfast and Poolbeg The surfaces are modelled with respect to the terrestrial reference frame used for satellite navigation (GNSS) positioning, ETRS89. Thus VORF directly permits the use of high precision GNSS in hydrographic survey, and also allows the capability of transforming vertical data between the different datums. The UK Maritime & Coastguard Agency requires the use of VORF for tidal reductions as part of its civil hydrography programme. Files A main product of the VORF project was the gridded vertical correction files which deliver the capability to transfer heights and depths from one vertical reference system to another, "allowing the direct use of depth data from surveys which is referred to a WGS84 compatible datum rather than Chart Datum and thus enabling Hydrographic surveyors to survey without the need to measure tides". This is accomplished via a set of files, each file containing a grid of height corrections to apply to GNSS-derived heights to translate them to one of several VORF models. There is higher resolution in estuaries and inlets, but for most of the areas covered, there is a single height correction for each roughly 900 by 500 metre rectangle. VORF correction files are purchased from the same source as Admiralty charts. Format The Admiralty provides VORF data in the form of ".vrf" files, each of which contains data representing the separation between a given pair of datums. 
The filename indicates the VORF model version, the south-west corner of the area covered, and the pair of datums it represents. The files can be bought and downloaded from the Admiralty Marine Data Portal, and are in plain text (ASCII) format, so can be viewed in any standard text editor. They contain a list of positions forming a regular grid. At each position the separation between the datums is given, and the uncertainty in this value. Positions are in decimal degrees and separation and uncertainty are in metres. The grid resolution is 0.008° over the majority of the VORF area, which corresponds to boxes of approximately 900 x 500 metres. There is higher resolution (typically 0.002° to 0.0005°) around rivers and estuaries, and around Portland Bill. Where a file contains higher-resolution data, the high-resolution values are at the end of the file. Pricing The data are sold by the "block", with a VORF block covering an area of 0.088° latitude by 0.16° longitude. This makes each block about a 10 kilometre square (96 square kilometres), containing about 220 data points. The price (as at July 2021) starts at £216 per block, with discounts for quantity. Use The VORF datum surfaces will generally be used by automated survey software, and for this purpose, it may be necessary to rearrange the data to create a suitable input file for the software in question. For example, software packages EIVA Navipac and Qinsy require specialised procedures. Development history The VORF project ran from 2005 as a collaborative research project, sponsored by the UKHO, with a consortium comprising Proudman Oceanographic Laboratory, DTU, and led by University College London. It was delivered to the UKHO in 2008. VORF was developed using satellite altimetry, tide gauge observations, geoid and tidal modelling, and GNSS observations. VORF meets its target inshore accuracy of 10 cm in most of the domain of applicability. 
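A .vrf file as described above (a regular grid of positions in decimal degrees, with the datum separation and its uncertainty in metres at each point) could be read with a sketch like the following. The exact column order and the presence of header rows are assumptions for illustration, not taken from UKHO documentation; real files should be checked against the Admiralty's own format notes.

```python
from dataclasses import dataclass

@dataclass
class VorfPoint:
    lat: float          # latitude, decimal degrees
    lon: float          # longitude, decimal degrees
    separation: float   # separation between the two datums (m)
    uncertainty: float  # uncertainty in the separation (m)

def parse_vrf(text):
    """Parse whitespace-separated rows of lat, lon, separation, uncertainty.

    Assumed layout for illustration only: one grid point per line, four
    numeric fields; any header or malformed lines are skipped.
    """
    points = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) != 4:
            continue
        try:
            lat, lon, sep, unc = map(float, fields)
        except ValueError:
            continue
        points.append(VorfPoint(lat, lon, sep, unc))
    return points

# Two hypothetical grid points 0.008 degrees apart in latitude, matching the
# standard VORF grid resolution mentioned above.
sample = "50.000 -4.000 2.345 0.10\n50.008 -4.000 2.351 0.10\n"
grid = parse_vrf(sample)
print(len(grid), grid[0].separation)
```

In practice such parsed corrections would be interpolated at a surveyed position and added to a GNSS-derived height to reduce it to the chosen VORF surface, which is the workflow that packages like EIVA NaviPac and Qinsy automate.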
Vertical Offshore Reference Frames
Mathematics
844
15,686,668
https://en.wikipedia.org/wiki/Sokolov%E2%80%93Ternov%20effect
The Sokolov–Ternov effect is the effect of self-polarization of relativistic electrons or positrons moving at high energy in a magnetic field. The self-polarization occurs through the emission of spin-flip synchrotron radiation. The effect was predicted by Igor Ternov and the prediction rigorously justified by Arseny Sokolov using exact solutions to the Dirac equation. Theory An electron in a magnetic field can have its spin oriented in the same ("spin up") or in the opposite ("spin down") direction with respect to the direction of the magnetic field (which is assumed to be oriented "up"). The "spin down" state has a higher energy than the "spin up" state. The polarization arises due to the fact that the rate of transition through emission of synchrotron radiation to the "spin down" state is slightly greater than the rate of transition to the "spin up" state. As a result, an initially unpolarized beam of high-energy electrons circulating in a storage ring after sufficiently long time will have spins oriented in the direction opposite to the magnetic field. Saturation is not complete and is explicitly described by the formula P(t) = P_max (1 − exp(−t/τ)), where P_max = 8/(5√3) ≈ 92.4% is the limiting degree of polarization, and τ is the relaxation time: 1/τ = (5√3/8) α (mc²/ħ) (E/mc²)² (B/B_S)³. Here α = e²/(4πε₀ħc) is the fine-structure constant, m and e are the mass and charge of the electron, ε₀ is the vacuum permittivity, ħ is the reduced Planck constant, c is the speed of light, B_S = m²c²/(eħ) is the Schwinger field, B is the magnetic field, and E is the electron energy. The limiting degree of polarization is less than one due to the existence of spin–orbital energy exchange, which allows transitions to the "spin up" state (with probability 25.25 times less than to the "spin down" state). Typical relaxation time is on the order of minutes and hours. Thus producing a highly polarized beam requires a long enough time and the use of storage rings. The self-polarization effect for positrons is similar, with the only difference that positrons will tend to have spins oriented in the direction parallel to the direction of the magnetic field.
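The exponential approach to the 92.4% limit is easy to evaluate numerically. In this sketch the relaxation time τ is treated as a given input, since it depends on the machine's field and energy; the one-hour value below is only a typical order of magnitude, as noted above.

```python
from math import exp, sqrt

# Sokolov-Ternov limiting polarization: 8 / (5 * sqrt(3)) ~ 0.924.
P_MAX = 8 / (5 * sqrt(3))

def polarization(t, tau):
    """Beam polarization after time t, for relaxation time tau (same units)."""
    return P_MAX * (1.0 - exp(-t / tau))

tau = 3600.0  # assumed one-hour relaxation time, for illustration
for t in (0.0, tau, 3 * tau):
    print(f"t = {t / tau:.0f} tau -> P = {polarization(t, tau):.3f}")
```

After one relaxation time the beam reaches about 63% of the limit (roughly 58% polarization), and after three relaxation times about 95% of it, which is why storage times of hours are needed for a highly polarized beam.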
Experimental observation The Sokolov–Ternov effect was experimentally observed in the USSR, France, Germany, United States, Japan, and Switzerland in storage rings with electrons of energy 1–50 GeV. 1971 – Budker Institute of Nuclear Physics (first observation), with the use of 625 MeV storage ring VEPP-2. 1971 – Orsay (France), with the use of 536 MeV АСО storage ring. 1975 – Stanford (USA), with the use of 2.4 GeV SPEAR storage ring. 1980 – DESY, Hamburg (Germany), with the use of 15.2 GeV PETRA. Applications and generalization The effect of radiative polarization provides a unique capability for creating polarized beams of high-energy electrons and positrons that can be used for various experiments. The effect also has been related to the Unruh effect which, up to now, under experimentally achievable conditions is too small to be observed. The equilibrium polarization given by the Sokolov and Ternov has corrections when the orbit is not perfectly planar. The formula has been generalized by Derbenev and Kondratenko and others. Patent Sokolov A. A. and Ternov I. M. (1973): Award N 131 of 7 August 1973 with priority of 26 June 1963, Byull. Otkr. i Izobr., vol. 47. See also Unruh effect Hawking radiation Froissart–Stora equation Notes Special relativity Synchrotron radiation Particle physics Polarization (waves)
Sokolov–Ternov effect
Physics
762
56,508
https://en.wikipedia.org/wiki/Judi%20Bari
Judith Beatrice Bari (November 7, 1949 – March 2, 1997) was an American environmentalist, feminist, and labor leader, primarily active in Northern California after moving to the state in the mid-1970s. In the 1980s and 1990s, she was the principal organizer of Earth First! campaigns against logging in the ancient redwood forests of Mendocino County and related areas. She also organized Industrial Workers of the World Local 1 in an effort to bring together timber workers and environmentalists of Earth First! in common cause. Bari suffered severe injuries on May 24, 1990, in Oakland, California, when a pipe bomb went off under her seat in her car. She was driving with colleague Darryl Cherney, who had minor injuries. They were arrested by Oakland Police, aided by the FBI, who accused them of transporting a bomb for terrorist purposes. While those charges were dropped, in 1991 the pair filed suit against the Oakland Police Department and FBI for violations of their civil rights during the investigation of the bombing. A jury found in their favor when the case went to trial in 2002, and damages were awarded to Bari's estate and Cherney. Bari had died of cancer in 1997. The bombing has not been solved. In 1999 a bill was passed to establish the Headwaters Forest Reserve (H.R. 2107, Title V. Sec.501.) under administration by the Bureau of Land Management. This protected an area of mixed old-growth and previously harvested forest. It was a project that Bari had long supported. Early life and education Bari was born on November 7, 1949, and was raised in Silver Spring, Maryland, the daughter of Ruth Aaronson Bari, who became a recognized mathematician, and of diamond setter Arthur Bari. Her parents were Jewish and Italian in ancestry, respectively. The elder Baris were both active in left-wing politics; they advocated for civil rights and opposed the Vietnam War.
Judi Bari was the second of three daughters; her older sister is Gina Kolata, a science journalist for the New York Times, and her younger sister is Martha Bari, an art historian. Although Judi Bari attended the University of Maryland for five years, she dropped out without graduating. She said that her college career was most notable for "anti-Vietnam War rioting". Bari began working as a clerk for a chain grocery store and became a union organizer in its work force. At her next job as a mail handler, she organized a wildcat strike in the United States Postal Service bulk mail facility in Maryland.

Move to California, marriage and family

Bari moved to the Bay Area in Northern California, which was a center of political activism. In 1978 she met her future husband Michael Sweeney at a labor organizers' conference. They shared an interest in radical politics. Sweeney had graduated from Stanford University, and for a time in the early 1970s had been a member of the Maoist group Venceremos, which had mostly Chicano members. He had been married before. In 1979, Bari and Sweeney married and settled in Santa Rosa, California. They had two daughters together, Lisa (1981) and Jessica (1985). The couple divorced in 1988 and shared custody of their children.

Political and conservation activities

During the early to mid-1980s, Bari devoted herself to Pledge of Resistance, a group that opposed US policies in Central America. She was a self-proclaimed virtuoso on the bullhorn. She edited, wrote, and drew cartoons for political leaflets and publications. Around 1985, Bari moved north with her husband and two children to the vicinity of Redwood Valley in Mendocino County, California. It was an area of old timber towns, such as Eureka and Fortuna, and of a new wave of hippies and young counter-culture adults who had migrated there from urban areas. In 1986, Houston millionaire Charles Hurwitz acquired Pacific Lumber Company, with assets in Northern California, including redwood forests.
He doubled the company's rate of timber harvesting as a means of paying off the acquisition cost, which enraged environmentalists. The federal government also investigated the transaction because of Hurwitz's use of junk bonds. Activist protests against old-growth timber harvesting by Pacific Lumber became the focus of Earth First! in the following years.

On May 8, 1987, a sawmill accident occurred at the Louisiana Pacific mill in Cloverdale, California. Mill worker George Alexander nearly died of injuries suffered when a saw blade struck a spike in a log being milled, generating shrapnel. Adverse publicity resulted. Earth First!, which at that point still promoted "monkeywrenching" as part of its tactics, was blamed by the company and some workers for the spike because of incidents of equipment sabotage that had taken place in the vicinity where the log was harvested. Responsibility for the spike was never determined, though the prime suspect in the case was later confirmed to be not an Earth First! activist but a local "disgruntled" landowner. The bad publicity from the incident led Earth First! to disavow tree spiking (but not other forms of sabotage).

In 1988, Bari was instrumental in starting Local 1 of the Industrial Workers of the World (IWW), which allied with Earth First! in protests against cutting old-growth redwoods. Bari used her labor organizing background to run a workshop on the Industrial Workers of the World at an Earth First! rendezvous in California. Through the formation of EF!–IWW Local 1, she sought to bring together environmentalists and timber workers who were concerned about the harvest rate of the timber industry. She believed they had interests in common. That year, Bari organized the first forest blockade, to promote expanding the South Fork Eel River Wilderness, managed by the US Bureau of Land Management.
Related to her other interests, that year Bari also organized a counter-demonstration to protect a Planned Parenthood clinic in Ukiah. Many timber workers believed that the environmentalists were threatening their livelihoods. At this time, environmentalists were backing their legal suits against timber overcutting by staging blockades of job sites in the woods and tree sitting. Loggers saw such actions as harassment. Confrontations between loggers and demonstrators were often heated and sometimes violent. Reactions to Bari's involvement in the protests were severe: her car was rammed by a logging truck in 1989, and she received death threats. In August 1989, environmentalist Mem Hill suffered a broken nose in a protest confrontation with loggers in the woods. She filed a legal suit accusing a logger of assault and claiming that law enforcement had failed to protect her from attack.

Bari emphasized non-violent action and began to incorporate music into her demonstrations. She played the fiddle and sang original compositions by Darryl Cherney, who played guitar. Sometimes she sang her own songs. Their song titles and lyrics aroused controversy, as many listeners considered them offensive. Cherney's song about tree spiking, "Spike a Tree for Jesus", is one example; "Will This Fetus Be Aborted?", sung as a counter-protest to an anti-abortion rally, was another. The media portrayed her as an obstructionist saboteur. Some activists and area residents found Bari to be egocentric, humorless, and strident. Her tactics often rankled not only members of the timber industry and political establishment, but fellow activists.

Differences emerged between Bari and her husband over their political paths and diverging lives. He headed a recycling company in the county. They struggled to reconcile political action with the obligations of parenting. In 1988, with her divorce from Sweeney underway, Bari met Darryl Cherney.
They began a romantic relationship based partly on shared political beliefs, and appeared together at various protests (as noted above).

In 1990, the Sierra Club withdrew its support from legislation amending California Forest Practice Rules and moving forward with a process to establish a Headwaters Forest preserve on Pacific Lumber Company land. Environmentalists instead submitted a voter initiative, Proposition 130, dubbed "Forests Forever." The timber industry was strongly opposed to it. In response, environmentalists began organizing Redwood Summer, a campaign of nonviolent protests focused on slowing the harvest of redwood forests in Northern California until such forests gained extra protections under Proposition 130. They named their campaign in honor of the 1964 Freedom Summer of the Civil Rights Movement. Bari was instrumental in recruiting demonstrators from college campuses across the United States. But on November 6, 1990, Proposition 130 was defeated by California voters, with 52.13% against. Opponents emphasized the disruptive activities of Redwood Summer, which interfered with timber workers, and the support of Earth First! for Proposition 130; the group had been accused of sabotage and violence against workers in the past.

During organizing for Redwood Summer, Bari directed efforts in Mendocino County, and Cherney went on the road to recruit activists. Bari had local connections and a rapport with some lumber industry workers, developed during her organizing of an IWW local. During recruiting, Cherney was kept at a distance, so that his reputation for advocating sabotage and his propensity for hostile outbursts toward timber workers could not damage the campaign.

On April 22, 1990, a group called Earth Night Action Group sabotaged power poles in southern Santa Cruz County, causing power outages. Upon hearing of that incident, Bari reportedly said, "Desperate times call for desperate measures," and "So what if some ice cream melted?"
Observers interpreted her statements as approval of sabotage, and thought Earth First! might still be involved in such activities. A provocative flyer written by Cherney was publicized: it called for "Earth Night" actions, and featured images of a monkey wrench, an earth mover, and figures representing saboteurs in the night. Cherney said the flyer was facetious. The identities of members of the Earth Night Action Group have never been established; their relationship to Earth First! was a matter of speculation.

On May 9, 1990, a failed incendiary pipe bomb was discovered in the Louisiana Pacific sawmill in Cloverdale. A hand-lettered sign, saying "L-P screws millworkers", had been placed outside the mill. Responsibility for the bomb was never established.

On May 22, 1990, Bari met with local loggers to agree on ground rules for nonviolence during the Redwood Summer demonstrations. In the early afternoon of May 23, 1990, Bari started a road trip to Santa Cruz to organize for Redwood Summer and related musical events. She stopped for a press conference in Ukiah and for a meeting at the Seeds of Peace collective house in Berkeley. That night she stayed overnight in Oakland, at a house near MacArthur and Park boulevards. On May 24 she and Darryl Cherney (as passenger) drove away from the house, and a short time later a bomb exploded beneath her seat. She suffered severe injuries and Cherney suffered lesser ones.

Car bombing attempt on Bari's life

Summary

On May 24, 1990, in Oakland, California, Bari and Darryl Cherney were traveling in her car when it was blown up by a pipe bomb under her seat. Bari was driving and was severely injured by the blast. Cherney suffered minor injuries. Bari was arrested for transporting explosives while she was still in critical condition with a fractured pelvis and other major injuries. FBI bomb investigators reached the scene nearly simultaneously with first responders from the Oakland Police Department.
Bari raised the suspicion that the FBI knew about the bomb beforehand and might have been responsible for it. In Bari's words, it was as if the investigators were "waiting around the corner with their fingers in their ears." It was later revealed that there had been a tip to law enforcement, suspected to be from the person responsible for the bomb, that "some heavies" were carrying a bomb south for sabotage in the Santa Cruz area. The Federal Bureau of Investigation (FBI) took jurisdiction of the case away from the Bureau of Alcohol, Tobacco, Firearms and Explosives, alleging it was an eco-terrorism case. The Oakland Police Department of Alameda County was the local agency on the case.

Bari's wounds disabled her to the extent that she had to curtail her activities. As Bari convalesced, other activists carried out Redwood Summer, conducting a series of demonstrations by thousands of environmental activists. In late July 1990, the Alameda County District Attorney declined to press charges against Bari and Cherney, citing insufficient evidence. But Bari and Cherney filed a civil rights suit in 1991 for violations by the FBI and Oakland Police because of the arrests and search warrants carried out on their properties. The trial was not concluded until 2002. Bari died of breast cancer in 1997. The jury found that their civil rights had been violated, and the court made an award of $4.4 million to Cherney and Bari's estate.

Events of investigation

When the Oakland police and the FBI initially accused Bari and Cherney of knowingly carrying a bomb for use in an act of terrorism, the story made headlines nationwide. By 3:00 p.m. of the day of the bombing, Bari had been arrested for transportation of illegal explosives, while she was still being treated in Highland Hospital. Because Earth First! had earlier developed a reputation for sabotage, the media reported the police version of events.
For example, a KQED news report, entitled "Focus: Logjam", used the term "radical" to describe Earth First!, blamed the group for sabotaging loggers' equipment and conducting tree spiking, and tied Bari's bombing in with such actions. Based on his personal observations of bomb damage to the car, FBI Special Agent Frank Doyle filed a public affidavit stating that the bomb had been carried on the back seat floorboard of Bari's vehicle. The FBI was granted a search warrant on May 25 at 2:21 a.m., and agents used a helicopter to quickly reach Bari's home and search it. Agents also searched the premises of the "Seeds of Peace" house in Berkeley, where Bari and Cherney had visited the day before the explosion. Members of Seeds of Peace were interviewed repeatedly; they said that they consistently told police that Bari and Cherney were committed to nonviolence.

Within a week, supporters of Bari and Cherney were petitioning for an investigation of the FBI's investigative methods. Daniel Hamburg, a former Mendocino County Supervisor, and others complained that the investigation seemed focused on charging the two environmentalists. On July 6, a new search warrant for Bari's home was granted, as investigators sought exemplars of typewriting to compare to the typewritten "Lord's Avenger" letter (see below).

FBI analysis of the explosive device determined it was a pipe bomb with nails wrapped to its surface to create shrapnel. It was equipped with a timer-armed motion trigger, so that it would explode only when the car was driven. The bomb was confirmed to have been placed on the floorboard directly under the driver's seat, not on the floorboard behind the seat, as Agent Doyle had claimed. That evidence suggested that the bomb was an anti-personnel device intended to kill the driver of Bari's car. The FBI investigation remained focused on the theory that the explosion was an accidental detonation of a device knowingly transported by Bari.
They attempted to match roofing nails transported in Bari's car to finishing nails used with the bomb. After seven weeks of news stories reporting the police claims that all evidence pointed to Bari and Cherney, the Alameda County District Attorney announced that he would not file any formal charges against the pair, due to insufficient evidence against them.

Law enforcement agencies never fully investigated evidence that the bombing was an attempt on Bari's life. The crime has remained unsolved. During her convalescence, Bari issued a directive prohibiting those in her circle from cooperating with investigators. Even after she was no longer considered a suspect, she demanded that her circle remain silent. Bari offered cooperation with investigators in return for legal immunity, but her offer was refused.

Theories

The "Lord's Avenger"

Five days after the bombing, on May 29, while Bari was still in the hospital, Mike Geniella of the Santa Rosa Press Democrat received a letter claiming responsibility for both the bomb in Bari's car and a partially detonated one set a week before at the Cloverdale lumber mill. Written in an ornate, biblical style with misogynistic language, the letter was signed "The Lord's Avenger." It said the writer had been outraged by Bari's statements and behavior in December 1988, when she opposed an anti-abortion protest at a Planned Parenthood clinic in Ukiah, California. The letter described the construction of the two bombs in great detail. Based on the content of the letter, law enforcement investigated Bill Staley, a self-styled preacher, Louisiana Pacific mill worker, and former professional football player who had been prominent at the 1988 anti-abortion demonstration. Staley was eventually cleared of suspicion in the bombing. While the letter's author gave accurate details about the bombs' construction, investigators found the explanation of how the bomb was placed in Bari's car to be implausible.
Both supporters and detractors of Bari's theory of the bombing being an FBI/industry plot, which had been publicized, concluded that the bomb builder sent the letter in an effort to divert attention to Staley.

Darryl Cherney

Investigators looked closely at both Cherney and Bari's ex-husband Sweeney as potential suspects, knowing that women often faced danger and were killed by men close to them, especially after relationships ended. Some of Bari's friends had noted changes in her relationship with Cherney, and thought he might have set the bomb because Bari had replaced him as the leading organizer of Earth First! in northern California. In a related rumor, there was talk that killing Bari would provide a martyr to boost the profile of Redwood Summer. Suspicions that Cherney wrote the Lord's Avenger letter, along with the more general suspicions against him, fell apart under logical impossibilities.

FBI

The FBI's assertion that the bombing was an accidental detonation was shown to be completely implausible in the face of physical evidence. Bari and her supporters began to suspect that the assailant was associated with the FBI. Within the next year, Bari developed the theory that the bomber was an acquaintance whom she had suspected of being an FBI informant. From depositions taken in 1994 for Bari and Cherney's federal civil rights lawsuit, they learned that the May 24 bombing of Bari's car bore a close resemblance to "crime scenes" staged by the FBI in a "bomb school" held in redwood country, with the assistance of the Pacific Lumber company, earlier that year. Bari and her followers believed this supported their idea that the bombing could be attributed to the FBI. The FBI school was intended to train local and state police officers in how to investigate bomb scenes. The school taught that bomb explosions inside a vehicle often indicated the knowing, criminal transportation of homemade bombs, which went off accidentally.
It also taught that it was difficult to break into a locked car in order to plant a bomb. By 1991, evidence conclusively showed that the bomb had been placed directly beneath Bari's seat, as she had said from the day of the bombing. According to Bari, FBI Special Agent Frank Doyle, one of the agents on her case, had been the instructor at the bomb school. At least four of the law enforcement responders to the bombing had been students of his at the school.

In the weeks before the bombing, Bari had received numerous death threats related to her anti-logging activism, which she reported to local police. After the bombing, her attorney turned over such written threats to the FBI for investigation. As revealed in the 2002 trial evidence, neither the Oakland police nor the FBI ever investigated these threats.

Bari's ex-husband

In 1991, Stephen Talbot, KQED reporter and documentary producer, and investigative reporter David Helvarg made a documentary titled Who Bombed Judi Bari?. During the production, Talbot discovered circumstantial evidence and heard suspicions expressed by acquaintances of Bari that her ex-husband Mike Sweeney should be considered a suspect. Bari told Talbot in confidence that she also had doubts about her former husband, and that he had abused her during their marriage. She later publicly denied these statements. Talbot named Sweeney and others as possible suspects in the bombing, but in 1991 did not attribute any statements to Bari. After her death, he felt released from his journalist's protection of her as a source. He wrote about Sweeney as a suspect more directly in a 2002 article published on Salon.com. Bari strongly criticized Talbot's 1991 film in her article, "Who bought Steve Talbot?," published in the San Francisco Weekly and the Anderson Valley Advertiser. Talbot also had reported a 1989 letter signed by "Argus" that was sent to the Chief of the Ukiah Police Department, offering to be an informant against Bari regarding marijuana dealing.
Bari claimed in her article that the "Argus" letter had to have been written by Irv Sutley, a Peace and Freedom Party activist whom she had met in 1988. Attention had also been focused on two other threatening letters: a "no second warning" death threat letter sent to Bari about a month before the bombing, and what became known as the "Lord's Avenger" letter sent to the Santa Rosa Press Democrat immediately after the bombing.

Through the early 1990s, many activists believed that the bombing was the work of either the FBI or other opponents of Bari's Earth First! activities. Irv Sutley was suspected of being the hitman. But Bari's attempts to shape accounts of the bombing were alienating supporters and raised suspicions that she was hiding something. Bruce Anderson of the Advertiser was among those put off by her assertions. He knew that the 1988 divorce had been bitter. While he thought that some of her post-bombing behavior was odd, he continued to support her public position. As he later recalled:

I still feel guilty about not defending you [Talbot]. I wimped out completely. I knew she'd told you about Sweeney. Lots of people knew she'd told you. I was a complete dupe, a coward and a fool. I convinced myself that her work mobilizing people against the corporate timber companies outweighed unpleasant aspects of her character and the even more unpleasant aspects of her personal behavior.

In a reaction to efforts to tie Sutley to the bombing, some former Bari supporters publicly shifted their suspicion toward Sweeney. Ed Gehrman, a teacher and publisher of Flatland, a small (now defunct) magazine in Fort Bragg, California, had also participated in Redwood Summer protests. In 1995 he became concerned about the controversy over Sutley. Initially suspecting Sutley, Gehrman questioned him directly about it. Sutley denied being involved.
In addition, Sutley said that in 1989, Pam Davis, a friend of Bari, had on three separate occasions offered him $5,000 to kill Bari's ex-husband Sweeney. In response, Bari said in a radio broadcast that the apparent solicitation was a joke misunderstood by her friend, who had conveyed the offer to Sutley. Gehrman believed that someone was lying. He discussed the issues with journalist Alexander Cockburn of CounterPunch, a political magazine. Cockburn offered to pay for polygraph tests of the key players in the controversy. Sutley was the only one who accepted the offer; he took a polygraph test and passed. (Law enforcement does not rely on such polygraph tests.) After that, Gehrman considered Sutley credible. As he considered motives for the attack, he began to suspect Sweeney more strongly. Gehrman presented his case for exculpating Sutley in Flatland.

Anderson reconsidered his support of Bari's position, arousing anger among her supporters. Anderson was incensed by the possibility that Bari had tried to smear an innocent man in order to promote her narrative that the timber industry and/or the FBI were involved in the bombing. Anderson suggested that Bari and Sweeney each had sufficient guilty knowledge to destroy the other, a legal mutual-assured-destruction scenario.

Meanwhile, Gehrman tried to use the "Argus," "no second warning," and "Lord's Avenger" letters to determine the identity of Bari's assailant. He submitted facsimiles of the three letters and their envelopes, along with exemplars of text written by various suspects, to Don Foster, an English professor at Vassar College who had established expertise in attributional analysis of documents. (He has since been discounted as an expert.) Foster concluded that the three letters were from the same writer and most closely matched exemplars by Sweeney. Anderson wrote regular columns in the Advertiser accusing the supporters of the late Bari of lying by their continued support of the industry/FBI theory.
Gehrman said he was approached in 2005 by Jan Maxwell, a longtime friend of Pam Davis. Maxwell said that Davis had told her that Bari had suggested a murder-for-hire solicitation against Sweeney. This seemed to place the solicitations to Sutley within a larger pattern. Gehrman presented a summary of his knowledge about the case, which he reprinted in the Advertiser in 2008. Years before, in 2002, at the conclusion of the Bari/Cherney civil rights trial, Stephen Talbot had already publicly reported on Salon.com that Bari had confided in him about her suspicions of Sweeney and the car bombing, as well as her knowledge that he had firebombed the Santa Rosa airport in 1980. She also said that Sweeney had abused her during their marriage.

Aftermath

While the bombing investigation was underway, Earth First! organizers proceeded with training and demonstrations in several timber towns: Fort Bragg (July), Eureka, and Fortuna. Before these got underway, the Mendocino County Board of Supervisors was considering legislation to regulate the size of protest signs and standards, in order to curb violence by demonstrators. Meanwhile, Redwood Summer organizers debated whether to cancel demonstrations in the woods as being too dangerous. On May 29, representatives of Redwood Summer reached an agreement with part of the industry: small local logging companies signed agreements supporting nonviolent and non-destructive protests of timber harvesting.

Activists eventually continued the events of Redwood Summer, demonstrating in some of the timber towns. The demonstrations by environmentalists were generally countered by demonstrations of numerous loggers and their families, who believed that their jobs and lives were jeopardized by proposed restrictions on logging. Redwood Summer ended with Earth First! claiming success because they had trained so many volunteers in nonviolent resistance, but the numbers of participants in protests were smaller than organizers had hoped for.
In addition, by September, the New York Times was reporting that antagonism between environmentalists and timber workers seemed to have increased. State voters defeated Proposition 130, which would have restricted logging, on November 6, 1990. The campaign against it had emphasized its support by Earth First!.

The Northern California "Timber Wars" heated up again in 1998. Earth First! members were dissatisfied with the final agreement that established the Headwaters Forest Reserve. By a bill passed in 1997, the government was authorized to acquire and protect 7,472 acres, rather than the much larger area proposed for more than a decade. The division between the timber community and Earth First! became sharper than ever. "Anarchists" and other advocates of violence, such as Rodney Coronado, a convicted arsonist and Earth Liberation Front member, gained prominence within Earth First!. Such members threatened the industrial equipment and facilities of timber companies, as well as individuals at their private residences. After Bari died in 1997, she attained the status of a major leader in Earth First! lore, but timber protests moved away from the community-based collaboration that she had tried to develop and present.

Later events related to bombing investigation

The bombing of Bari and Cherney has never been solved. Following the 2002 trial and award of damages, Cherney and supporters sought access to the remains of the partially intact Cloverdale mill bomb held by the FBI. Investigators believed that similarities between it and the remains of the pipe bomb in the car showed they were constructed by the same maker. They hoped to find DNA evidence that could be analyzed by current technology and reveal a suspect. In 2012, a federal judge ordered the FBI not to destroy the remains of that pipe bomb, as it had planned. Ben Rosenfeld, attorney for Cherney, requested DNA analysis by an outside lab. The FBI said it had never performed such testing.
The judge ordered such testing. The case remains under the jurisdiction of the City of Oakland, where the bombing occurred, and the Alameda County District Attorney. The Mendocino County Sheriff's Office has deferred on jurisdictional issues, claiming that there is insufficient evidence that the bomb was planted in Mendocino County.

In 2001, by joint agreement of the Bari advocates and the FBI, DNA evidence was obtained from documents including the "Lord's Avenger" letter, which is believed to be strongly tied to Bari's assailant and which also yielded a fingerprint. The DNA does not match samples obtained from Sutley. Mike Sweeney reportedly had not submitted a DNA sample; it is not known whether law enforcement requested him to submit one.

Writing and public service career

Bari became a political writer as part of her interests in feminism, class struggle, and ecology. In May 1992, in an article published in Ms. magazine, she claimed to have feminized Earth First!. The radical environmentalist group was founded by men; in its early days, they pursued sabotage that damaged equipment and threatened the lives of timber workers, a series of actions known as "monkeywrenching". Bari emphasized non-violent actions and public education in an effort to build collaboration in the region.

Stepping back from Earth First! leadership because she was dealing with inoperable cancer, by the end of 1996 Bari was working as a paralegal and hosting a weekly public radio show. Before her death, she organized the Redwood Summer Justice Project, a non-profit organization to coordinate political and financial support for the suit she and Cherney were conducting.

In 1994 Bari was part of a congressional advisory committee, chartered by Congressman Dan Hamburg (D-CA), trying to develop a proposal for a Headwaters Forest Reserve of 44,000 acres. Efforts had been underway to protect this area for more than a decade.
Their proposal included a compensation clause for those lumber workers who would have been laid off following establishment of this extensive reserve. The bill based on the "large reserve" proposal died in Congress after Hamburg lost his 1994 re-election bid; during a midterm upheaval, he was defeated by the Republican former incumbent of his seat. Instead, a 7,472-acre forest reserve was authorized by a bill passed on November 14, 1997, shortly after Bari's death.

Death and posthumous civil rights trial

On March 2, 1997, Bari died of breast cancer at her home near Willits. A memorial service in her honor was attended by an estimated 1,000 people.

Bari and Cherney had filed a federal civil rights suit in 1991 claiming that the FBI and police officers falsely arrested the pair in relation to the bombing of her car in May 1990. They had been accused of carrying the bomb to use for other purposes. Bari and Cherney said that law enforcement was trying to frame them as terrorists so as to discredit their political organizing to protect the redwood forests. In 1997, Bari and Cherney sued the law enforcement officers named in the civil rights suit for conspiracy to violate the activists' First and Fourth Amendment rights. On October 15 that year, the agents lost their bid for immunity from prosecution. Also on October 15, federal judge Claudia Wilken dismissed former FBI supervisor SAIC Richard Wallace Held from the case; the court said that as SAIC he had no duty to oversee the daily duties of his subordinate agents. The plaintiffs' contention that the FBI was responsible for the bomb was also dismissed from the case. Its scope was restricted to malicious investigative malpractice on the part of the FBI, and the allowed damage claim was reduced from $20 million to $4.4 million.

The suit finally went to trial in 2002. After deliberating for two weeks, a jury found in favor of Bari's and Cherney's federal civil lawsuit.
The jury concluded that the pair's civil rights had been violated by several named individuals from the FBI and Oakland Police Department. As part of the verdict, the judge ordered Frank Doyle and two other FBI agents, and three Oakland police officers, to pay a total of $4.4 million to Cherney and to Bari's estate. The award was compensation for the defendants' violation of the plaintiffs' First Amendment rights to freedom of speech and freedom of assembly, and for the defendants' various unlawful acts, including unlawful search and seizure in violation of the plaintiffs' Fourth Amendment rights.

At trial the FBI and the Oakland Police had pointed fingers at each other: Oakland investigators testified that they relied almost exclusively on the F.B.I.'s counter-terrorism unit in San Francisco for advice on how to handle the case. But the F.B.I. agents denied misleading the investigators into believing that Ms. Bari and Mr. Cherney were violence-prone radicals who were probably guilty of transporting the bomb.

While neither agency would admit wrongdoing, the jury held both liable, finding that "[B]oth agencies admitted they had amassed intelligence on the couple before the bombing." This evidence supported the jury's finding that both the FBI and the Oakland police persecuted Bari and Cherney as potential terrorists rather than conducting a full investigation to try to find the perpetrators. They were trying to discredit and sabotage Earth First! and the planned Redwood Summer of 1990, thereby violating the plaintiffs' First Amendment rights and justifying the large award.

After the trial's gag order was lifted, a juror revealed to the press that she believed the law enforcement agents had lied: "Investigators were lying so much it was insulting. ... I'm surprised that they seriously expected anyone would believe them ... They were evasive. They were arrogant. They were defensive," said juror Mary Nunn.
Legacy On May 20, 2003, the Oakland City Council unanimously voted a resolution establishing Judi Bari Day, stating: Whereas, Judi Bari was a dedicated activist, who worked for many social and environmental causes, the most prominent being the protection and stewardship of California's ancient redwood forests. ... Now, therefore, be it resolved that the City of Oakland shall designate May 24 as Judi Bari Day ... Bibliography Books by Bari Books and articles about Bari, self-published on his website set up for the book The Encyclopedia of American Law Enforcement: Facts on File Crime Library. Michael Newton. Infobase Publishing, 2007. The Last Stand: The War Between Wall Street and Main Street Over California's Ancient Redwoods. David Harris. University of California Press, 1997. The Symbolic Earth: Discourse and Our Creation of the Environment. Environmental Studies. Editors: James Gerard Cantrill, Christine Lena. University Press of Kentucky, 1996. Stories of Globalization: Transnational Corporations, Resistance, and the State. Alessandro Bonanno, Douglas H. Constance. Penn State Press, 2010. The War Against the Greens: The "Wise-Use" Movement, the New Right, and the Browning of America. David Helvarg. Big Earth Publishing, 2004. Renewed controversy A critical biography of Bari titled The Secret Wars of Judi Bari (2005), by investigative journalist Kate Coleman, drew fierce criticism from many of her supporters. But a review in Environmental History said that the author "succeeds in offering a balanced view of her life." Cherney; managers of Bari's estate (for her portion of the FBI settlement award); Bari's ex-husband Michael Sweeney, a suspect in the bombing; and their followers claimed the book had hundreds of factual errors and expressed a bias against Bari and Earth First! These critics noted that the publisher, Encounter Books, was founded by arch-conservative Peter Collier. 
They said it was funded primarily by arch-conservative foundations not sympathetic to Bari's causes. Author Coleman said that such allegations, and the aspersions cast on the publisher, were being used as a smokescreen. She said the book's detractors were dedicated to preserving an incomplete and distorted memory of Ms. Bari. Cherney and some other critics said that Coleman had failed to include more information from their points of view. The author said that they had not responded to her attempts to contact them. In her book, Coleman outlined a case that Sweeney, Bari's ex-husband, had planted the bomb in order to kill her. This thesis had been suggested by others, namely Stephen Talbot, in his 1991 documentary, and more specifically in his 2002 article on Salon.com, in which he revealed statements that Bari had made to him in 1991. He felt her death lifted his responsibility to protect her confidences. Mark Hertsgaard wrote a critical review in the Los Angeles Times entitled "Too many rumors, too few facts to examine eco-activism case". He said, "the reporting is thin and sloppy, and the humdrum prose is marred by dubious speculation." Ed Guthmann, in a review in the San Francisco Chronicle, criticized Hertsgaard's review for containing its own errors. Films References Further reading Steve Ongerth, Redwood Uprising: The Story of Judi Bari and Earth First!-IWW Local #1, also titled Redwood Uprising: From One Big Union to Earth First! and the Bombing of Judi Bari (2010) External links "The Attempted Murder of Judi Bari", 1994 interview, Albion Monitor Friends of Judi Bari, a defense group Profile, SourceWatch Obituary for Judi Bari, IWW "Don't Mourn, Organise! The Judi Bari Story", BBC, December 2004, 30 min. 
audio (MP3) Mike Sweeney, editor; Website criticizing Kate Coleman's The Secret Wars of Judi Bari Bari's writings Writings by and about Judi Bari IWW Environmental Unionism Caucus, featuring more writings by Judi Bari, focusing specifically on class-struggle ecology 1949 births 1997 deaths 20th-century American Jews 20th-century American non-fiction writers 20th-century American women writers Activists from California American anti-capitalists American anti-fascists American communists American feminists American people of Italian descent American women environmentalists American women non-fiction writers American women's rights activists Anti-corporate activists Anti-fascists Deaths from breast cancer in California Ecofeminists Explosion survivors Industrial Workers of the World members Jewish American activists Jewish American non-fiction writers Jewish feminists Jewish women writers People from Willits, California University of Maryland, College Park alumni Writers from California Jewish women activists
Judi Bari
Chemistry
7,942
47,710,914
https://en.wikipedia.org/wiki/Boletus%20amyloideus
Boletus amyloideus is a rare species of bolete fungus in the family Boletaceae. It was described as new to science in 1975 by mycologist Harry D. Thiers, from collections made in California. Its fruit bodies have a convex to somewhat flattened reddish-brown cap measuring in diameter. The pore surface on the cap underside is bright yellow, with small angular pores and tubes measuring 4–8 mm long. The spore print is olive-brown; basidiospores are smooth, amyloid, spindle shaped to ellipsoid, and have dimensions of 13–16 by 4.5–5.5 μm. The bolete is known only from coastal California, where it grows on the ground in mixed forests. Its edibility is unknown. See also List of Boletus species List of North American boletes References External links amyloideus Fungi described in 1975 Fungi of California Fungi without expected TNC conservation status Fungus species
Boletus amyloideus
Biology
201
58,508,500
https://en.wikipedia.org/wiki/List%20of%20combined%20sex-hormonal%20preparations
This is a list of known combined sex-hormonal formulations. Brand names and developmental code names are in parentheses. Androgens Injection Marketed Nandrolone decanoate/nandrolone phenylpropionate (Dinandrol) Testosterone propionate/testosterone isocaproate/testosterone caproate (Androteston PP) Testosterone propionate/testosterone enanthate/testosterone undecylenate (Durasteron) Testosterone propionate/testosterone enanthate (Testoviron Depot) Testosterone propionate/testosterone ketolaurate (Testosid Depot) Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate (Sustanon 100) Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone caproate (Omnadren 250) Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone decanoate (Sustanon 250) Testosterone propionate/testosterone valerate/testosterone undecylenate (Triolandren) Testosterone propionate/testosterone cypionate/prasterone (Sten) Veterinary Marketed Boldenone acetate/boldenone cypionate/boldenone propionate/boldenone undecylenate (Equilon 100) Testosterone acetate/testosterone undecanoate/testosterone valerate (Deposterona) Estrogens Oral Marketed Conjugated estriol (Emmenin, Progynon) Conjugated estrogens (Premarin) Esterified estrogens (Estratab, Menest) Estradiol/estrone/estriol (Hormonin) Ethinylestradiol/dienestrol (Foxinette) Injection Marketed Estradiol benzoate/estradiol phenylpropionate (Dimenformon Prolongatum) Never marketed Estradiol/estradiol enanthate Progestogens Injection Marketed Progesterone/hydroxyprogesterone heptanoate/α-tocopherol palmitate (Tocogestan) Androgens and estrogens Oral Marketed Chlorotrianisene/methyltestosterone (Tace mit Androgen) Conjugated estrogens/methyltestosterone (Premarin with Methyltestosterone) Dienestrol/methyltestosterone (Estan, Lucidon) Dienestrol diacetate/methyltestosterone (Farmatest) Diethylstilbestrol/methyltestosterone (Tylosterone) Esterified 
estrogens/methyltestosterone (Covaryx, Eemt, Essian, Estratest, Menogen, Syntest) Ethinylestradiol/methyltestosterone (Climatone, Dumone, Duotrone, Estandron, Femandren, Femovirin, Gynetone, Lynandron, Mepilin, Mixogen, Primodian) Methylestradiol/methyltestosterone (Klimanosid) Methylestradiol/methyltestosterone/reserpine (Klimanosid R) Injection Marketed Estradiol benzoate/testosterone propionate (Bothermon) Estradiol benzoate/estradiol dienanthate/testosterone enanthate benzilic acid hydrazone (Climacteron, Lactimex, Lactostat) Estradiol benzoate/estradiol phenylpropionate/testosterone propionate/testosterone phenylpropionate/testosterone isocaproate (Estandron Prolongatum, Lynandron Prolongatum, Mixogen [Injection]) Estradiol benzoate/testosterone isobutyrate (Femandren M, Folivirin) Estradiol butyrylacetate/testosterone ketolaurate/reserpine (Klimanosid R-Depot) Estradiol cypionate/testosterone cypionate (Depo-Testadiol, Femovirin) Estradiol cypionate/testosterone enanthate (Supligol) Estradiol valerate/prasterone enanthate (Binodian Depot, Cidodian Depot, Gynodian Depot, Klimax, Supligol NF) Estradiol valerate/testosterone enanthate (Deladumone, Despamen, Ditate, Ditate-DS, Gravignost, Primodian Depot, Valertest) Veterinary Marketed Diethylstilbestrol/methyltestosterone (Maxymin, Tylosterone) Diethylstilbestrol/testosterone (Rapigain) Estradiol benzoate/estradiol enanthate/testosterone enanthate (Uni-Bol) Estradiol benzoate/testosterone propionate (Component E-H, Implix BF, Progro H, Synovex H) Ethinylestradiol/methyltestosterone (Taril) Estrogens and progestogens Oral Marketed In birth control pills: Drospirenone/estetrol (Nextstellis, Drovelis, Lydisilka) Estradiol/nomegestrol acetate (Naemis, Zoely) Estradiol valerate/cyproterone acetate (Femilar) Estradiol valerate/dienogest (Natazia, Qlaira) Ethinylestradiol/chlormadinone acetate (Belara) Ethinylestradiol/cyproterone acetate (Diane, Diane-35) Ethinylestradiol/desogestrel (Alenvona, Apri, Azurette, 
Bekyree, Bimizza, Caziant, Cesia, Cimizt, Cyclessa, Cyred, Denise, Desogen, Desirett, Emoquette, Enskyce, Gedarel, Gracial, Isibloom, Juleber, Kalliga, Kariva, Kimidess, Laurina, Linessa, Marvelon, Mercilon, Mircette, Mirvala, Novynette, Ortho-Cept, Pimtrea, Reclipsen, Regulon, Simliya, Solia, Velivet, Viorele, Volnea) Ethinylestradiol/dienogest (Valette) Ethinylestradiol/dimethisterone (Oracon) Ethinylestradiol/drospirenone (Yasmin, Yasminelle, Yaz) Ethinylestradiol/drospirenone/levomefolic acid (Beyaz, Safyral) Ethinylestradiol/etynodiol diacetate (Demulen) Ethinylestradiol/gestodene (Femodene, Femodette, Gynera, Harmonet, Meliane, Minesse, Minulet) Ethinylestradiol/levonorgestrel (Amethyst, Aviane, Balcoltra, Falmina, Levlen, Lillow, Orsythia, Vienva) Ethinylestradiol/levonorgestrel/bisglycinate Ethinylestradiol/levonorgestrel/ferrous fumarate Ethinylestradiol/lynestrenol (Lyndiol) Ethinylestradiol/medroxyprogesterone acetate (Gianda, Provest) Ethinylestradiol/megestrol acetate (Nuvacon, Volidan) Ethinylestradiol/norethisterone (Brevicon, Norinyl 1+35, Ortho-Novum 7/7/7, Taytulla, Tri-Norinyl, Zenchent) Ethinylestradiol/norethisterone/ferrous fumarate (Blisovi 24 FE, Estrostep Fe, Femcon FE, Kaitlib FE, Lo Minastrin Fe, Loestrin 24 Fe, Microgestin 24 FE) Ethinylestradiol/norethisterone acetate (FemHRT) Ethinylestradiol/norethisterone acetate/ferrous fumarate Ethinylestradiol/norgestimate (Ortho Tri-Cyclen, Sprintec) Ethinylestradiol/norgestrel (Ovral) Ethinylestradiol/norgestrienone (Miniplanor, Planor) Ethinylestradiol/quingestanol acetate (Riglovis) Ethinylestradiol sulfonate/norethisterone acetate (Deposiston) Mestranol/anagestone acetate (Neo-Novum) Mestranol/chlormadinone acetate (C-Quens, Lutedion) Mestranol/etynodiol diacetate (Ovulen) Mestranol/hydroxyprogesterone acetate (Hormolidin) Mestranol/lynestrenol (Lyndiol) Mestranol/norethisterone (Norethin, Noriday, Norinyl, Norquen, Ortho-Novum) Mestranol/noretynodrel (Enavid, Enovid) Quinestrol/etynodiol 
diacetate (Soluna) Quinestrol/levonorgestrel (Yuèkětíng, Àiyuè) Quinestrol/norgestrel (Compound Norgestrel) Quinestrol/quingestanol acetate (Unovis) For menopausal hormone therapy and/or gynecological disorders: Conjugated estrogens/medrogestone (Presomen) Conjugated estrogens/medroxyprogesterone acetate (Prempro, Premphase) Conjugated estrogens/norgestrel (Prempak-C) Estradiol/drospirenone (Angeliq) Estradiol/dydrogesterone (Femoston) Estradiol/gestodene (Avaden, Avadene) Estradiol/levonorgestrel Estradiol/medroxyprogesterone acetate (Indivina, Tridestra) Estradiol/norethisterone acetate (Activella, Activelle, Cliane, Estalis, Eviana, Evorel, Kliogest, Novofem) Estradiol/norgestimate (Prefest) Estradiol/progesterone (Bijuva) Estradiol/trimegestone (Lovelle, Totelle) Estradiol valerate/cyproterone acetate (Climen, Climen 28) Estradiol valerate/estriol/levonorgestrel (CycloÖstrogynal) Estradiol valerate/levonorgestrel (Cyclo-Progynova N) Estradiol valerate/medroxyprogesterone acetate (Dilena, Divina, Divitren, Farludiol, Indivina, Sisare) Estradiol valerate/norethisterone (Climagest, Climesse) Estradiol valerate/norethisterone acetate (Cliovelle, Norestin, Trisequens) Estradiol valerate/norgestrel (Cyclacur, Cyclocur, Cyclo-Progynova, Postoval, Progyluton) Ethinylestradiol/ethisterone (Amenorone, Di-Pro, Duosterone, Menstrogen, Oracecron, Orasecron) Mestranol/noretynodrel (Enovid) Never or not yet marketed In birth control pills: Estradiol/norethisterone (Netagen, Netagen 403) Transdermal Marketed In contraceptive patches: Ethinylestradiol/norelgestromin (Evra, Ortho Evra, Xulane) For menopausal hormone therapy: Estradiol/levonorgestrel (Climara Pro) Estradiol/norethisterone acetate (CombiPatch) Vaginal Marketed In contraceptive vaginal rings: Ethinylestradiol/etonogestrel (Circlet, NuvaRing) Segesterone acetate/ethinylestradiol (Annovera) Injection Marketed In combined injectable contraceptives: Estradiol/megestrol acetate (Mego-E, Chinese Injectable No. 
2) Estradiol benzoate/estradiol valerate/hydroxyprogesterone caproate (Sin-Ol) Estradiol benzoate butyrate/algestone acetophenide (Redimen, Soluna, Unijab) Estradiol cypionate/hydroxyprogesterone caproate (Sinbios) Estradiol cypionate/medroxyprogesterone acetate (Cyclo-Provera, Cyclofem, Lunelle) Estradiol enanthate/algestone acetophenide (Anafertin, Deladroxate, Perlutan, Topasel, Yectames) Estradiol valerate/hydroxyprogesterone caproate (Chinese Injectable No. 1) Estradiol valerate/norethisterone enanthate (Mesigyna, Mesygest) For menopausal hormone therapy, gynecological disorders, habitual abortion, and other indications: Estradiol/progesterone (Juvenum) Estradiol benzoate/hydroxyprogesterone caproate (Primosiston) Estradiol benzoate/progesterone (Duogynon, Sistocyclin) Estradiol dipropionate/hydroxyprogesterone caproate (EP Hormone Depot) Estradiol hemisuccinate/progesterone (Hosterona) Estradiol pivalate/progesterone (Estrotate with Progesterone) Estradiol valerate/hydroxyprogesterone caproate (Gravibinan, Gravibinon) Estrone/progesterone (Synergon) Never marketed In combined injectable contraceptives: Estradiol/progesterone (in microspheres and macrocrystalline aqueous suspension) Estradiol undecylate/norethisterone enanthate Estradiol valerate/megestrol acetate Estradiol valerate/methenmadinone caproate (Lutofollin) Polyestradiol phosphate/medroxyprogesterone acetate Additional preparations listed at Combined injectable birth control § Research For other indications: Estradiol valerate/gestonorone caproate (SH-834, SH-8.0834) Veterinary Marketed Estradiol benzoate/progesterone (Component E-S, Implix BM, Synovex C, Synovex S) Estradiol valerate/norgestomet (Syncro-Mate-B) Androgens and progestogens Oral Marketed Methyltestosterone/ethisterone (Androgeston) Injection Marketed Testosterone propionate/progesterone (Testoluton) Veterinary Marketed Danazol/megestrol acetate (Dogalact) Nandrolone decanoate/methandriol dipropionate (Filybol, Tribolin) Nandrolone 
phenylpropionate/methandriol dipropionate (RWR) Estrogens, progestogens, and androgens Oral Marketed Methylestradiol/normethandrone (Ginecosid, Ginecoside, Mediol, Renodiol) Not yet marketed Ethinylestradiol/drospirenone/prasterone Sublingual Marketed Estradiol/progesterone/methyltestosterone (Trihormonal) Ethinylestradiol/ethisterone/methyltestosterone (Trinestryl, Trimone Sublets) Injection Marketed Estradiol benzoate/progesterone/methandriol dipropionate (Progestandron) Estradiol benzoate/progesterone/testosterone propionate (Lukestra, Steratrin, Trihormonal, Trinestryl) Estradiol benzoate/estradiol valerate/norethisterone acetate/testosterone enanthate (Ablacton) Estradiol dibutyrate/hydroxyprogesterone heptanoate/testosterone caproate (Triormon Depositum) Estradiol diundecylate/hydroxyprogesterone heptanoate/testosterone cyclohexylpropionate (Trioestrine Retard) Estradiol hexahydrobenzoate/hydroxyprogesterone caproate/testosterone hexahydrobenzoate (Trinestril AP) Estrone/progesterone/testosterone (Tristeron, Tristerone) Never marketed Estrapronicate/hydroxyprogesterone heptanoate/nandrolone undecanoate (Trophobolene, Trophoboline) Veterinary Marketed Estradiol/trenbolone acetate (Component, Compudose-G, Progro TE-H, Revalor, Synovex) Estradiol benzoate/trenbolone acetate (Synovex Choice, Synovex One, Synovex Plus, Synovex with Trenbolone Acetate) Miscellaneous Oral Marketed Conjugated estrogens/bazedoxifene (Duavee) Never marketed Esterified estrogens/raloxifene Estradiol/raloxifene Transdermal Marketed Estradiol benzoate/prednisolone/salicylic acid (Alpicort E, Alpicort F, Alpicort Plus) Vaginal Marketed Estradiol benzoate/gentamicin/hydrocortisone/nystatin (Cridermol Fem, Ginabiot, Ginecovan) Estradiol benzoate/monalazone (Malun-25) Mixed Marketed Goserelin/bicalutamide (ZolaCos CP) Leuprorelin/norethisterone acetate (Lupaneta Pack) See also List of sex-hormonal medications available in the United States List of sex-hormonal aqueous suspensions List of 
steroid esters List of marketed estradiol benzoate formulations References Anabolic–androgenic steroids Sex-hormonal preparations Drug-related lists Estrogens Progestogens
List of combined sex-hormonal preparations
Chemistry
4,114
18,529,696
https://en.wikipedia.org/wiki/Alternant%20code
In coding theory, alternant codes form a class of parameterised error-correcting codes which generalise the BCH codes. Definition An alternant code over GF(q) of length n is defined by a parity check matrix H of alternant form H_{i,j} = α_j^i · y_i, where the α_j are distinct elements of the extension field GF(q^m), the y_i are further non-zero parameters, again in GF(q^m), and the indices range as i from 0 to δ − 1 and j from 1 to n. Properties The parameters of this alternant code are length n, dimension ≥ n − mδ and minimum distance ≥ δ + 1. There exist long alternant codes which meet the Gilbert–Varshamov bound. The class of alternant codes includes BCH codes Goppa codes Srivastava codes References Error detection and correction Finite fields Coding theory
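The alternant form H_{i,j} = α_j^i · y_i can be made concrete with a minimal sketch. It assumes the simplest case m = 1, so the extension field is just GF(p) and field arithmetic is integer arithmetic mod p; the particular prime, locators α_j and column multipliers y_j are illustrative choices, not taken from the article.

```python
# Sketch: building an alternant-form parity-check matrix H_{i,j} = alpha_j^i * y_j
# over a prime field GF(p), i.e. the case m = 1 where GF(q^m) arithmetic is
# simply arithmetic mod p. Field, locators and multipliers are illustrative.

def alternant_parity_check(alphas, ys, delta, p):
    """Rows i = 0..delta-1, one column per (alpha_j, y_j) pair, entries mod p."""
    assert len(set(alphas)) == len(alphas), "locators alpha_j must be distinct"
    assert all(y % p != 0 for y in ys), "multipliers y_j must be non-zero"
    return [[(pow(a, i, p) * y) % p for a, y in zip(alphas, ys)]
            for i in range(delta)]

# Example over GF(7): n = 4 columns, delta = 2 rows.
H = alternant_parity_check(alphas=[1, 2, 3, 4], ys=[1, 1, 2, 3], delta=2, p=7)
# Row 0 is just the multipliers y_j (since alpha_j^0 = 1);
# row 1 scales each y_j by its locator alpha_j, all reduced mod 7.
```

For this toy example n = 4, m = 1 and δ = 2, so by the properties above the resulting code has dimension ≥ n − mδ = 2 and minimum distance ≥ δ + 1 = 3.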
Alternant code
Mathematics,Engineering
186
75,458,259
https://en.wikipedia.org/wiki/IMCY-0098
IMCY-0098 is a peptide immunotherapy based on proinsulin and developed by Imcyse to reduce the progression of type 1 diabetes. References Peptide therapeutics Immunotherapy Experimental diabetes drugs
IMCY-0098
Chemistry
48
1,520,573
https://en.wikipedia.org/wiki/Jon%20Bosak
Jon Bosak led the creation of the XML specification at the W3C. From 1996–2008, he worked for Sun Microsystems. XML Tim Bray, who was one of the editors of the XML specification, has this to say in his note on Bosak in his annotated version of the specification: In a 1999 posting to the xml-dev mailing list, Bray writes: When he stepped down from the W3C XML Coordination Group in 2000, Jon Bosak was given the unusual recognition of having a formal identifier reserved for him: In appreciation for his vision and leadership and dedication the W3C XML Plenary on this 10th day of February, 2000 reserves for Jon Bosak in perpetuity the XML name "xml:Father". The Universal Business Language In 2001, Bosak organized the OASIS Universal Business Language Technical Committee to create standard formats for basic electronic business documents. He led the UBL TC through the completion of UBL 2.1 in November 2013 and continues to serve on the Committee as Secretary. UBL was approved for use in European public sector procurement by decision of the European Commission dated 31 October 2014 and published as an International Standard, ISO/IEC 19845:2015, on 15 December 2015. Metrological Studies Bosak is author of the book (, 2010) and the article (latest version 19 April 2014) Bob Bosak Jon Bosak's father, Robert Bosak (1925–1987), began the family's long involvement in the computer industry in 1947 when he went to work on the first computer on the west coast of the US. He joined RAND in 1948 to work on analysis and programming of scientific problems. In 1951, he joined Lockheed Aircraft Corporation, where he organized and directed the Mathematical Analysis Group. For a short time after his divorce in the 1950s, he shared an apartment with Bob Bemer, "the Father of ASCII." 
Bob Bosak returned to RAND in 1956 to become head of programming for the Semi Automatic Ground Environment (SAGE), the automated NORAD system that controlled US air defenses from 1959 to 1983 and strongly influenced the design of modern air traffic control systems. He was one of the designers of JOVIAL and principal author of the seminal paper An Information Algebra. References External links Background information at ibiblio Interview with JavaWorld Schema document for namespace 1998 Schema document for namespace 2001 Year of birth missing (living people) Living people Sun Microsystems people XML Guild
Jon Bosak
Technology
511
2,943,640
https://en.wikipedia.org/wiki/Antibiotic%20sensitivity%20testing
Antibiotic sensitivity testing or antibiotic susceptibility testing is the measurement of the susceptibility of bacteria to antibiotics. It is used because bacteria may have resistance to some antibiotics. Sensitivity testing results can allow a clinician to change the choice of antibiotics from empiric therapy, which is when an antibiotic is selected based on clinical suspicion about the site of an infection and common causative bacteria, to directed therapy, in which the choice of antibiotic is based on knowledge of the organism and its sensitivities. Sensitivity testing usually occurs in a medical laboratory, and uses culture methods that expose bacteria to antibiotics, or genetic methods that test to see if bacteria have genes that confer resistance. Culture methods often involve measuring the diameter of areas without bacterial growth, called zones of inhibition, around paper discs containing antibiotics on agar culture dishes that have been evenly inoculated with bacteria. The minimum inhibitory concentration, which is the lowest concentration of the antibiotic that stops the growth of bacteria, can be estimated from the size of the zone of inhibition. Antibiotic susceptibility testing has been needed since the discovery of the beta-lactam antibiotic penicillin. Initial methods were phenotypic, and involved culture or dilution. The Etest, an antibiotic impregnated strip, has been available since the 1980s, and genetic methods such as polymerase chain reaction (PCR) testing have been available since the early 2000s. Research is ongoing into improving current methods by making them faster or more accurate, as well as developing new methods for testing, such as microfluidics. Uses In clinical medicine, antibiotics are most frequently prescribed on the basis of a person's symptoms and medical guidelines. 
This method of antibiotic selection is called empiric therapy, and it is based on knowledge about what bacteria cause an infection, and to what antibiotics bacteria may be sensitive or resistant. For example, a simple urinary tract infection might be treated with trimethoprim/sulfamethoxazole. This is because Escherichia coli is the most likely causative bacterium, and may be sensitive to that combination antibiotic. However, bacteria can be resistant to several classes of antibiotics. This resistance might be because a type of bacteria has intrinsic resistance to some antibiotics, because of resistance following past exposure to antibiotics, or because resistance may be transmitted from other sources such as plasmids. Antibiotic sensitivity testing provides information about which antibiotics are more likely to be successful and should therefore be used to treat the infection. Antibiotic sensitivity testing is also conducted at a population level in some countries as a form of screening. This is to assess the background rates of resistance to antibiotics (for example with methicillin-resistant Staphylococcus aureus), and may influence guidelines and public health measures. Methods Once a bacterium has been identified following microbiological culture, antibiotics are selected for susceptibility testing. Susceptibility testing methods are based on exposing bacteria to antibiotics and observing the effect on the growth of the bacteria (phenotypic testing), or identifying specific genetic markers (genetic testing). Methods used may be qualitative, meaning that a result indicates resistance is or is not present; or quantitative, using a minimum inhibitory concentration (MIC) to describe the concentration of antibiotic to which a bacterium is sensitive. There are many factors that can affect the results of antibiotic sensitivity testing, including failure of the instrument, temperature, moisture, and potency of the antimicrobial agent. 
Quality control (QC) testing helps to ensure the accuracy of test results. Organizations such as the American Type Culture Collection and National Collection of Type Cultures provide strains of bacteria with known resistance phenotypes that can be used for quality control. Phenotypic methods Testing based on exposing bacteria to antibiotics uses agar plates or dilution in agar or broth. The selection of antibiotics will depend on the organism grown, and the antibiotics that are available locally. To ensure that the results are accurate, the concentration of bacteria that is added to the agar or broth (the inoculum) must be standardized. This is accomplished by comparing the turbidity of bacteria suspended in saline or broth to McFarland standards—solutions whose turbidity is equivalent to that of a suspension containing a given concentration of bacteria. Once an appropriate concentration (most commonly a 0.5 McFarland standard) has been reached, which can be determined by visual inspection or by photometry, the inoculum is added to the growth medium. Manual The disc diffusion method involves selecting a strain of bacteria, placing it on an agar plate, and observing bacterial growth near antibiotic-impregnated discs. This is also called the Kirby-Bauer method, although modified methods are also used. In some cases, urine samples or positive blood culture samples are applied directly to the test medium, bypassing the preliminary step of isolating the organism. If the antibiotic inhibits microbial growth, a clear ring, or zone of inhibition, is seen around the disc. The bacteria are classified as sensitive, intermediate, or resistant to an antibiotic by comparing the diameter of the zone of inhibition to defined thresholds which correlate with MICs. Mueller–Hinton agar is frequently used in the disc diffusion test. 
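The classification step just described can be sketched in a few lines: the measured zone diameter is compared against breakpoint diameters that correlate with MICs. The thresholds below are purely illustrative placeholders, not real CLSI or EUCAST breakpoints, which differ for every organism-antibiotic pair.

```python
# Sketch of zone-of-inhibition interpretation in disc diffusion testing.
# A larger clear zone means the antibiotic inhibited growth more effectively.
# The breakpoint diameters (in mm) are ILLUSTRATIVE ONLY — real breakpoints
# are organism- and antibiotic-specific and published by CLSI/EUCAST.

ILLUSTRATIVE_BREAKPOINTS_MM = {"resistant_below": 15, "sensitive_from": 20}

def classify_zone(diameter_mm, breakpoints=ILLUSTRATIVE_BREAKPOINTS_MM):
    """Map a zone diameter onto the sensitive/intermediate/resistant categories."""
    if diameter_mm >= breakpoints["sensitive_from"]:
        return "sensitive"
    if diameter_mm < breakpoints["resistant_below"]:
        return "resistant"
    return "intermediate"

print(classify_zone(22))  # sensitive
print(classify_zone(17))  # intermediate
print(classify_zone(10))  # resistant
```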
The Clinical and Laboratory Standards Institute (CLSI) and European Committee on Antimicrobial Susceptibility Testing (EUCAST) provide standards for the type and depth of agar, temperature of incubation, and method of analysing results. Disc diffusion is considered the cheapest and simplest of the methods used to test for susceptibility, and is easily adapted to testing newly available antibiotics or formulations. Some slow-growing and fastidious bacteria cannot be accurately tested by this method, while others, such as Streptococcus species and Haemophilus influenzae, can be tested but require specialized growth media and incubation conditions. Gradient methods, such as the Etest, use a plastic strip impregnated with a gradient of antibiotic concentrations, which is placed on a growth medium and viewed after a period of incubation. The minimum inhibitory concentration can be identified based on the intersection of the teardrop-shaped zone of inhibition with the marking on the strip. Multiple strips for different antibiotics may be used. This type of test is considered a diffusion test. In agar and broth dilution methods, bacteria are placed in multiple small tubes with different concentrations of antibiotics. Whether a bacterium is sensitive or not is determined by visual inspection or automatic optical methods, after a period of incubation. Broth dilution is considered the gold standard for phenotypic testing. The lowest concentration of antibiotics that inhibits growth is considered the MIC. Automated Automated systems exist that replicate manual processes, for example, by using imaging and software analysis to report the zone of inhibition in diffusion testing, or dispensing samples and determining results in dilutional testing. Automated instruments, such as the VITEK 2, BD Phoenix, and Microscan systems, are the most common methodology for AST. 
The specifications of each instrument vary, but the basic principle involves the introduction of a bacterial suspension into pre-formulated panels of antibiotics. The panels are incubated and the inhibition of bacterial growth by the antibiotic is automatically measured using methodologies such as turbidimetry, spectrophotometry or fluorescence detection. An expert system correlates the MICs with susceptibility results, and the results are automatically transmitted into the laboratory information system for validation and reporting. While such automated testing is less labour-intensive and more standardized than manual testing, its accuracy can be comparatively poor for certain organisms and antibiotics, so the disc diffusion test remains useful as a backup method. Genetic methods Genetic testing, such as via polymerase chain reaction (PCR), DNA microarray, and loop-mediated isothermal amplification, may be used to detect whether bacteria possess genes which confer antibiotic resistance. An example is the use of PCR to detect the mecA gene for beta-lactam resistant Staphylococcus aureus. Other examples include assays for testing vancomycin resistance genes vanA and vanB in Enterococcus species, and antibiotic resistance in Pseudomonas aeruginosa, Klebsiella pneumoniae and Escherichia coli. These tests have the benefit of being direct and rapid, as compared with observable methods, and have a high likelihood of detecting a finding when there is one to detect. However, whether resistance genes are detected does not always match the resistance profile seen with phenotypic method. The tests are also expensive and require specifically trained personnel. Polymerase chain reaction is a method of identifying genes related to antibiotic susceptibility. In the PCR process, a bacterium's DNA is denatured and the two strands of the double helix separate. 
Primers specific to a sought-after gene are added to a solution containing the DNA, and a DNA polymerase is added alongside a mixture containing molecules that will be needed (for example, nucleotides and ions). If the relevant gene is present, every time this process runs, the quantity of the target gene will be doubled. After this process, the presence of the genes is demonstrated through a variety of methods including electrophoresis, southern blotting, and other DNA sequencing analysis methods. DNA microarrays and chips use the binding of complementary DNA to a target gene or nucleic acid sequence. The benefit of this is that multiple genes can be assessed simultaneously. Using magnetic nanoparticles studded with a beta-2-glycoprotein I peptide imitating a plasma protein, microbial pathogens could selectively be retrieved from blood culture specimens within hours, in a study published September 2024. Magnets are used to fish out the peptide-bacterial complex, followed by genetic testing. MALDI-TOF Matrix-assisted laser desorption ionisation-time of flight mass spectrometry (MALDI-TOF MS) is another method of susceptibility testing. This is a form of time-of-flight mass spectrometry, in which the molecules of a bacterium are subject to matrix-assisted laser desorption. The ionised particles are then accelerated, and spectral peaks recorded, producing an expression profile, which is capable of differentiating specific bacterial strains after being compared to known profiles. This includes, in the context of antibiotic susceptibility testing, strains such as beta-lactamase producing E. coli. MALDI-TOF is rapid and automated. There are limitations to testing in this format however; results may not match the results of phenotypic testing, and acquisition and maintenance is expensive. 
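The doubling behaviour of PCR described above can be sketched numerically: each cycle ideally doubles the number of copies of the target sequence. The efficiency parameter and the 30-cycle figure below are illustrative choices, and this toy model ignores the plateau that real reactions reach after many cycles.

```python
# Sketch of PCR's exponential amplification: each cycle (ideally) doubles the
# copy count of the target gene. `efficiency` models imperfect cycles, where
# only a fraction of templates are successfully copied per round.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after `cycles` rounds; efficiency=1.0 means perfect doubling."""
    return initial_copies * (1 + efficiency) ** cycles

perfect = pcr_copies(1, 30)         # 2**30 copies from a single template
print(perfect)                      # 1073741824.0
realistic = pcr_copies(1, 30, 0.9)  # 90% per-cycle efficiency
print(realistic < perfect)          # True — imperfect cycles yield far fewer copies
```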
Reporting Bacteria are marked as sensitive, resistant, or having intermediate resistance to an antibiotic based on the minimum inhibitory concentration (MIC), which is the lowest concentration of the antibiotic that stops the growth of bacteria. The MIC is compared to standard threshold values (called "breakpoints") for a given bacterium and antibiotic. Breakpoints for the same organism and antibiotic may differ based on the site of infection: for example, the CLSI generally defines Streptococcus pneumoniae as sensitive to intravenous penicillin if MICs are ≤0.06 μg/ml, intermediate if MICs are 0.12 to 1 μg/ml, and resistant if MICs are ≥2 μg/ml, but for cases of meningitis, the breakpoints are considerably lower. Sometimes, whether a bacterium is marked as resistant to an antibiotic is also based on bacterial characteristics that are associated with known methods of resistance, such as the potential for beta-lactamase production. Specific patterns of drug resistance or multidrug resistance may be noted, such as the presence of an extended-spectrum beta-lactamase. Such information may be useful to the clinician, who can change the empiric treatment to a tailored treatment that is directed only at the causative bacterium. The results of antimicrobial susceptibility tests performed during a given time period can be compiled, usually in the form of a table, to form an antibiogram. Antibiograms help the clinician to select the best empiric antimicrobial therapy based on the local resistance patterns until the laboratory test results are available. Clinical practice Ideal antibiotic therapy is based on determining the causal agent and its antibiotic sensitivity. Empiric treatment is often started before laboratory microbiological reports are available. 
This might be for common or relatively minor infections based on clinical guidelines (such as community-acquired pneumonia), or for serious infections, such as sepsis or bacterial meningitis, in which delayed treatment carries substantial risks. The effectiveness of individual antibiotics varies with the anatomical site of the infection, the ability of the antibiotic to reach the site of infection, and the ability of the bacteria to resist or inactivate the antibiotic. Specimens for antibiotic sensitivity testing are ideally collected before treatment is started. A sample may be taken from the site of a suspected infection, such as a blood culture sample when bacteria are suspected to be present in the bloodstream (bacteraemia), a sputum sample in the case of a pneumonia, or a urine sample in the case of a urinary tract infection. Sometimes multiple samples may be taken if the source of an infection is not clear. These samples are transferred to the microbiology laboratory where they are added to culture media, in or on which the bacteria grow until they are present in sufficient quantities for identification and sensitivity testing to be carried out. When antibiotic sensitivity testing is completed, it will report the organisms present in the sample, and which antibiotics they are susceptible to. Although antibiotic sensitivity testing is done in a laboratory (in vitro), its results usually predict how the antibiotics will perform in a person (in vivo). Sometimes, a decision must be made for some bacteria as to whether they are the cause of an infection, or simply commensal bacteria or contaminants, such as Staphylococcus epidermidis and other opportunistic organisms. Other considerations may influence the choice of antibiotics, including the need to penetrate through to an infected site (such as an abscess), or the suspicion that one or more causes of an infection were not detected in a sample. 
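The breakpoint comparison described in the Reporting section amounts to a threshold lookup. A minimal sketch, using the CLSI non-meningitis breakpoints for Streptococcus pneumoniae and intravenous penicillin quoted earlier; the function name and structure are illustrative, and any real system would use the current published breakpoint tables:

```python
def classify_mic(mic: float, susceptible_max: float, resistant_min: float) -> str:
    """Classify an MIC (ug/ml) against a susceptible/resistant breakpoint pair."""
    if mic <= susceptible_max:
        return "sensitive"
    if mic >= resistant_min:
        return "resistant"
    return "intermediate"

# S. pneumoniae vs intravenous penicillin, non-meningitis breakpoints from the text:
# sensitive <= 0.06 ug/ml, resistant >= 2 ug/ml, intermediate in between
for mic in (0.06, 0.5, 2.0):
    print(mic, classify_mic(mic, susceptible_max=0.06, resistant_min=2.0))
```

The same function with lower thresholds would reproduce the stricter meningitis interpretation mentioned above.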
History Since the discovery of the beta-lactam antibiotic penicillin, the rates of antimicrobial resistance have increased. Over time, methods for testing the sensitivity of bacteria to antibiotics have developed and changed. Alexander Fleming in the 1920s developed the first method of susceptibility testing. The "gutter method" that he developed was a diffusion method, involving an antibiotic that was diffused through a gutter made of agar. In the 1940s, multiple investigators, including Pope, Foster and Woodruff, Vincent and Vincent used paper discs instead. All these methods involved testing only susceptibility to penicillin. The results were difficult to interpret and unreliable, as they were not standardised between laboratories. Dilution has been used as a method to grow and identify bacteria since the 1870s, and as a method of testing the susceptibility of bacteria to antibiotics since 1929, also by Alexander Fleming. The way of determining susceptibility changed from how turbid the solution was, to the pH (in 1942), to optical instruments. The use of larger tube-based "macrodilution" testing has been superseded by smaller "microdilution" kits. In 1966, the World Health Organisation confirmed the Kirby–Bauer method as the standard method for susceptibility testing; it is simple, cost-effective and can test multiple antibiotics. The Etest was developed in 1980 by Bolmström and Eriksson, and MALDI-TOF was developed in the 2000s. An array of automated systems has been developed since the 1980s. PCR was the first genetic test available, and was first published as a method of detecting antibiotic susceptibility in 2001. Further research Point-of-care testing is being developed to speed up the time for testing, and to help practitioners avoid prescribing unnecessary antibiotics in the style of precision medicine. Traditional techniques typically take between 12 and 48 hours, although they can take up to five days. 
In contrast, rapid testing using molecular diagnostics is defined as "being feasible within an 8-h(our) working shift". Progress has been slow due to a range of reasons including cost and regulation. Additional research is focused on the shortcomings of current testing methods. Besides the time phenotypic methods take to report, they are laborious, poorly portable, difficult to use in resource-limited settings, and carry a risk of cross-contamination. As of 2017, point-of-care resistance diagnostics were available for methicillin-resistant Staphylococcus aureus (MRSA), rifampin-resistant Mycobacterium tuberculosis (TB), and vancomycin-resistant enterococci (VRE) through GeneXpert by molecular diagnostics company Cepheid. Quantitative PCR, which can determine the percentage of detected bacteria that possess a resistance gene, is being explored. Whole genome sequencing of isolated bacteria is also being explored, and is likely to become more available as costs decrease and speed increases over time. Additional methods explored include microfluidics, which uses a small amount of fluid and a variety of testing methods, such as optical, electrochemical, and magnetic. Such assays require only a small amount of fluid, and are rapid and portable. The use of fluorescent dyes has been explored. These involve labelled probes that target biomarkers: nucleic acid sequences present within cells when the bacterium is resistant to an antibiotic. An isolate of bacteria is fixed in position and then dissolved. The isolate is then exposed to the fluorescent dye, which luminesces when viewed. Improvements to existing platforms are also being explored, including improvements in imaging systems that are able to more rapidly identify the MIC in phenotypic samples, or the use of bioluminescent enzymes that reveal bacterial growth to make changes more easily visible. 
Bibliography References External links Antibiotics Microbiology techniques Infectious diseases Antimicrobial resistance
Antibiotic sensitivity testing
Chemistry,Biology
3,806
9,744,193
https://en.wikipedia.org/wiki/Penile%20implant
A penile implant is an implanted device intended for the treatment of erectile dysfunction, Peyronie's disease, ischemic priapism, deformity and any traumatic injury of the penis, and for phalloplasty or metoidioplasty, including in gender-affirming surgery. Men also opt for penile implants for aesthetic purposes. Men's satisfaction and sexual function are influenced by discomfort over genital size, which leads some to seek surgical and non-surgical solutions for penis alteration. Although there are many distinct types of implants, most fall into one of two categories: malleable and inflatable implants. History The first modern prosthetic reconstruction of a penis is attributed to NA Borgus, a German physician who performed the first surgical attempts in 1936 on soldiers with traumatic amputations of the penis. He used rib cartilages as prosthetic material and reconstructed the genitals for both micturition and intercourse purposes. Willard E. Goodwin and William Wallace Scott were the first to describe the placement of synthetic penile implants using acrylic prostheses in 1952. Silicone-based penile implants were developed by Harvey Lash and the first case series was published in 1964. The development of the high-grade silicone that is currently used in penile implants is credited to NASA. The prototypes of the contemporary inflatable and malleable penile implants were presented in 1973 during the annual meeting of the American Urological Association by two groups of physicians from Baylor University (Gerald Timm, William E. Bradley and F. Brantley Scott) and the University of Miami (Michael P. Small and Hernan M. Carrion). Small and Carrion pioneered the popularization of semi-rigid penile implants with the introduction of the Small-Carrion prosthesis (Mentor, USA) in 1975. Brantley Scott described the initial device as composed of two inflatable cylindrical bodies made up of silicone, a reservoir containing radiopaque fluid and two pumping units. 
The first generation products were marketed through American Medical Systems (AMS; currently Boston Scientific), with which Brantley Scott was associated. Many device updates have been released by AMS since the first generation implants. In 1983, Mentor (currently Coloplast) joined the market. In 2017, there were more than ten manufacturers of penile implants in the world; however, only a few now remain on the market. The latest additions to the market are Zephyr Surgical Implants and Rigicon Innovative Urological Solutions. Along with penile implants for biological men, Zephyr Surgical Implants introduced the first line of inflatable and malleable penile implants designed for sex reassignment for trans men. In recent years, Rigicon Innovative Urological Solutions, a US-based company, has made significant advancements in the field of penile implants. In 2017, they released the 'Rigi10,' a malleable implant that expanded the market's options. Following this, in 2019, they introduced both the 'Infla10' series of inflatable implants, which includes the Infla10 AX, Infla10 X, and Infla10 models, and the 'Rigi10 Hydrophilic,' a hydrophilic-coated malleable model. These were important additions to the range of penile implant technologies available, contributing to the diversity and progress in the development of penile implants and offering patients more varied and tailored treatment solutions. According to analysis of the 5% Medicare Public Use Files from 2001 to 2010, approximately 3% of patients diagnosed with erectile dysfunction opt for penile implantation. Each year nearly 25,000 inflatable penile prostheses are implanted in the USA. Types Malleable penile implant The malleable (also known as non-inflatable or semi-rigid) penile prosthesis is a pair of rods implanted into the corpora of the penis. 
The rods are hard, but 'malleable' in the sense that they can be adjusted manually into the erect position. There are two types of malleable implants: one that is made of silicone and does not have a rod inside, also called a soft implant, and another with a silver or steel spiral wire core coated with silicone. Some of the models have trimmable tails intended for length adjustment. Currently, a variety of malleable penile implants are available worldwide. Inflatable penile implant The inflatable penile implant (IPP), more recently developed, is a set of inflatable cylinders and a pump system. Based on the differences in structure, there are two types of inflatable penile implants: two-piece and three-piece IPPs. Both types of inflatable devices are filled with sterile saline solution, which is pumped into the cylinders when the device is in use. The cylinders are implanted into the cavernous body of the penis. The pump system is attached to the cylinders and placed in the scrotum. Three-piece implants have a separate large reservoir connected to the pump. The reservoir is commonly placed in the retropubic space (Retzius' space); however, other locations have also been described, such as between the transverse muscle and rectus muscle. Three-piece implants provide rigidity and girth that more closely resemble a natural erection. Additionally, due to the presence of a large reservoir, three-piece implants provide full flaccidity of the penis when deflated, thus bringing more comfort than two-piece inflatable and malleable implants. The saline solution is pumped manually from the reservoir into bilateral chambers of cylinders implanted in the shaft of the penis, which replaces the non- or minimally-functioning erectile tissue. This produces an erection. The glans of the penis, however, remains unaffected. Ninety to ninety-five percent of inflatable prostheses produce erections suitable for sexual intercourse. 
In the United States, the inflatable prosthesis has largely replaced the malleable one, due to its lower rate of infections, high device survival rate and 80–90% satisfaction rate. The first IPP prototype presented in 1975 by Scott and colleagues was a three-piece prosthesis (two cylinders, two pumps and a fluid reservoir). Since then, the IPP has undergone multiple modifications and improvements for device reliability and durability, including changes in the chemical material used in implant manufacturing, the use of hydrophilic and antibiotic-eluting coatings to reduce the rates of infections, and the introduction of one-touch release mechanisms. Surgical techniques used for the implantation of penile prostheses have also improved along with the evolution of the device. Inflatable penile implants were one of the first interventions in urology where the "no-touch" surgical technique was introduced. This has significantly reduced the rates of post-operative infections. Medical use Erectile dysfunction Despite the recent rapid and extensive development of non-surgical management options for erectile dysfunction, especially novel targeted medications and gene therapy, penile implants remain the mainstay and gold-standard choice for the treatment of erectile dysfunction refractory to oral medications and injectable therapy. Additionally, penile implants can be a relevant option for those with erectile dysfunction who want to proceed with a permanent solution without medical therapy. Penile implants have been used for the treatment of erectile dysfunction with various etiologies, including vascular, cavernosal, neurogenic, psychological and post-surgical (e.g. prostatectomy). The American Urological Association recommends informing all men with erectile dysfunction about penile implants as a choice of treatment and discussing the potential outcomes with them. 
Penile deformity Penile implants can help recover the natural shape of the penis in various conditions that have led to penile deformity. These can be traumatic injuries, penile surgeries, or disfiguring and fibrosing diseases of the penis, such as Peyronie's disease. In Peyronie's disease, the change in penile curvature affects normal sexual intercourse as well as causing erectile dysfunction due to disruption of blood flow in the cavernous bodies of the penis. Therefore, implantation of a penile prosthesis in Peyronie's disease addresses several mechanisms involved in the pathophysiology of the disease. Female-to-male sex reassignment Although different models of penile prostheses have been reported to be implanted after phalloplasty procedures, with the first case described in 1978 by Pucket and Montie, the first penile implants designed and produced specifically for female-to-male gender reassignment surgery for trans men were introduced in 2015 by Zephyr Surgical Implants. Both malleable and inflatable models are available. These implants have a more realistic shape, with an ergonomic glans at the tip of the prosthesis. The inflatable model has an attached pump resembling a testicle. The prosthesis is implanted with a sturdy fixation on the pubic bone. Another, thinner malleable implant is intended for metoidioplasty. Outcomes Satisfaction The overall satisfaction rate with penile implants reaches over 90%. Both self- and partner-reported satisfaction rates are evaluated to assess the outcomes. It has been shown that implantation of an inflatable penile prosthesis brings more patient and partner satisfaction than medication therapy with PDE5 inhibitors or intracavernosal injections. Satisfaction rates are reported to be higher with inflatable rather than malleable implants, but there are no differences between two-piece and three-piece devices. 
The most frequent reasons for dissatisfaction are reduced penis length and girth, failed expectations and difficulties with device use. Thus, it is vital to provide patients and their partners with detailed preoperative counselling and instructions. Curvature correction In 33% to 90% of patients with Peyronie's disease who have had an inflatable penile implant procedure, the penile deformity is successfully corrected. Residual curvature after penile implant placement usually requires intraoperative surgical intervention. Progress in procedure Dilation of the corpora cavernosa, typically with Hegar sounds, before inserting the device has been a common part of implantation procedures. This dilation destroys erectile tissue. It has been shown that a tissue-sparing technique, i.e. without dilation, correlates with superior outcomes: some remaining natural erectile response can be preserved, and post-operative pain is reduced as well. Complications The most common complication associated with penile implant placement appears to be infection, with reported rates of 1–3%. Both surgical site and device infections are reported. When the infection involves the penile implant itself, implant removal and irrigation of the cavities with antiseptic solutions are required. In this scenario, placement of a new implant is needed to avoid further tissue fibrosis and shortening of the penis. The rate of repeat surgeries or device replacements ranges from 6% to 13%. Other reported complications include perforation of the corpus cavernosum and urethra (0.1–3%), commonly occurring in patients with previous fibrosis, prosthesis erosion or extrusion, change in glans shape, hematoma, shortening of penis length, and device malfunction. Due to continuous improvement of surgical techniques and modifications of implants, complication rates have dramatically decreased over time. 
To overcome post-operative penile shortening and to increase the perceived length of the penis and patient satisfaction, ventral and dorsal phalloplasty procedures in combination with penile implants have been described. Modified glanulopexy has been proposed to prevent supersonic transporter deformity and glandular hypermobility which are possible complications of penile implants. Sliding techniques in which the penis is cut and elongated with penile implants have been performed in cases of severe penile shortening. However, these techniques had higher rates of complications and are currently avoided. References External links Male genital surgery Men's health Andrology Sexual health Human penis Penile erection Masculinizing surgery Medical devices Medical equipment Implants (medicine) Prosthetics
Penile implant
Biology
2,577
34,739,938
https://en.wikipedia.org/wiki/Catalytic%20distillation
Catalytic distillation is a branch of reactive distillation which combines the processes of distillation and catalysis to selectively separate mixtures within solutions. Its main function is to maximize the yield of catalytic organic reactions, such as the refining of gasoline. The earliest case of catalytic distillation is thought to date back to 1966; however, the idea was officially patented in 1980 by Lawrence A. Smith, Jr. The process is currently used to purify gasoline, extract rubber, and form plastics. Catalysts The catalysts used for catalytic distillation are composed of different substances and packed onto supports of varying shapes. The majority of the catalysts are powdered acids, bases, metal oxides, or metal halides. These substances tend to be highly reactive, which can significantly speed up the rate of the reaction, making them effective catalysts. The shapes onto which the catalysts are packed must be able to form a consistent geometric arrangement to provide equal spacing in the catalyst bed (an area in the distillation column where the reactant and catalyst come into contact to form the products). This spacing is meant to ensure the catalysts are spread evenly within the column. The catalyst bed must be largely spacious (about 50% empty) so that evaporated gaseous reactants may react on the catalyst and form gaseous products. The catalyst bed must also be able to contract and expand as it may have to respond to pressure changes within the column. Before the catalysts are packed onto the shape, they are first packed onto something porous like a cloth or wire mesh. The cloth may be made from cotton, fiberglass, polyester, nylon, or other similar materials. The mesh is generally made from aluminum, steel, or stainless steel. In terms of shapes, catalysts are usually packed on rings, saddles, balls, sheets, tubes, or spirals. These shapes tend to be made from fiberglass, teflon, and nonreactive metals. 
Before the catalysts are introduced into the system, they are either bagged, attached on metal grills or screens, or placed on polymer foams. Process Within the catalytic distillation column, liquid reactants are catalyzed while concurrently being heated. As a result, the products immediately begin to vaporize and are separated from the initial solution. By catalyzing and heating the reactants at the same instant, the newly formed products are rapidly boiled out of the system. With the products absent, Le Chatelier's principle comes into effect, and new products are formed from the reactants to replace those removed. Since the products are continuously exiting, the system never reaches equilibrium. This continuous removal of products drives the reaction to completion. Reflux In most reactions carried out by catalytic distillation, the reactants are often more volatile than the products. Because of this, an internal recycling system, known as the reflux, is implemented right after the condenser (an area within the column where escaped gases are cooled down to liquids). The reflux transfers the concentrated vapor back to the catalyst area. The reflux also returns a portion of the condensed liquids to the column to ensure only the products with the lowest boiling points are captured. As the reflux returns impure mixtures, the catalysts are washed, prolonging their usable life. Types of Reactions Reactions within catalytic distillation columns include: dimerization - forming a single molecule from two monomers with weak or strong bonds. polymerization - forming a three-dimensional molecule from multiple monomers. etherification - forming a molecule by bonding two alkyl (CHn) groups around an oxygen atom. esterification - forming a molecule from an oxygen-containing acid (oxoacid) and a molecule containing an OH (hydroxyl) group. isomerization - changing the structure of a molecule without changing its individual elements and their respective quantities. 
alkylation - transferring a CHn group from one molecule to another. hydrogenation - adding hydrogen atoms to a molecule. dehydrogenation - separating hydrogen atoms from a molecule. Improvements from two column distillation In two column distillation, obtaining the desired product calls for a column for catalysis and then a column for distillation. This means that the distillation company would have to fund the construction of two large columns as well as a method for transporting the contents of one column to another. With catalytic distillation, the company only needs to fund one column, which eliminates both the cost of a second column and the cost of moving chemicals from one column to another. This optimization cuts overhead costs to nearly half the original cost. In addition to cutting costs, catalytic distillation is a milestone in efficiency and efficacy. Less time is spent because it is not necessary to move the contents from one column to another. Also, the percentage yield from reactants to products has increased in some reactions from 96–97% to 99.9%. References Distillation
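The Le Chatelier argument in the Process section above, that continuously boiling off the product drives a reversible reaction to completion, can be illustrated with a toy kinetic model. The reaction A ⇌ B, the rate constants and the removal rate below are all hypothetical, chosen only to show the principle:

```python
def simulate(removal_rate: float, kf: float = 1.0, kr: float = 0.5,
             dt: float = 0.001, steps: int = 20000) -> float:
    """Euler integration of a reversible reaction A <-> B in which the
    product B is continuously removed (boiled off) at the given rate.
    Returns the fraction of reactant A remaining at the end."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        forward, reverse = kf * a, kr * b
        a += (-forward + reverse) * dt
        b += (forward - reverse - removal_rate * b) * dt
    return a

print(f"closed system: A left = {simulate(0.0):.3f}")  # settles at equilibrium
print(f"with removal:  A left = {simulate(5.0):.3f}")  # driven toward completion
```

With no removal the system stops at the equilibrium composition (here one third of A remains, set by kf/kr); with removal the reactant is almost entirely consumed, which is the conversion advantage the article attributes to catalytic distillation.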
Catalytic distillation
Chemistry
1,015
1,899,305
https://en.wikipedia.org/wiki/Boltzmann%20relation
In a plasma, the Boltzmann relation describes the number density of an isothermal charged particle fluid when the thermal and the electrostatic forces acting on the fluid have reached equilibrium. In many situations, the electron density of a plasma is assumed to behave according to the Boltzmann relation, due to the electrons' small mass and high mobility. Equation If the local electrostatic potentials at two nearby locations are φ1 and φ2, the Boltzmann relation for the electrons takes the form: $n_e(\phi_2) = n_e(\phi_1)\, e^{e(\phi_2 - \phi_1)/k_B T_e}$, where ne is the electron number density, e is the elementary charge, Te is the temperature of the plasma, and kB is the Boltzmann constant. Derivation A simple derivation of the Boltzmann relation for the electrons can be obtained using the momentum fluid equation of the two-fluid model of plasma physics in absence of a magnetic field. When the electrons reach dynamic equilibrium, the inertial and the collisional terms of the momentum equations are zero, and the only terms left in the equation are the pressure and electric terms. For an isothermal fluid, the pressure force takes the form $-k_B T_e \nabla n_e$, while the electric term is $e n_e \nabla \phi$. Integration leads to the expression given above. In many problems of plasma physics, it is not useful to calculate the electric potential on the basis of the Poisson equation because the electron and ion densities are not known a priori, and if they were, because of quasineutrality the net charge density is the small difference of two large quantities, the electron and ion charge densities. If the electron density is known and the assumptions hold sufficiently well, the electric potential can be calculated simply from the Boltzmann relation. Inaccurate situations Discrepancies with the Boltzmann relation can occur, for example, when oscillations occur so fast that the electrons cannot find a new equilibrium (see e.g. plasma oscillations) or when the electrons are prevented from moving by a magnetic field (see e.g. lower hybrid oscillations). References Plasma physics equations
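As a numeric illustration of the relation above, the density ratio between two points follows directly from the potential difference and the electron temperature. The helper name and the example values below are arbitrary; the physical constants are the standard SI values:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def density_ratio(phi1: float, phi2: float, te_kelvin: float) -> float:
    """n_e(phi2)/n_e(phi1) for an isothermal electron fluid in equilibrium,
    per the Boltzmann relation."""
    return math.exp(E_CHARGE * (phi2 - phi1) / (K_B * te_kelvin))

# For ~1 eV electrons (about 11600 K), a +1 V potential rise
# increases the electron density by a factor of e:
print(density_ratio(0.0, 1.0, 11604.5))  # ~2.718
```

The sign convention matches the equation above: electrons, being negatively charged, accumulate where the potential is higher.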
Boltzmann relation
Physics
401
2,316,561
https://en.wikipedia.org/wiki/Cyclocomputer
A cyclocomputer, cycle computer, cycling computer or cyclometer is a device mounted on a bicycle that calculates and displays trip information, similar to the instruments in the dashboard of a car. The computer with display, or head unit, usually is attached to the handlebar for easy viewing. Some GPS watches can also be used as a display. History In 1895, Curtis H. Veeder invented the Cyclometer. The Cyclometer was a simple mechanical device that counted the number of rotations of a bicycle wheel. A cable transmitted the number of rotations of the wheel to an analog odometer visible to the rider, which converted the wheel rotations into the number of miles traveled according to a predetermined formula. After founding the Veeder Manufacturing Company, Veeder promoted the Cyclometer with the slogan, It's Nice to Know How Far You Go. The Cyclometer's success led to many other competing types of mechanical computing devices. Eventually, cyclometers were developed that could measure speed as well as distance traveled. Basic operation The head A basic cyclocomputer with a wheel speed sensor may display the current speed, average speed, maximum speed, trip distance, trip time, total distance traveled, and the current time. More advanced models with additional sensors and storage may display and record altitude, incline (inclinometer), heart rate, power output (measured in watts) and temperature, as well as offer additional functions such as pedaling cadence, a stopwatch and even GPS navigation and video data overlay synchronization. They have become useful accessories in bicycling as a sport and as a recreational activity. The display is usually implemented with a liquid crystal display, and it may show one or more values at once. Many current models display one value, such as current speed, with large numbers, and another number that the user may select, such as time, distance, average speed, etc., with small numbers. 
The head usually has one or more buttons that the user can push to switch the value(s) displayed, reset values such as time and trip distance, calibrate the unit, and on some units, turn on a back light for the display. Most displays are navigated by pressing buttons and high-end models use a capacitive touch screen to navigate screens and maps. The wheel sensor The older, traditional sensors have a magnet attached to a spoke of either the front or rear wheel. A sensor based either on the Hall effect, or on a magnetic reed switch, is attached to the fork or the rear of the frame. The sensor detects when the magnet passes once per rotation of the wheel and time stamps or time codes the revolution count. Alternatively, a sensor may be attached to the wheel hub. Distance is determined by counting the number of rotations, which translates into the number of wheel circumferences passed. Speed is calculated as distance over elapsed time, using the circumference of the wheel and the time it took to make one rotation. The cadence sensor To measure cadence (revolutions per minute of the crankarm), a magnet is mounted to the crankarm, and a sensor mounted to the frame. This works on the same principle as the speedometer function and measures the turning of the cranks and front chain ring. Transmission Some models use a wired connection between the sensor and the head unit. Other models transmit the data wirelessly from the sensor/transmitter to the head unit. Data can be exported to an SD card, computer, or phone and uploaded to an internet web service. Wireless cadence and speed sensors use the wireless communication standards ANT+ and Bluetooth Low Energy and can directly communicate with a smartphone application that also uses the phone's GPS, barometer, temperature, clock, and other sensors to create a more detailed picture, record, or map. Calibration Once a new computer is installed, it usually requires proper configuration. 
This normally includes selecting distance units (kilometers vs. miles) and the circumference of the wheel. Since the sensor measures wheel rotation, different wheel sizes will translate to different measures of speed and distance for a given number of rotations. For more accuracy the bicycle (with the set cyclocomputer) must be ridden by the intended rider over an accurately measured distance. The computer's reading is then compared with the known distance and any necessary corrections made. Additional information Besides variables calculated from the rotating wheels or crank, cyclocomputers can display other information. Gear For integrated shifters on racing bicycles, the gear can be read by the computer: Shimano's Flight Deck and Campagnolo's ErgoBrain work with their respective systems to detect the gearing. This allows indirect measurement of cadence. These systems do not have sensors on the crankset or cassette to determine what gear the bicycle is in. They work exclusively with the shifters, which may result in misleading information. Instead of knowing what gear the bicycle is in, they rely on sensing when the cyclist changes gears using sensors in the shifters. If the gear change doesn't actually happen, or the computer's sensors are too sensitive (e.g.: when braking with STI-style shifters), the information displayed is not accurate. Performance With additional sensors, other performance measurements are available: A heart rate monitor can be integrated into the computer, using a chest strap sensor. A power meter measures power output in watts, using a torque sensor in the bottom bracket, pedals, or rear hub. Environment Some models also have sensors built into the head that measure and display environmental parameters such as temperature and altitude. Cyclist power measurement Some more sophisticated models are able to measure the rider's power in terms of watts. 
These units incorporate elements that measure torque at the crank, or rear wheel hub, or tension on the chain. This technology began in the late 1980s. (See Team Strawberry for the early development and testing stages of this technology.) Maps Some cyclocomputers (such as the Garmin Edge, trimm One, Wahoo Elemnt Bolt or Hammerhead Karoo) can be loaded with maps and can thus show the rider's position on the map, or provide turn-by-turn directions for a pre-determined route. Electric bicycles Most electric bicycles have a microcontroller in the motor controller to calculate input cadence or torque, adjust amperage, control the motor, and send information to the display screen. Cyclists can often select the level of power assist using the computer. The computer also monitors the speed and can deactivate the motor for braking or if required by law (for example, in many countries pedelec bikes cannot use motor assist above 25 km/h). See also Outline of cycling References American inventions Computer Computers Consumer electronics Electronics industry
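The wheel-sensor arithmetic described above is simple enough to sketch in Python. The wheel circumference used here (2.096 m, roughly a 700x23C road wheel) is an assumed example value; a real unit would use the calibrated circumference described in the Calibration section.

```python
# Sketch of the speed/distance arithmetic a cyclocomputer performs.
# WHEEL_CIRCUMFERENCE_M is an illustrative assumption, not a universal constant.

WHEEL_CIRCUMFERENCE_M = 2.096  # metres travelled per wheel revolution (assumed)

def speed_kmh(seconds_per_revolution: float) -> float:
    """Instantaneous speed from the time taken for one wheel rotation."""
    return WHEEL_CIRCUMFERENCE_M / seconds_per_revolution * 3.6  # m/s -> km/h

def distance_km(revolution_count: int) -> float:
    """Trip distance from the number of wheel rotations counted."""
    return revolution_count * WHEEL_CIRCUMFERENCE_M / 1000.0

# One revolution every 0.25 s -> 2.096 / 0.25 = 8.384 m/s
print(round(speed_kmh(0.25), 1))     # 30.2 (km/h)
print(round(distance_km(10000), 2))  # 20.96 (km after 10,000 rotations)
```

Cadence works the same way, substituting crank revolutions for wheel revolutions and reporting revolutions per minute instead of distance per hour.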
Cyclocomputer
Technology
1,396
77,133,261
https://en.wikipedia.org/wiki/Personality%20hire
In recruitment, a personality hire refers to the practice of hiring candidates for their personality, rather than their tangible skill set. Personality hires typically have stronger soft skills than hard skills, may serve as a morale booster within the workplace, and help build corporate culture. Some candidates may label themselves as personality hires due to imposter syndrome. The term came into mainstream use in 2023 and is analogous to the term diversity hire. A personality hire may reflect an implicit cognitive affinity bias. Personality hires have been criticized for their lack of skills and competency. Due to their sociable personalities, personality hires may have to set personal boundaries. See also Cult of personality References Personality Recruitment Employment services Human resource management
Personality hire
Biology
143
1,156,776
https://en.wikipedia.org/wiki/Implicant
In Boolean logic, the term implicant has either a generic or a particular meaning. In the generic use, it refers to the hypothesis of an implication (implicant). In the particular use, a product term (i.e., a conjunction of literals) P is an implicant of a Boolean function F, denoted , if P implies F (i.e., whenever P takes the value 1 so does F). For instance, implicants of the function include the terms , , , , as well as some others. Prime implicant A prime implicant of a function is an implicant (in the above particular sense) that cannot be covered by a more general implicant (one that is more reduced, i.e., has fewer literals). W. V. Quine defined a prime implicant to be an implicant that is minimal—that is, the removal of any literal from P results in a non-implicant for F. Essential prime implicants (also known as core prime implicants) are prime implicants that cover an output of the function that no combination of other prime implicants is able to cover. Using the example above, one can easily see that while (and others) is a prime implicant, and are not. From the latter, multiple literals can be removed to make it prime: , and can be removed, yielding . Alternatively, and can be removed, yielding . Finally, and can be removed, yielding . The process of removing literals from a Boolean term is called expanding the term. Expanding by one literal doubles the number of input combinations for which the term is true (in binary Boolean algebra). Using the example function above, we may expand to or to without changing the cover of . The sum of all prime implicants of a Boolean function is called its complete sum, minimal covering sum, or Blake canonical form. See also Quine–McCluskey algorithm Karnaugh map Petrick's method References External links Slides explaining implicants, prime implicants and essential prime implicants Examples of finding essential prime implicants using K-map Boolean algebra
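The definitions above can be checked mechanically by brute force over all input assignments. The article's worked example function was lost in extraction, so this sketch uses a hypothetical three-variable function F(x, y, z) = xy + x'z purely to illustrate the implicant and prime-implicant tests:

```python
from itertools import product

# Hypothetical example function (an assumption of this sketch, not the
# article's original example): F(x, y, z) = (x AND y) OR (NOT x AND z).
def F(x, y, z):
    return int((x and y) or ((not x) and z))

VARS = ("x", "y", "z")

def is_implicant(term, f=F):
    """term fixes some literals, e.g. {'x': 1, 'y': 1} for the product xy.
    P is an implicant of f if f is 1 on every assignment consistent with P."""
    for bits in product([0, 1], repeat=len(VARS)):
        assign = dict(zip(VARS, bits))
        if all(assign[v] == val for v, val in term.items()) and not f(**assign):
            return False
    return True

def is_prime(term, f=F):
    """Prime implicant: an implicant from which no literal can be removed."""
    if not is_implicant(term, f):
        return False
    return all(
        not is_implicant({v: val for v, val in term.items() if v != drop}, f)
        for drop in term
    )

print(is_implicant({"x": 1, "y": 1}))      # True: xy implies F
print(is_prime({"x": 1, "y": 1}))          # True: neither x nor y alone implies F
print(is_prime({"x": 1, "y": 1, "z": 0}))  # False: xyz' is covered by xy
```

Dropping a literal is exactly the "expanding" operation described above: each removal doubles the number of input combinations the term covers, and a term is prime when no expansion is still an implicant.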
Implicant
Mathematics
455
47,275,899
https://en.wikipedia.org/wiki/Francisco%20Santos%20Leal
Francisco (Paco) Santos Leal (born 28 May 1968) is a Spanish mathematician at the University of Cantabria, known for finding a counterexample to the Hirsch conjecture in polyhedral combinatorics. In 2015 he won the Fulkerson Prize for this research. Santos was born in Valladolid, Spain. He earned a licenciate in mathematics from the University of Cantabria in 1991, and a master's degree in pure mathematics from Joseph Fourier University in Grenoble, France in the same year. He returned to Cantabria for his doctorate, which he finished in 1995, with a thesis on the combinatorial geometry of algebraic curves and Delaunay triangulations supervised by Tomás Recio. He also has a second licenciate, in physics, from Cantabria in 1996. After postdoctoral studies at the University of Oxford he returned to Cantabria as a faculty member in 1997, and was promoted to full professor in 2008. From 2009 to 2013 he was vice-dean of the Faculty of Sciences at Cantabria. As well as being honored with the Fulkerson Prize in 2015 for his counterexample to the Hirsch conjecture, he was an invited sectional speaker at the 2006 International Congress of Mathematicians. Santos is an Editor-in-Chief of the Electronic Journal of Combinatorics. References External links Home page Google scholar profile 1968 births Living people People from Valladolid Geometers Academic staff of the University of Cantabria 20th-century Spanish mathematicians 21st-century Spanish mathematicians Grenoble Alpes University alumni
Francisco Santos Leal
Mathematics
327
33,538,586
https://en.wikipedia.org/wiki/Fan%20filter%20unit
A fan filter unit (FFU) is a type of motorized air filtering equipment. It is used to supply purified air to cleanrooms, laboratories, medical facilities or microenvironments by removing harmful airborne particles from recirculating air. The units are installed within the system's ceiling or floor grid. Large cleanrooms require a proportionally large number of FFUs, which in some cases may range from several hundred to several thousand. Units often contain their own pre-filter, HEPA filter, and internally controllable fan for air distribution. Design FFUs are typically manufactured in 4' x 2', 3' x 2', or 2' x 2' steel housings, which can be placed in ceiling grid bays of similar dimensions. Units often contain a pre-filter as well as a HEPA (high-efficiency particulate air), ULPA (ultra-low particulate air) or other MERV (minimum efficiency reporting value) filter. A motorized fan is used to pull air through the filters for distribution to rooms or enclosed work stations such as hoods. Fan speed is typically controlled via a step-wise or rheostat motor adjustment. Desired cleanliness levels determine the filter used: HEPA filters remove particles 0.3 μm or larger at 99.99% efficiency, while ULPA filters remove particles 0.12 μm or larger at 99.999% efficiency. FFUs are engineered for laminar air flow, as is required in critical environments. Controlled air flowing in a uniform direction and speed (carrying microparticles) is cleaner than turbulent air that flows in multiple directions or at inconsistent speeds. Eddies created by turbulent air cause contaminating microparticles to settle on clean surfaces. Uses 4' x 2' or 2' x 2' FFUs are designed to be placed in ceiling grid bays with similar dimensions. Ceiling grids with standard size bays that match FFU dimensions are used to construct cleanrooms. 
Depending on the cleanliness requirements of the controlled space, more fan filter units can be added to the grid in order to meet ISO standards for airflow velocity and air changes per hour. FFUs can be used in place of a more conventional recirculating air unit such as a ducted or plenum air system. As FFUs require space above the ceiling grid (13" for the FFU module plus another 1–2 feet of "empty" air-filled space), plenums are commonly used for clean rooms with height restrictions; they are the only air systems that work in layouts with smaller internal floor dimensions. Additionally, when fewer than 20 filters are installed in a room, a fan-powered HEPA FFU is generally considered to be less expensive than a more conventional ducted supply system. When a microenvironment of clean air is required, FFUs can be used to construct enclosed work spaces, or laminar flow cabinets. Applying the same principle as the larger cleanroom grid, FFUs can be placed directly in a free-standing grid above the space that requires clean air. This approach is also used for silicon wafer etching in the semiconductor industry. References External links Negative Ion Generator Air Purifier Information Air filters
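The sizing logic behind "adding FFUs to meet air changes per hour" can be sketched as a back-of-the-envelope calculation. The per-unit flow rate, room dimensions, and ACH target below are illustrative assumptions, not vendor figures or ISO requirements:

```python
import math

# Rough FFU-count sizing: air changes per hour (ACH) = total airflow / room volume.
# cfm_per_ffu and the room dimensions are hypothetical example values.

def ffus_needed(room_volume_ft3: float, target_ach: float,
                cfm_per_ffu: float = 700.0) -> int:
    """Number of units whose combined airflow reaches the target ACH."""
    required_cfm = target_ach * room_volume_ft3 / 60.0  # ft^3/min needed
    return math.ceil(required_cfm / cfm_per_ffu)

room_ft3 = 20 * 20 * 10  # a 20' x 20' room with a 10' ceiling (assumed)
print(ffus_needed(room_ft3, target_ach=120))  # 12 units for ~120 ACH
```

Real designs also account for filter loading, plenum pressure, and coverage of the ceiling grid, so actual unit counts are usually higher than this airflow-only estimate.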
Fan filter unit
Chemistry
666
37,992,682
https://en.wikipedia.org/wiki/Blanking%20level
In video technology, blanking level is the level of the composite video signal during the front and back porches. The composite video signal is actually the video information superimposed on blanking. The total level of the composite video signal (blanking + video) is 1000 mV. This level can also be given in IRE units such that the level difference reserved for video information is 100 IRE units. So white corresponds to 100 IRE units and blanking level corresponds to 0 IRE units. The level of black is 0 IRE units in the case of CCIR System B and CCIR System G (European systems) and 7.5 IRE units in the case of CCIR System M (American system), although NTSC-J in Japan, also utilizing System M, uses 0 IRE for both black and blanking, much like Systems B & G. So, while there is no difference between the black and the blanking levels in most systems, they differ by 50 mV in system M. (When defined in terms of voltage difference, 7.5 IRE units is almost equal to 50 mV.) References Television technology Broadcast engineering
Blanking level
Technology,Engineering
239
10,033,072
https://en.wikipedia.org/wiki/Frizzled
Frizzled is a family of atypical G protein-coupled receptors that serve as receptors in the Wnt signaling pathway and other signaling pathways. When activated, Frizzled leads to activation of Dishevelled in the cytosol. Species distribution Frizzled proteins and the genes that encode them have been identified in an array of animals, from sponges to humans. Function Frizzled proteins also play key roles in governing cell polarity, embryonic development, formation of neural synapses, cell proliferation, and many other processes in developing and adult organisms. These processes occur as a result of one of three signaling pathways: the canonical Wnt/β-catenin pathway, the Wnt/calcium pathway, and the planar cell polarity (PCP) pathway. Mutations in the human frizzled-4 receptor have been linked to familial exudative vitreoretinopathy, a rare disease affecting the retina at the back of the eye, and the vitreous, the clear fluid inside the eye. The frizzled (fz) locus of Drosophila coordinates the cytoskeletons of epidermal cells, producing a parallel array of cuticular hairs and bristles. In fz mutants, the orientation of individual hairs with respect both to their neighbours and to the organism as a whole is altered. In the wild-type wing, all hairs point towards the distal tip. In the developing wing, Fz has 2 functions: it is required for the proximal-distal transmission of an intracellular polarity signal; and it is required for cells to respond to the polarity signal. Fz produces an mRNA that encodes an integral membrane protein with 7 putative transmembrane (TM) domains. This protein should contain both extracellular and cytoplasmic domains, which could function in the transmission and interpretation of polarity information. This signature is usually found downstream of the Fz domain () Cysteine-rich domain Frizzled proteins include a cysteine-rich domain that is conserved in diverse proteins, including several receptor tyrosine kinases. 
In Drosophila melanogaster, members of the Frizzled family of tissue-polarity genes encode proteins that appear to function as cell-surface receptors for Wnts. The Frizzled genes belong to the seven transmembrane class of receptors (7TMR) and have in their extracellular region a cysteine-rich domain that has been implicated as the Wnt binding domain. The cysteine-rich domain of Frizzled shows sequence similarity to several receptor tyrosine kinases that have roles in development, including the muscle-specific receptor tyrosine kinase (MuSK), the neuronal-specific kinase (NSK2), and ROR1 and ROR2. The structure of this domain is known and is composed mainly of alpha helices. This domain contains ten conserved cysteines that form five disulphide bridges. Group members The following is a list of the ten known human frizzled receptors: Frizzled-1 () Frizzled-2 () Frizzled-3 () Frizzled-4 () Frizzled-5 () Frizzled-6 () Frizzled-7 () Frizzled-8 () Frizzled-9 () Frizzled-10 () As a drug target Vantictumab is a monoclonal antibody against five frizzled receptors that is under development for the treatment of cancer. See also Smoothened References External links G protein-coupled receptors
Frizzled
Chemistry
773
68,585,633
https://en.wikipedia.org/wiki/Laundry-folding%20machine
A laundry-folding machine or laundry-folding robot is a machine or domestic robot that folds apparel so that it can be stored compactly and in an orderly way. A laundry-folding machine can be a part of, or integrated with, a washing machine, clothes dryer, ironing machine and/or wardrobe. Some operate these processes autonomously, while others require varying degrees of manual intervention. Industrial use For industrial use, there are several types of laundry folding machines in different sizes and varieties, some of which are very specialized for certain types of clothing, or very large so as to be able to fold large textiles such as bedding. Domestic use There have been several attempts to produce commercial clothes folding machines for home use. FoldiMate was an American company founded in 2010, which presented a prototype of a clothes folding machine first in 2016, then at the Consumer Electronics Show in 2017, and an updated prototype at CES in 2018. The garments had to be fed manually into the machine from the top, one at a time, and after a few minutes the user could pick up folded clothes at the bottom. The goal was for FoldiMate to enter the market by the end of 2019, but in July 2021 it became clear that the company would cease its operations. Laundroid was a Japanese combined washing and folding machine that aimed to wash, dry, iron and fold clothes, and then transport them to an integrated wardrobe, completely autonomously, with the whole process estimated to run overnight. It was first shown at the consumer electronics exhibition CEATEC in 2015, and was marketed as the world's first robot that could wash and fold clothes. The goal was for Laundroid to enter the market in 2017 (later adjusted to 2019), but in 2019 the company behind Laundroid, Seven Dreams, announced that it had gone bankrupt. The development had been supported by, among others, Daiwa House and Panasonic. 
See also Clothes dryer Clothes horse Clothes hanger Smart home Walk-in closet Washing machine References Rotating machines Folding machine Domestic robots
Laundry-folding machine
Physics,Technology
409
5,607,556
https://en.wikipedia.org/wiki/Minot%27s%20Ledge%20Light
Minot's Ledge Light, officially Minots Ledge Light, is a lighthouse on Minots Ledge, one mile offshore of the towns of Cohasset and Scituate, Massachusetts, to the southeast of Boston Harbor. The current lighthouse is the second on the site, the first having been washed away in a storm after only a few months of use. First lighthouse In 1843, lighthouse inspector I. W. P. Lewis compiled a report on Minots Ledge, showing that more than 40 vessels had been lost due to striking the ledge from 1832 to 1841, with serious loss of life and damage to property. The most dramatic incident was the sinking of the ship "St John" in October 1849 with ninety-nine Irish immigrants, who all drowned within sight of their new homeland. It was initially proposed to build a lighthouse similar to John Smeaton's pioneering Eddystone Lighthouse, situated off the south-west coast of England. However, Captain William H. Swift, who was put in charge of planning the tower, believed it impossible to build such a tower on the mostly submerged ledge. Instead he successfully argued for an iron pile light, a spidery structure drilled into the rock. The first Minot's Ledge Lighthouse was built between 1847 and 1850, and was lighted for the first time on January 1, 1850. One night in April 1851, the new lighthouse was struck by a major storm that caused damage throughout the Boston area. The following day only a few bent pilings were found on the rock. The two assistant keepers who had been tending the lighthouse at the time had died at their posts. The current lighthouse Until 1863 the design and construction of lighthouses was the responsibility of the Corps of Topographical Engineers; this resulted in a rivalry with the longer-established Army Corps of Engineers, which built fortifications and had responsibility, as it does today, for waterway improvements. The Chief Engineer of the Army Corps of Engineers, Joseph G. 
Totten, personally took charge of the project to design and construct a permanent lighthouse on Minot's Ledge. Totten's design was as simple as it was effective. With extensive experience building fortifications, Totten fully appreciated the permanency and strength of granite construction. He designed the lighthouse so that its first 40 feet would be a solid granite base weighing thousands of tons. To secure the lighthouse to the ledge, he had several massive iron pins emplaced so that the lighthouse would be literally pinned to the ledge by its own weight. Working on the ledge could take place only when it was exposed at low tide and the sea was calm, so construction took years. Work started on the current lighthouse in 1855, and it was completed and first lit on November 15, 1860. With a final cost of $300,000, it was the most expensive lighthouse constructed in the United States to that date. The lighthouse is built of large and heavy dovetailed granite blocks, which were cut and dressed ashore in Quincy and taken to the ledge by ship. The lighthouse was equipped with a third-order Fresnel lens. The light signal, a 1-4-3 flashing cycle adopted in 1894, is locally referred to as "I LOVE YOU" (1-4-3 being the number of letters in each word of that phrase), and it is often cited as such by romantic couples within its range. Minots Ledge Light was automated in 1947. Historical information The following is taken from the Coast Guard Historian's website: Minot's Ledge Lighthouse keepers in 1940: George H. Fitzpatrick, Perc A. Evans, Patrick J. Bridy Minot's Ledge Lighthouse was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1977. The light was added to the National Register of Historic Places in 1987 as Minot's Ledge Light. It was put up for sale under the National Historic Lighthouse Preservation Act in 2009. 
Nomenclature and location Officially, it is Minots Ledge Light, but the National Register listing calls it Minot's Ledge Light. There is a replica of the top section of the lighthouse on the shores of Cohasset Harbor, which can be viewed just outside the Cohasset Sailing Club. Strictly speaking, the structure on shore is not a replica: it is made from the stone and steel remnants of the original upper portion of the lighthouse, including the lamp chamber, which was wholly rebuilt in the late twentieth century; only the copper dome is a replica. The lighthouse is located about one mile off the coast of Scituate Neck. In Popular Culture An image of Minot's Ledge Light has featured prominently on the label of Cohasset Punch, a brand of liqueur popular in Chicago, from 1899 until its discontinuation in the late 1980s. The brand was revived in 2024 and features a new illustration of Minot's Ledge Light. See also Government Island Historic District, the Cohasset land station associated with the lighthouse National Register of Historic Places in Plymouth County, Massachusetts References External links Minot's Ledge poem. Fitz-James O'Brien, Harper's New Monthly Magazine, April 1861. audio recording, 2006, Public Domain. Scituate, Massachusetts Collapsed buildings and structures in the United States Disasters in Massachusetts 1851 disasters in the United States Historic Civil Engineering Landmarks Lighthouses completed in 1850 Lighthouses completed in 1860 Lighthouses in Plymouth County, Massachusetts Lighthouses on the National Register of Historic Places in Massachusetts
Minot's Ledge Light
Engineering
1,106
78,170,816
https://en.wikipedia.org/wiki/Neurosemiotics
Neurosemiotics is an area of science which studies the neural aspects of meaning making. It interconnects neurobiology, biosemiotics and cognitive semiotics. Neurolinguistics, neuropsychology and neurosemantics can be seen as parts of neurosemiotics. Description The pioneers of neurosemiotics include Jakob von Uexküll, Kurt Goldstein, Friedrich Rothschild, and others. The first graduate courses on neurosemiotics have been taught at some American and Canadian universities since the 1970s; the term 'neurosemiotics' itself is not much older. Neurosemiotics identifies the conditions and processes necessary for semiosis in neural tissue. It also describes the differences in the complexity of meaning making among animals whose nervous systems and brains differ in complexity. See also Semiotics Zoosemiotics References Semiotics Neuroscience
Neurosemiotics
Biology
199
515,408
https://en.wikipedia.org/wiki/Optically%20stimulated%20luminescence
In physics, optically stimulated luminescence (OSL) is a method for measuring doses from ionizing radiation. It is used in at least two applications: Luminescence dating of ancient materials: mainly geological sediments and sometimes fired pottery, bricks etc., although in the latter case thermoluminescence dating is used more often Radiation dosimetry, which is the measurement of accumulated radiation dose in the tissues of health care, nuclear, research and other workers, as well as in building materials in regions of nuclear disaster The method makes use of electrons trapped between the valence and conduction bands in the crystalline structure of certain minerals (most commonly quartz and feldspar). The trapping sites are imperfections of the lattice — impurities or defects. The ionizing radiation produces electron-hole pairs: Electrons are in the conduction band and holes in the valence band. The electrons that have been excited to the conduction band may become entrapped in the electron or hole traps. Under the stimulation of light, the electrons may free themselves from the trap and get into the conduction band. From the conduction band, they may recombine with holes trapped in hole traps. If the centre with the hole is a luminescence center (radiative recombination centre), emission of light will occur. The photons are detected using a photomultiplier tube. The signal from the tube is then used to calculate the dose that the material had absorbed. The OSL dosimeter provides a new degree of sensitivity by giving an accurate reading as low as 1 mrem for x-ray and gamma ray photons with energies ranging from 5 keV to greater than 40 MeV. The OSL dosimeter's maximum equivalent dose measurement for x-ray and gamma ray photons is 1000 rem. For beta particles with energies from 150 keV to in excess of 10 MeV, dose measurement ranges from 10 mrem to 1000 rem. 
Neutron radiation with energies of 40 keV to greater than 35 MeV has a dose measurement range from 20 mrem to 25 rem. In diagnostic imaging, the increased sensitivity of the OSL dosimeter makes it ideal for monitoring employees working in low-radiation environments and for pregnant workers. To carry out OSL dating, mineral grains have to be extracted from the sample. Most commonly these are so-called coarse grains of 100-200 μm or fine grains of 4-11 μm. Occasionally other grain sizes are used. The difference between radiocarbon dating and OSL is that the former is used to date organic materials, while the latter is used to date minerals. Events that can be dated using OSL are, for example, the mineral's last exposure to sunlight; Mungo Man, Australia's oldest human find, was dated in this manner. It is also used for dating the deposition of geological sediments after they have been transported by air (aeolian sediments) or rivers (fluvial sediments). In archaeology, OSL dating is applied to ceramics: The dated event is the time of their last heating to a high temperature (in excess of 400 °C). Recent OSL dating of stone tools in Arabia pushed the "out-of-Africa" date hypothesis of human migration back 50,000 years and added a possible path of migration from the African continent to the Arabian peninsula instead of through Europe. The most widely-used OSL method is called single-aliquot regeneration (SAR). References Particle detectors Dating methodologies in archaeology
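In the dating application, what the OSL measurement actually yields is an equivalent dose; the age then follows from the standard relation age = equivalent dose / environmental dose rate. The numeric values below are illustrative, not from any particular study:

```python
# Standard OSL age relation: burial age = equivalent dose / environmental dose rate.
# The dose and dose-rate values here are hypothetical example inputs.

def osl_age_ka(equivalent_dose_gy: float, dose_rate_gy_per_ka: float) -> float:
    """Time since the trapped-charge clock was last reset, in thousands of years."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# A sediment that accumulated 75 Gy in surroundings delivering 1.5 Gy per ka
print(osl_age_ka(75.0, 1.5))  # 50.0 -> last exposed to sunlight ~50,000 years ago
```

The equivalent dose comes from the luminescence signal (e.g. via the SAR protocol mentioned above), while the dose rate is estimated separately from the radioactivity of the surrounding sediment.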
Optically stimulated luminescence
Technology,Engineering
723
344,142
https://en.wikipedia.org/wiki/Audio%20frequency
An audio frequency or audible frequency (AF) is a periodic vibration whose frequency is audible to the average human. The SI unit of frequency is the hertz (Hz). It is the property of sound that most determines pitch. The generally accepted standard hearing range for humans is 20 to 20,000 Hz. In air at atmospheric pressure, these represent sound waves with wavelengths of to . Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. Sound frequencies above 20 kHz are called ultrasonic. Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, frequency components of a sound determine its "color", its timbre. When speaking about the frequency (in singular) of a sound, it means the property that most determines its pitch. Higher pitches have higher frequency, and lower pitches are lower frequency. The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz. In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency. Frequencies and descriptions See also Absolute threshold of hearing Hypersonic effect, controversial claim for human perception above 20,000 Hz Loudspeaker Musical acoustics Piano key frequencies Scientific pitch notation Whistle register References Acoustics Sound Sound measurements Physical quantities Audio engineering
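The relationship between frequency and wavelength stated above is just λ = v / f. A small sketch, assuming the commonly used 343 m/s for the speed of sound in air at about 20 °C:

```python
# Wavelength of a sound wave: lambda = v / f.
# 343 m/s (air at ~20 deg C) is an assumed standard value; it varies with temperature.

SPEED_OF_SOUND_M_S = 343.0

def wavelength_m(frequency_hz: float) -> float:
    return SPEED_OF_SOUND_M_S / frequency_hz

print(round(wavelength_m(20), 2))             # ~17.15 m at the low end of hearing
print(round(wavelength_m(20_000) * 1000, 2))  # ~17.15 mm at the high end
```

This also illustrates why wavelength is approximately inversely proportional to frequency in air: the speed of sound there is nearly independent of frequency.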
Audio frequency
Physics,Mathematics,Engineering
343
319,545
https://en.wikipedia.org/wiki/Messier%2014
Messier 14 (also known as M14 or NGC 6402) is a globular cluster of stars in the constellation Ophiuchus. It was discovered by Charles Messier in 1764. At a distance of about 30,000 light-years, M14 contains several hundred thousand stars. At an apparent magnitude of +7.6 it can be easily observed with binoculars. Medium-sized telescopes will show some hint of the individual stars, of which the brightest is of magnitude +14. The total luminosity of M14 is of the order of 400,000 times that of the Sun, corresponding to an absolute magnitude of -9.12. The shape of the cluster is decidedly elongated. M14 is about 100 light-years across. A total of 70 variable stars are known in M14, many of the W Virginis variety common in globular clusters. In 1938, a nova appeared, although this was not discovered until photographic plates from that time were studied in 1964. It is estimated that the nova reached a maximum brightness of magnitude +9.2, over five times brighter than the brightest 'normal' star in the cluster. Slightly over 3° southwest of M14 lies the faint globular cluster NGC 6366. Gallery See also List of Messier objects References External links SEDS Messier pages on M14 M14, Galactic Globular Clusters Database page - one of the two being M14 Messier 014 Messier 014 014 Messier 014 17640601 Discoveries by Charles Messier
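The quoted luminosity and absolute magnitude are consistent with each other, which can be checked with the standard magnitude–luminosity relation L/L☉ = 10^((M☉ − M)/2.5). Taking +4.83 for the Sun's absolute visual magnitude (an assumption of this sketch):

```python
# Consistency check: an absolute magnitude of -9.12 corresponds to
# roughly 400,000 solar luminosities. M_SUN = +4.83 (solar absolute
# visual magnitude) is an assumed reference value.

M_SUN = 4.83

def luminosity_in_suns(abs_magnitude: float) -> float:
    return 10 ** ((M_SUN - abs_magnitude) / 2.5)

print(f"{luminosity_in_suns(-9.12):,.0f}")  # ~380,000 solar luminosities
```

The result, about 3.8 × 10⁵ L☉, matches the article's "of the order of 400,000" figure.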
Messier 14
Astronomy
312
21,151,426
https://en.wikipedia.org/wiki/OGLE-TR-111b
OGLE-TR-111b is an extrasolar planet approximately 5,000 light-years away in the constellation of Carina (the Keel). It is currently the only confirmed planet orbiting the star OGLE-TR-111, though a second, unconfirmed planet may exist. In 2002 the Optical Gravitational Lensing Experiment (OGLE) survey detected that the light from the star dimmed very slightly every 4 days, indicating a planet-sized body transiting the star. But since the mass of the object had not been measured, it was not clear whether it was a true planet, a low-mass red dwarf, or something else. In 2004 radial velocity measurements showed unambiguously that the transiting body is indeed a planet. The planet is probably very similar to the other hot Jupiters orbiting nearby stars. Its mass is about half that of Jupiter and it orbits the star at a distance less than 1/20th that of Earth from the Sun. OGLE-TR-111b has a similar mass and orbital distance to the first known transiting planet, HD 209458 b (Osiris). But unlike that planet, it has a radius comparable to Jupiter's, which is typical of the other transiting planets detected by OGLE. However, those other planets tend to be more massive and orbit even closer than typical hot Jupiters. Therefore, this planet is an important "missing link" between the different types of transiting planets. See also HD 209458 b Lists of exoplanets OGLE-2005-BLG-390Lb OGLE-TR-10b OGLE-TR-56b OGLE-TR-113b OGLE2-TR-L9b Optical Gravitational Lensing Experiment WASP-11b/HAT-P-10b References External links Hot Jupiters Transiting exoplanets Exoplanets discovered in 2002 Giant planets Carina (constellation)
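The "very slight" periodic dimming that OGLE detected is set by the ratio of planetary to stellar disk area: transit depth ≈ (R_planet / R_star)². The radii below are illustrative assumptions (a Jupiter-sized planet around a somewhat smaller-than-Sun star), not the measured values for OGLE-TR-111:

```python
# Transit depth sketch: fractional dimming = (R_planet / R_star)**2.
# The planet and star radii used here are hypothetical example values.

R_JUP_IN_RSUN = 0.1005  # Jupiter's mean radius in solar radii

def transit_depth(r_planet_rjup: float, r_star_rsun: float) -> float:
    return (r_planet_rjup * R_JUP_IN_RSUN / r_star_rsun) ** 2

print(f"{transit_depth(1.0, 0.83):.2%}")  # ~1.5% drop in brightness
```

A dip of order one percent is detectable photometrically but, as the article notes, says nothing about mass, which is why radial velocity follow-up was needed to confirm the planetary nature of the companion.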
OGLE-TR-111b
Astronomy
388
53,814,610
https://en.wikipedia.org/wiki/Alpha%20and%20beta%20male
Alpha male and beta male are pseudoscientific terms for men derived from the designations of alpha and beta animals in ethology. They may also be used of other genders, such as women, or with other letters of the Greek alphabet (such as omega). The popularization of these terms to describe humans has been widely criticized by scientists. Both terms have been frequently used in internet memes. The term beta is used as a pejorative self-identifier among some members of the manosphere, particularly incels, who do not believe they are assertive or traditionally masculine, and feel overlooked by women. It is also used to negatively describe other men who are not deemed to be assertive, particularly with women. In Internet culture, the term sigma male is also frequently used, gaining popularity, but has since been used jokingly, often alongside incel. History The terms were used almost solely in animal ethology prior to the 1990s, particularly in regard to mating privileges with females, ability to hold territory, and hierarchy in terms of food consumption within their herd or flock. In animal ethology, beta refers to an animal that is subordinate to higher-ranking members in the social hierarchy, thus having to wait to eat and having fewer or negligible opportunities for copulation. In the 1982 book Chimpanzee Politics: Power and Sex Among Apes, primatologist and ethologist Frans de Waal suggested that his observations of a chimpanzee colony could possibly be applied to human interactions. Some commentary on the book, including in the Chicago Tribune, discussed its parallels to human power hierarchies. In the early 1990s, some media outlets began to use the term alpha to refer to humans, specifically to "manly" men who excelled in business. 
Journalist Jesse Singal, writing in New York magazine, attributes the popular awareness of the terms to a 1999 Time magazine article, which described an opinion held by Naomi Wolf, who was at the time an advisor to then-presidential candidate Al Gore: "Wolf has argued internally that Gore is a 'Beta male' who needs to take on the 'Alpha male' in the Oval Office before the public will see him as the top dog." Singal also credits Neil Strauss's bestselling 2005 book on pickup artistry, titled The Game, for popularizing alpha male as an aspirational ideal. Usage The view that there is a dominance hierarchy among humans consisting of "alpha males" and "beta males" is sometimes reported in the mainstream media. The term alpha male is often applied to any dominating man, especially bullies, despite the fact that dominating behavior is rarely seen as a positive trait for either an ideal date or a romantic partner. Claims about women being "hard-wired" to desire "alpha males" are seen by experts as misogynistic and stereotypical, and are not supported by research. Evolutionary psychologists who study human mating behavior instead believe that humans use two distinct strategies, dominance and prestige, for climbing social hierarchies, and that prestige plays a significantly more important role in establishing men's attractiveness to women than does dominance. Cognitive scientist Scott Barry Kaufman summarizes: Taken together, the research suggests that the ideal man (for a date or romantic partner) is one who is assertive, confident, easygoing, and sensitive, without being aggressive, demanding, dominant, quiet, shy, or submissive. In other words, a prestigious man, not a dominant man. In fact, it appears that the prestigious man who is high in both assertiveness and kindness is considered the most attractive to women for both short-term affairs and long-term relationships.
Misconceptions about "alpha males" are common within the manosphere, a collection of websites, blogs, and online forums promoting masculinity, strong opposition to feminism, and misogyny which includes movements such as the men's rights movement, incels (involuntary celibates), Men Going Their Own Way (MGTOW), pick-up artists (PUA), and fathers' rights groups. The term beta is also often used among manosphere communities to refer to men they consider easily taken advantage of or ignored by women. Its usage is inconsistent; media studies scholar Debbie Ging has described the communities' theories about "alpha, beta, omega, and zeta masculinity" as "confused and contradictory". Beta is sometimes used as a self-identifier among men who do not embody hegemonic masculinity. It is also sometimes used by manospherians as a pejorative term for men who are or are perceived to be feminist, or who are thought to be acting as "". Some manosphere groups refer to members of other groups in the manosphere as betas; for example, members of the MGTOW community sometimes use it to refer to men's rights activists or incels. Members of the pickup artist (PUA) communities use it to refer to men who cannot seduce women. Similar terms used by the manosphere communities include nice guy, cuck, simp, and soy boy. Related terms "Alpha fux beta bux" In the manosphere, the term alpha fux beta bux presupposes a sexual strategy of hypergamy or "marrying up" among women whereby they prefer and have sex with "alpha" males but settle for less attractive "beta" males for financial reasons. Sometimes it expresses a belief that women marry beta males to exploit them financially, while continuing to have extramarital sex with alpha males. Ging explains these beliefs as an effort by young men in the Western world to cope with their limited economic prospects following the 2007–2008 financial crisis by appealing to gender-essentialist notions of gold-digging women popular in postfeminist culture.
Beta orbiter A beta orbiter is a beta male who invests time and effort into mingling with women in the hope of eventually getting into a romantic relationship or having sex with them. The term earned some media attention in 2019 with the murder of Bianca Devins. A man killed the 17-year-old Devins and posted photographs of her body online, one of which bore the caption, "sorry fuckers, you're going to have to find somebody else to orbit." Beta uprising The term beta uprising or incel rebellion has been used largely among incels to refer to revenge by members of their community who have been overlooked by women. It is also sometimes used to describe a movement to overthrow what they view as an oppressive, feminist society. A 2018 vehicle-ramming attack in Toronto, Canada, was allegedly perpetrated by a man who had posted on his Facebook page just prior to the attack, "the Incel Rebellion has already begun". Media outlets have used the terms beta uprising and incel rebellion to refer to acts of violence perpetrated by members of manosphere communities, particularly incels. Sigma male Sigma male is an internet slang term to describe solitary, masculine men. The term gained prominence within Internet culture during the late 2010s and early 2020s, and has inspired numerous memes, graffiti and videos. It is used to denote a male who is equivalent to an alpha male but exists outside the alpha-beta male hierarchy as a "lone wolf". In the manosphere, it is regarded as the "rarest" kind of male. In 2023, #sigma gained over 46 billion views on the social media platform TikTok. The term first appeared in a blog post by American writer Vox Day. Later, California plastic surgeon John T. Alexander published the book The Sigma Male: What Women Really Want. In 2018, the term appeared on YouTube and in 2021 it went viral after a tweet by Lily Simpson.
The term sigma male has also taken on an ironic and satirical meaning, often mocking the concept of the "manosphere" and the ideas of hustle culture, with bizarre and nonsensical actions being considered part of the sigma male mindset or "grindset". On social media, the term is often used to describe idealistic, masculine fictional characters from films and TV shows. Notably, actor Christian Bale's portrayal of the character Patrick Bateman from the 2000 film American Psycho is often cited as an ideal representation of a "sigma male", both through memes and unironic discussion. Beth Skwarecki, health editor of the weblog Lifehacker, described the sigma male as a "bullshit concept from the incel world." Due to the term's attribution to fictional film characters, it has been highlighted as promoting unrealistic personality and beauty standards. See also Chad (slang) Internet slang Neckbeard (slang) Omegaverse Toxic masculinity References 2010s slang 2020s slang Generation Alpha slang Incel subculture Internet culture Stereotypes of men Obsolete biology theories
Alpha and beta male
Biology
1,820
24,867,126
https://en.wikipedia.org/wiki/Verizon%20Droid
The Droid series of phones are exclusive to Verizon Wireless. The branding "Droid" is a registered trademark of Lucasfilm that is licensed for Verizon's use. Many of these phones are also sold in other countries under different names (for example, the Motorola Droid is called Milestone in Europe); see the individual articles for details. History It was announced on July 23, 2013 that Motorola would exclusively manufacture Droid phones for Verizon. Phones HTC Droid Eris – Released on November 6, 2009 Motorola Droid – Released November 6, 2009 HTC Droid Incredible – Released on April 29, 2010 Motorola Droid X – Released July 15, 2010 Motorola Droid 2 – Released August 12, 2010 Motorola Droid Pro – Optimized for business users, released November 18, 2010 HTC Droid Incredible 2 – Released April 2011 Samsung Droid Charge – Released May 14, 2011 Motorola Droid X2 – Released May 19, 2011 Motorola Droid 3 – Released July 7, 2011, shipping with 2.3.6 Gingerbread Motorola Droid Bionic – Released September 8, 2011 Motorola Droid Razr – Released November 11, 2011 Motorola Droid Razr Maxx – Shipping January 26, 2012 Motorola Droid 4 – Released February 10, 2012 HTC Droid Incredible 4G LTE – Released July 5, 2012 Motorola Droid Razr M – Released September 2012 Motorola Droid Razr HD – Released October 18, 2012 Motorola Droid Razr HD Maxx – Released October 18, 2012 HTC Droid DNA – Released November 21, 2012 Motorola Droid Maxx – Released July 23, 2013 Motorola Droid Ultra – Released July 23, 2013 Motorola Droid Mini – Released July 23, 2013 Motorola Droid Turbo – Released October 28, 2014 Motorola Droid Maxx 2 – Released October 27, 2015 Motorola Droid Turbo 2 – Released October 27, 2015 Moto Z Droid – Released July 2016 Moto Z Force Droid – Released July 2016 Moto Z Play Droid – Released September 8, 2016 References External links - DroidDoes Verizon Wireless
Verizon Droid
Technology
441
6,871,815
https://en.wikipedia.org/wiki/Thrombin%20time
The thrombin time (TT), also known as the thrombin clotting time (TCT), is a blood test that measures the time it takes for a clot to form in the plasma of a blood sample containing anticoagulant, after an excess of thrombin has been added. It is used to diagnose blood coagulation disorders and to assess the effectiveness of fibrinolytic therapy. This test is repeated with pooled plasma from normal patients. The difference in time between the test and the 'normal' indicates an abnormality in the conversion of fibrinogen (a soluble protein) to fibrin, an insoluble protein. The thrombin time compares the rate of clot formation to that of a sample of normal pooled plasma. Thrombin is added to the samples of plasma. If the time it takes for the plasma to clot is prolonged, a quantitative (fibrinogen deficiency) or qualitative (dysfunctional fibrinogen) defect is present. In blood samples suspected to contain heparin, a substance derived from snake venom called batroxobin (formerly reptilase) is used for comparison to thrombin time. Batroxobin has a similar action to thrombin but unlike thrombin it is not inhibited by heparin, so reptilase time and thrombin time can be used concurrently to distinguish anticoagulant effect from hypofibrinogenemia or dysfibrinogenemia. Normal values for thrombin time may be 12 to 14 seconds, but the test has significant reagent variability. If batroxobin is used, the time should be between 15 and 20 seconds. Thrombin time can be prolonged by heparin, fibrin degradation products, and fibrinogen deficiency or abnormality. Thrombin time is not affected by anti-Xa anticoagulants such as rivaroxaban or apixaban, but is very sensitive to direct thrombin inhibitors including dabigatran, argatroban, and bivalirudin. Test procedure After separating the plasma from the whole blood by centrifugation, bovine thrombin is added to the sample of plasma. Clot formation is detected optically or mechanically by a coagulation instrument. 
The time between the addition of the thrombin and the clot formation is recorded as the thrombin clotting time. Specimen requirements Whole blood is taken with either citrate or oxalate additive (if using the vacutainer system, this is a light blue top tube). As with other coagulation assays, the tube must not be over- or under-filled in order to ensure the correct anticoagulant-to-blood ratio: one part anticoagulant per nine parts blood. Reference ranges The reference range of the thrombin clotting time is generally <22 seconds, and often from 14 to 16 seconds. Laboratories usually calculate their own ranges, based on the method used and the results obtained from healthy individuals from the local population. Variability arises from differences in thrombin concentration, dilution of plasma, presence and/or concentration of calcium ions, as well as the influence of analyser type. TT may also be sensitive to citrate pH. Separate ranges are used for infants. Limitations Blood samples that are more than eight hours old can give inaccurate results when tested. See also Coagulation cascade Partial thromboplastin time (PTT), or activated partial thromboplastin time (aPTT or APTT) Prothrombin time (PT) References External links http://www.fpnotebook.com/hemeonc/lab/ThrmbnTm.htm Blood tests
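The 1:9 anticoagulant-to-blood ratio noted above amounts to a simple proportion check on the draw. A minimal sketch; the tube volumes and acceptance tolerance are illustrative assumptions, not laboratory specifications:

```python
def citrate_volume_ml(total_draw_ml):
    """One part citrate per nine parts blood: citrate is 1/10 of the total draw."""
    return total_draw_ml / 10

def fill_ratio_ok(blood_ml, citrate_ml, tolerance=0.5):
    """Accept a sample only if the blood:citrate ratio is close to 9:1."""
    return abs(blood_ml / citrate_ml - 9.0) <= tolerance

print(citrate_volume_ml(4.5))      # 0.45
print(fill_ratio_ok(4.05, 0.45))   # True  (exactly 9:1)
print(fill_ratio_ok(3.0, 0.45))    # False (under-filled tube)
```

Because the citrate volume in the tube is fixed, an under-filled tube skews the ratio toward excess anticoagulant, which is why such samples are rejected.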
Thrombin time
Chemistry
805
32,170,642
https://en.wikipedia.org/wiki/Iann%20Barron
Iann Marchant Barron (16 June 1936 – 16 May 2022) was a British computer engineer and entrepreneur. Biography During vacation work in 1956–57 at Elliott Brothers while still at Cambridge he designed the Elliott 803. On leaving University he joined the Civil Service in 1958 as a Scientific Officer on special assignment first to the Army Operational Research Group, and in 1960 to the Air Ministry. He returned to the company now called Elliott Automation as a Project Leader for the Elliott 502 computer team, later becoming the company's Head of System Research. In 1965 Barron left Elliott Automation to become Founder and Managing Director of Computer Technology Limited, where the Modular One range of computer systems was developed. In the mid-1970s he formed a new company, Microcomputer Analysis Ltd, which offered consultancy on microprocessors to the semiconductor industry. This brought him into contact with two eminent American semiconductor specialists, Richard Petritz and Paul Schroeder, and in 1978 the triumvirate founded Inmos International PLC, which produced the innovative transputer, and led to the development of SpaceWire. Barron was elected a Distinguished Fellow of the British Computer Society (DFBCS) in 1986 and was appointed CBE in the 1994 New Year Honours. Barron died on 16 May 2022, at the age of 85. References Bibliography "The Origins of SpaceWire", Paul Walker "The Inmos Legacy", Dick Selwood, August 2007, inmos.com "In Barron's Court", IEE Review, 16 January 1997 "Transputer-> Forgotten Futures", 1998 USENET discussion in comp.sys.transputer 1936 births 2022 deaths British computer scientists Commanders of the Order of the British Empire Fellows of the British Computer Society History of computing in the United Kingdom People educated at University College School
Iann Barron
Technology
373
74,823,213
https://en.wikipedia.org/wiki/Bhojpuri%20numerals
Bhojpuri number words include numerals and other words derived from them, along with words borrowed from other languages. Cardinal numbers Base numbers 1-99 The Old Bhojpuri word for twenty is kor̤ī, which is still used in Trinidadian Bhojpuri. In Western Standard Bhojpuri, egara and baara end with "e" instead of "a"; hence egare, baare, tere, etc. are used up to eighteen. The word for hundred in Bhojpuri is Sai. Higher numbers The word for thousand is Hajār, which is a Persian loanword; the Old Bhojpuri word is Sahas. The word for one hundred thousand is Lākh. Numbers above one hundred are formed by subjoining the lower number to the higher ones. Base 20 counting A counting system with 20 as a base is also used in Bhojpuri. Hence, 65 is expressed as (3×20)+5, i.e. Teen Bees/Kori aa Panch. Numbers slightly less than twenty are sometimes also expressed in terms of twenty; for example, eighteen can be expressed as Du Kam Bees/Kori. Ordinals The first four ordinals are: The rest of the ordinals are made by adding -wā to the cardinals, e.g. pachwā (fifth). Multiplicative numerals Multiplicatives are formed by adding hālī, hālā, ber, beri, tor, torī to the numbers. Notes References numerals Numerals
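The base-20 decomposition described above (65 = (3×20)+5, Teen Bees/Kori aa Panch) can be sketched mechanically. The romanized words and the aa connector come from the text; gluing them together with plain string formatting is an illustrative simplification of real Bhojpuri compounding:

```python
def kori_form(n):
    """Decompose n as (k, r) with n = k*20 + r, the 'kori' (score) counting base."""
    return divmod(n, 20)

def describe(n):
    """Render n as '<k> kori aa <r>', e.g. 65 -> '3 kori aa 5' (Teen Kori aa Panch)."""
    k, r = kori_form(n)
    parts = []
    if k:
        parts.append(f"{k} kori")
    if r:
        parts.append(str(r))
    return " aa ".join(parts) if parts else "0"

print(kori_form(65))  # (3, 5)
print(describe(65))   # 3 kori aa 5
```

Note that this sketch only covers the additive pattern; the subtractive pattern the text mentions (eighteen as Du Kam Bees, in terms of twenty) would need a separate rule.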
Bhojpuri numerals
Mathematics
332
32,929,531
https://en.wikipedia.org/wiki/Astraeus%20pteridis
Astraeus pteridis, commonly known as the giant hygroscopic earthstar, is a species of false earthstar in the family Diplocystaceae. It was described by American mycologist Cornelius Lott Shear in 1902 under the name Scleroderma pteridis. Sanford Myron Zeller transferred it to Astraeus in a 1948 publication. It is found in North America. A. pteridis was previously frequently confused with the supposedly cosmopolitan A. hygrometricus, now shown to be found only in Europe. Distribution A molecular phylogenetic study from 2013 resulted in the application of the name A. pteridis to the larger Astraeus found in the Pacific Northwest region of North America. A. pteridis has also been found in the Canary Islands, Madeira, and Argentina, which share historical connections to Lusitania. It may be widely distributed or have been translocated. Morphology A. pteridis closely resembles A. hygrometricus, though it is larger, reaching 5 to 15 cm (2.0 to 5.9 in) or more when expanded, and often has a more pronounced areolate pattern on the inner surface of the rays. Within Astraeus, A. pteridis is most closely related to A. morganii. Like other Astraeus, it is hygroscopic, with rays expanding in humid conditions and closing in arid conditions. It is not typically considered edible. References External links Boletales Fungi described in 1902 Fungi of North America Fungi of Macaronesia Fungus species
Astraeus pteridis
Biology
323
946,426
https://en.wikipedia.org/wiki/Amplitude-shift%20keying
Amplitude-shift keying (ASK) is a form of amplitude modulation that represents digital data as variations in the amplitude of a carrier wave. In an ASK system, a symbol, representing one or more bits, is sent by transmitting a fixed-amplitude carrier wave at a fixed frequency for a specific time duration. For example, if each symbol represents a single bit, then the carrier signal could be transmitted at nominal amplitude when the input value is 1, but transmitted at reduced amplitude or not at all when the input value is 0. Method Any digital modulation scheme uses a finite number of distinct signals to represent digital data. ASK uses a finite number of amplitudes, each assigned a unique pattern of binary digits. Usually, each amplitude encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular amplitude. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the amplitude of the received signal and maps it back to the symbol it represents, thus recovering the original data. Frequency and phase of the carrier are kept constant. Like AM, ASK is linear and sensitive to atmospheric noise, distortions, propagation conditions on different routes in PSTN, etc. Both ASK modulation and demodulation processes are relatively inexpensive. The ASK technique is also commonly used to transmit digital data over optical fiber. For LED transmitters, binary 1 is represented by a short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a fixed "bias" current that causes the device to emit a low light level. This low level represents binary 0, while a higher-amplitude lightwave represents binary 1. The simplest and most common form of ASK operates as a switch, using the presence of a carrier wave to indicate a binary one and its absence to indicate a binary zero.
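The switch-like scheme just described, carrier present for a 1 and absent for a 0, can be sketched end to end. The sample count, carrier cycles, and decision threshold below are illustrative assumptions, not parameters from any standard:

```python
import math

def ook_modulate(bits, samples_per_bit=16, cycles_per_bit=2, amplitude=1.0):
    """On-off keying: emit a sinusoidal carrier for a 1, silence for a 0."""
    signal = []
    for b in bits:
        for i in range(samples_per_bit):
            phase = 2 * math.pi * cycles_per_bit * i / samples_per_bit
            signal.append(amplitude * math.sin(phase) if b else 0.0)
    return signal

def ook_demodulate(signal, samples_per_bit=16, threshold=0.1):
    """Crude envelope detector: mean |sample| per symbol period, then threshold."""
    bits = []
    for k in range(0, len(signal), samples_per_bit):
        chunk = signal[k:k + samples_per_bit]
        envelope = sum(abs(s) for s in chunk) / len(chunk)
        bits.append(1 if envelope > threshold else 0)
    return bits

tx_bits = [1, 0, 1, 1, 0]
assert ook_demodulate(ook_modulate(tx_bits)) == tx_bits
```

With a noiseless channel the round trip recovers the bit sequence exactly; in practice the threshold would be set relative to the measured noise floor.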
This type of modulation is called on-off keying (OOK), and is used at radio frequencies to transmit Morse code (referred to as continuous wave operation). More sophisticated encoding schemes have been developed which represent data in groups using additional amplitude levels. For instance, a four-level encoding scheme can represent two bits with each shift in amplitude; an eight-level scheme can represent three bits; and so on. These forms of amplitude-shift keying require a high signal-to-noise ratio for their recovery, as by their nature much of the signal is transmitted at reduced power. An ASK system can be divided into three blocks: the first represents the transmitter, the second is a linear model of the effects of the channel, and the third shows the structure of the receiver. The following notation is used: ht(t) is the carrier signal for the transmission; hc(t) is the impulse response of the channel; n(t) is the noise introduced by the channel; hr(t) is the filter at the receiver; L is the number of levels that are used for transmission; Ts is the time between the generation of two symbols. Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A] and they are given by: vi = i·2A/(L − 1) − A, for i = 0, 1, ..., L − 1; the difference between one voltage and the next is: Δ = 2A/(L − 1). The symbols v[n] are generated randomly by the source S, then the impulse generator creates impulses with an area of v[n]. These impulses are sent to the filter ht to be sent through the channel. In other words, for each symbol a different carrier wave is sent with the relative amplitude. Out of the transmitter, the signal s(t) can be expressed in the form: s(t) = Σn v[n]·ht(t − nTs). In the receiver, after the filtering through hr(t), the signal is: z(t) = nr(t) + Σn v[n]·g(t − nTs), where we use the notation: nr(t) = n(t) * hr(t) and g(t) = ht(t) * hc(t) * hr(t), where * indicates the convolution between two signals.
After the A/D conversion the signal z[k] can be expressed in the form: z[k] = nr[k] + v[k]·g(0) + Σn≠k v[n]·g((k − n)Ts). In this relationship, the second term represents the symbol to be extracted. The others are unwanted: the first one is the effect of noise, the third one is due to the intersymbol interference. If the filters are chosen so that g(t) will satisfy the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so: z[k] = nr[k] + v[k]·g(0); the transmission will be affected only by noise. Probability of error The probability density function of having an error of a given size can be modelled by a Gaussian function; the mean value will be the relative sent value, and its variance will be given by: σN² = ∫ ΦN(f)·|Hr(f)|² df, where ΦN(f) is the spectral density of the noise within the band and Hr(f) is the continuous Fourier transform of the impulse response of the filter hr(t). The probability of making an error is given by: Pe = Σk P(e | vk)·P(vk), where, for example, P(e | v0) is the conditional probability of making an error given that a symbol v0 has been sent and P(v0) is the probability of sending a symbol v0. If the probability of sending any symbol is the same, then: P(vk) = 1/L. If we represent all the probability density functions on the same plot against the possible value of the voltage to be transmitted, we get a picture like this (the particular case of L = 4 is shown): The probability of making an error after a single symbol has been sent is the area of the Gaussian function falling under the functions for the other symbols. It is shown in cyan for just one of them. If we call P⁺ the area under one side of the Gaussian, the sum of all the areas will be: 2L·P⁺ − 2P⁺. The total probability of making an error can be expressed in the form: Pe = 2·(1 − 1/L)·P⁺. We now have to calculate the value of P⁺. In order to do that, we can move the origin of the reference wherever we want: the area below the function will not change.
It does not matter which Gaussian function we are considering; the area we want to calculate will be the same. The value we are looking for will be given by the following integral: P⁺ = (1/√(2π·σN²)) ∫ from A·g(0)/(L − 1) to ∞ of e^(−x²/(2σN²)) dx = (1/2)·erfc(A·g(0)/(√2·(L − 1)·σN)), where erfc(x) is the complementary error function. Putting all these results together, the probability to make an error is: Pe = (1 − 1/L)·erfc(A·g(0)/(√2·(L − 1)·σN)). From this formula we can easily understand that the probability to make an error decreases if the maximum amplitude of the transmitted signal or the amplification of the system becomes greater; on the other hand, it increases if the number of levels or the power of noise becomes greater. This relationship is valid when there is no intersymbol interference, i.e. g(t) is a Nyquist function. See also Frequency-shift keying (FSK) References External links Calculating the Sensitivity of an Amplitude Shift Keying (ASK) Receiver Quantized radio modulation modes Applied probability Fault tolerance
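The qualitative behavior described above (error probability falling with amplitude and gain, rising with more levels or more noise) can be checked numerically. The closed form used here, Pe = (1 − 1/L)·erfc(A·g(0)/(√2·(L − 1)·σN)), is the standard result for L equally spaced levels with Gaussian noise and no intersymbol interference, stated as an assumption rather than taken verbatim from this article:

```python
import math

def ask_error_probability(A, g0, L, sigma_n):
    """Pe = (1 - 1/L) * erfc(A*g0 / (sqrt(2) * (L - 1) * sigma_n)),
    for L equally spaced levels in [-A, A], Gaussian noise, no ISI."""
    return (1 - 1 / L) * math.erfc(A * g0 / (math.sqrt(2) * (L - 1) * sigma_n))

base = ask_error_probability(A=1.0, g0=1.0, L=4, sigma_n=0.2)
# Larger amplitude or gain -> fewer errors; more levels or noise -> more errors.
assert ask_error_probability(A=2.0, g0=1.0, L=4, sigma_n=0.2) < base
assert ask_error_probability(A=1.0, g0=2.0, L=4, sigma_n=0.2) < base
assert ask_error_probability(A=1.0, g0=1.0, L=8, sigma_n=0.2) > base
assert ask_error_probability(A=1.0, g0=1.0, L=4, sigma_n=0.4) > base
```

The assertions mirror the trade-off in the text: packing more amplitude levels into the same [−A, A] range shrinks the decision distance between neighbors, so the error rate rises unless the signal-to-noise ratio rises with it.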
Amplitude-shift keying
Mathematics,Engineering
1,365
5,198,214
https://en.wikipedia.org/wiki/Auwers%20synthesis
The Auwers synthesis is a series of organic reactions forming a flavonol from a coumarone. This reaction was first reported by Karl von Auwers in 1908. The first step in this procedure is an acid catalyzed aldol condensation between benzaldehyde and a 3-cyclooxapentanone to an o-hydroxychalcone. Bromination of the alkene group gives a dibromo-adduct which rearranges to the flavonol by reaction with potassium hydroxide. Mechanism A possible mechanism for the rearrangement step is shown below: See also Algar–Flynn–Oyamada reaction Allan–Robinson reaction References Name reactions Oxygen heterocycle forming reactions Ring expansion reactions Carbon-carbon bond forming reactions
Auwers synthesis
Chemistry
164
53,515,724
https://en.wikipedia.org/wiki/History%20of%20research%20on%20Arabidopsis%20thaliana
Arabidopsis thaliana is a first-class model organism and the single most important species for fundamental research in plant molecular genetics. A. thaliana was the first plant for which a high-quality reference genome sequence was determined, and a worldwide research community has developed many other genetic resources and tools. The experimental advantages of A. thaliana have enabled many important discoveries. These advantages have been extensively reviewed, as has its role in fundamental discoveries about the plant immune system, natural variation, root biology, and other areas. Early history A. thaliana was first described by Johannes Thal, and later renamed in his honor. (See the Taxonomy section of the main article.) Friedrich Laibach outlined why A. thaliana might be a good experimental system in 1943 and collected a large number of natural accessions. A. thaliana is largely self-pollinating, so these accessions represent inbred strains, with high homozygosity that simplifies genetic analysis. Natural A. thaliana accessions are often referred to as "ecotypes". Laibach had earlier (1907) determined the A. thaliana chromosome number (5) as part of his PhD research. Laibach's student Erna Reinholz described mutagenesis of A. thaliana with X-ray radiation in 1945. George Rédei pioneered the use of A. thaliana for fundamental studies, mutagenizing plants with ethyl methanesulfonate (EMS) and then screening them for auxotrophic defects and writing an influential review in 1975. Rédei distributed the standard laboratory accessions 'Columbia-0' and 'Landsberg erecta'. Gerhard Röbbelen organized the first International Arabidopsis Symposium in 1965. Röbbelen also started the 'Arabidopsis Information Service', a newsletter for sharing information in the community. This newsletter was maintained by A.R. Kranz starting in 1974, and was published until 1990.
Growing interest, 1975-1986 As molecular biology methods progressed, many investigators sought to focus community effort on a common model plant species such as petunia or tomato. This concept changed the emphasis of the long tradition of researchers using diverse agronomically important species such as maize, barley, and peas. The A. thaliana subcommunity espoused an ethos of freely sharing information and materials, and investigators were attracted by the perceived wide-open nature of plant molecular genetics relative to other fields that were better established and thus more "crowded" and competitive. The A. thaliana genome was shown to be relatively small and nonrepetitive, which was an important advantage for early molecular methods. Pioneering A. thaliana studies have used its natural filamentous pathogen Hyaloperonospora arabidopsidis, the model plant-pathogenic bacterium Pseudomonas syringae, and many other microbes. A. thaliana roots are transparent and have a relatively simple radially symmetric cellular structure, facilitating analysis by microscopy. Molecular cloning, 1986-2000 Cloning of an A. thaliana gene, an alcohol dehydrogenase-encoding locus, was described in 1986, by which time mutations at over 200 loci had been defined. Genetic linkage maps, QTL populations, and map-based cloning Development of genetic maps based on scorable phenotypes and molecular genetic markers facilitated map-based cloning of mutant loci from classical "forward genetic" screens. Growing amounts of DNA sequence data facilitated development and application of such molecular markers. Descriptions of the first successful map-based cloning projects were published in 1992. Recombinant inbred strain/line (RIL) populations were developed, notably from a cross of Columbia-0 × Landsberg erecta, and used to map and clone a wide variety of quantitative trait loci. Efficient genetic transformation A. thaliana can be genetically transformed using Agrobacterium tumefaciens; transformation was first reported in 1986. Later work showed that transgenic seed can be obtained by simply dipping flowers into a suitable bacterial suspension. The invention/discovery of this 'floral dip' method, published in 1998, made A. thaliana arguably the most easily transformed multicellular organism, and has been essential to many subsequent investigations. Efficient transformation facilitated insertional mutagenesis as described further below. Transcription factors and regulation Floral homeotic genes and the ABC model A. thaliana geneticists made important contributions to development of the ABC model of flower development via genetic analysis of floral homeotic mutants. Homeodomain genes The plant homeodomain finger is so named due to its discovery in an Arabidopsis homeodomain. In 1993 Schindler et al. discovered the PHD finger in the protein . It has since proven to be important to chromatin in a wide variety of taxa. KNOTTED-like homeobox genes, homologs of the maize KNOTTED1 gene that control shoot apical meristem identity, were described in 1994, and cloning of the SHOOT-MERISTEMLESS locus was published in 1996. Genome project An international consortium began developing a physical map for A. thaliana in 1990, and DNA sequencing and assembly efforts were formalized in the Arabidopsis Genome Initiative (AGI) in 1996. This work paralleled the Human Genome Project and related projects for other model organisms, including the budding yeast S. cerevisiae, the nematode C. elegans, and the fly Drosophila melanogaster, which were published in 1996, 1998, and 2000, respectively. The project built on efforts to sequence expressed sequence tags from A. thaliana. Descriptions of the sequences of chromosomes 4 and 2 were published in 1999, and the project was completed in 2000. This represented the first reference genome for a flowering plant and facilitated comparative genomics.
Functional and comparative genomics, 2000-2010 and beyond NSF 2010 project A series of meetings led to an ambitious long-term NSF-funded initiative to determine the function of every A. thaliana gene by the year 2010. The rationale for this project was to combine new high-throughput technologies with systematic gene-family-wide studies and community resources to accelerate progress beyond what was possible via piecemeal single-laboratory studies. Microarray and transcriptome analysis DNA microarray technology was rapidly adopted for A. thaliana research and led to the development of "atlases" of gene expression in different tissues and under different conditions. Large-scale "reverse genetic" analysis The A. thaliana genome sequence, low-cost Sanger sequencing, and ease of transformation facilitated genome-wide mutagenesis, yielding collections of sequence-indexed transposon mutant and (especially) T-DNA mutant lines. The ease and speed of ordering mutant seed from stock centers dramatically accelerated "reverse genetic" study of many gene families; the Arabidopsis Biological Resource Center and the Nottingham Arabidopsis Stock Centre were important in this regard, and information on stock availability was integrated into The Arabidopsis Information Resource database. Syngenta developed and publicly shared a significant T-DNA mutant population, the Syngenta Arabidopsis Insertion Library (SAIL) collection. Industry investment in A. thaliana research suffered a setback with the closure of Syngenta's Torrey Mesa Research Institute (TMRI), but remained robust. Mendel Biotechnology overexpressed the vast majority of A. thaliana transcription factors to generate leads for genetic engineering. Cereon Genomics, a subsidiary of Monsanto, sequenced the Landsberg erecta accession (at lower coverage than the Col-0 project) and shared the assembly, along with other sequence marker data. RNA silencing A. thaliana quickly became an important model for the study of plant small RNAs.
The argonaute1 mutant, named for its resemblance to Argonauta octopuses, was the namesake for the Argonaute protein family central to silencing. Forward genetic screens focused on vegetative phase change uncovered many genes controlling small RNA biogenesis. Multiple groups identified mutations in the DICER-LIKE1 gene (encoding the main DICER protein controlling microRNA biogenesis in plants) that cause strong developmental defects. A. thaliana became an important model for RNA-directed DNA methylation (transcriptional silencing), partly because many A. thaliana methylation mutants are viable, which is not the case for several model animals (in which such mutations cause lethality). Growing popularity of other model plants As the NSF 2010 project neared completion, there was a perceived decrease in funding agency interest in A. thaliana, evidenced by the cessation of USDA funding for A. thaliana research and the end of NSF funding for the TAIR database. This trend coincided with the progress of the (US NSF-supported) National Plant Genome Initiative, which began in 1998 and put an increased emphasis on crops. Draft genome sequences for rice were published in 2002 and were followed by publications for sorghum and maize in 2009. A draft genome of the model tree Populus trichocarpa was published in 2006. The draft genome of Brachypodium distachyon, a short-statured model grass (Poaceae), was published in 2010. The Joint Genome Institute of the United States Department of Energy identified poplar, sorghum, B. distachyon, model C4 grass Setaria viridis (foxtail millet), model moss Physcomitrella patens, model alga Chlamydomonas reinhardtii, and soybean as its "flagship" species for plant genomics geared towards bioenergy applications. Awards Well-established investigators including Ronald W. Davis, Gerald Fink, and Frederick M. Ausubel adopted A. thaliana as a model in the 1980s, attracting interest. Elliot Meyerowitz and Chris R. 
Somerville were awarded the Balzan Prize in 2006 for their work developing A. thaliana as a model. Thirteen prominent American A. thaliana geneticists were selected as investigators of the prestigious Howard Hughes Medical Institute and Gordon and Betty Moore Foundation in 2011: Philip Benfey, Dominique Bergmann, Simon Chan, Xuemei Chen, Jeff Dangl, Xinnian Dong, Joseph R. Ecker, Mark Estelle, Sheng Yang He, Robert A. Martienssen, Elliot Meyerowitz, Craig Pikaard, and Keiko Torii. (Also selected were wheat geneticist Jorge Dubcovsky and photosynthesis researcher Krishna Niyogi, who has extensively used A. thaliana along with the alga Chlamydomonas reinhardtii.) Prior to this, a handful of A. thaliana geneticists had become HHMI investigators: Joanne Chory (1997, also awarded a 2018 Breakthrough Prize in Life Sciences), Daphne Preuss (2000-2006), and Steve Jacobsen (2005). Caroline Dean was awarded many honors including the 2020 Wolf Prize in Agriculture for "pioneering discoveries in flowering time control and epigenetic basis of vernalization" made with A. thaliana. Impact of second- and third-generation sequencing technology A. thaliana continues to be the subject of intense study using new technologies such as high-throughput sequencing. Direct sequencing of cDNA ("RNA-Seq") largely replaced microarray analysis of gene expression, and several studies sequenced cDNA from single cells (scRNA-seq), particularly from root tissue. Mapping of mutations from forward screens is increasingly done with direct genome sequencing, combined in some cases with bulked segregant analysis or backcrossing. A. thaliana is a premier model for studies of the plant microbiome and natural genetic variation, including genome-wide association studies. Short RNA-guided DNA editing with CRISPR tools has been applied to A. thaliana since 2013. 
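The bulked segregant analysis mentioned above can be illustrated with a toy computation. All names and read counts below are invented for the example; real analyses operate on genome-wide SNP calls rather than four hand-picked markers:

```python
# A minimal, hypothetical sketch of bulked segregant analysis (BSA):
# given SNP allele counts from pooled sequencing of a mutant bulk and a
# wild-type bulk, the causal region is where the mutant bulk approaches
# fixation for the mutagenized parent's allele.

def snp_index(alt_reads, total_reads):
    """Fraction of reads carrying the mutant parent's allele at one SNP."""
    return alt_reads / total_reads

def delta_snp(mutant_bulk, wildtype_bulk):
    """Per-SNP difference in allele frequency between the two bulks."""
    return [snp_index(*m) - snp_index(*w)
            for m, w in zip(mutant_bulk, wildtype_bulk)]

# (alt_reads, total_reads) per SNP along a chromosome; in this made-up
# data set, SNPs 1-2 are linked to the causal mutation.
mutant_bulk   = [(12, 24), (23, 24), (30, 30), (11, 22)]
wildtype_bulk = [(13, 26), (8, 24), (10, 30), (10, 20)]

deltas = delta_snp(mutant_bulk, wildtype_bulk)
peak = max(range(len(deltas)), key=lambda i: deltas[i])
print(f"candidate region near SNP {peak}, delta-SNP = {deltas[peak]:.2f}")
```

Unlinked SNPs segregate at ~50% in both bulks (delta-SNP near zero), while SNPs linked to the causal lesion show a large positive delta-SNP in the mutant bulk.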
External links Electronic Arabidopsis Information Service (AIS) archive Multinational Arabidopsis Steering Committee reports (1990 onward) and meeting minutes The Arabidopsis book online 1001 Genomes Project site References Arabidopsis thaliana Molecular genetics Plant genetics Arabidopsis thaliana
History of research on Arabidopsis thaliana
https://en.wikipedia.org/wiki/Amanita%20phalloides
Amanita phalloides (), commonly known as the death cap, is a deadly poisonous basidiomycete fungus and mushroom, one of many in the genus Amanita. Native to Europe but introduced to other parts of the world since the late twentieth century, A. phalloides forms ectomycorrhizas with various broadleaved trees. In some cases, the death cap has been introduced to new regions with the cultivation of non-native species of oak, chestnut, and pine. The large fruiting bodies (mushrooms) appear in summer and autumn; the caps are generally greenish in colour with a white stipe and gills. The cap colour is variable, including white forms, and is thus not a reliable identifier. These toxic mushrooms resemble several edible species (most notably Caesar's mushroom and the straw mushroom) commonly consumed by humans, increasing the risk of accidental poisoning. Amatoxins, the class of toxins found in these mushrooms, are thermostable: they resist changes due to heat, so their toxic effects are not reduced by cooking. Amanita phalloides is the most poisonous of all known mushrooms. It is estimated that as little as half a mushroom contains enough toxin to kill an adult human. It is also the deadliest mushroom worldwide, responsible for 90% of mushroom-related fatalities every year. It has been involved in the majority of human deaths from mushroom poisoning, possibly including Roman Emperor Claudius in AD 54 and Holy Roman Emperor Charles VI in 1740. It has also been the subject of much research and many of its biologically active agents have been isolated. The principal toxic constituent is α-amanitin, which causes liver and kidney failure. Taxonomy The death cap is named in Latin as such in the correspondence between the English physician Thomas Browne and Christopher Merrett. 
It was also described by French botanist Sébastien Vaillant in 1727, who gave a succinct phrase name "Fungus phalloides, annulatus, sordide virescens, et patulus"—a recognizable name for the fungus today. Though the scientific name phalloides means "phallus-shaped", it is unclear whether it is named for its resemblance to a literal phallus or the stinkhorn mushrooms Phallus. In 1821, Elias Magnus Fries described it as Agaricus phalloides, but included all white amanitas within its description. Finally, in 1833, Johann Heinrich Friedrich Link settled on the name Amanita phalloides, after Persoon had named it Amanita viridis 30 years earlier. Although Louis Secretan's use of the name A. phalloides predates Link's, it has been rejected for nomenclatural purposes because Secretan's works did not use binomial nomenclature consistently; some taxonomists have, however, disagreed with this opinion. Amanita phalloides is the type species of Amanita section Phalloideae, a group that contains all of the deadly poisonous Amanita species thus far identified. Most notable of these are the species known as destroying angels, namely A. virosa, A. bisporigera and A. ocreata, as well as the fool's mushroom (A. verna). The term "destroying angel" has been applied to A. phalloides at times, but "death cap" is by far the most common vernacular name used in English. Other common names also listed include "stinking amanita" and "deadly amanita". A rarely appearing, all-white form was initially described as A. phalloides f. alba by Max Britzelmayr, though its status has been unclear. It is often found growing amid normally colored death caps. In 2004 it was described as a distinct variety that includes what was termed A. verna var. tarda. The true A. verna fruits in spring and turns yellow with KOH solution, whereas A. phalloides never does. 
Description The death cap has a large and imposing epigeous (aboveground) fruiting body (basidiocarp), usually with a pileus (cap) from across, initially rounded and hemispherical, but flattening with age. The color of the cap can be pale-green, yellowish-green, olive-green, bronze, or (in one form) white; it is often paler toward the margins, which can have darker streaks; it is also often paler after rain. The cap surface is sticky when wet and easily peeled—a troublesome feature, as that is allegedly a feature of edible fungi. The remains of the partial veil are seen as a skirtlike, floppy annulus usually about below the cap. The crowded white lamellae (gills) are free. The stipe is white with a scattering of grayish-olive scales and is long and thick, with a swollen, ragged, sac-like white volva (base). As the volva, which may be hidden by leaf litter, is a distinctive and diagnostic feature, it is important to remove some debris to check for it. Spores: 7-12 x 6-9 μm. Smooth, ellipsoid, amyloid. The smell has been described as initially faint and honey-sweet, but strengthening over time to become overpowering, sickly-sweet and objectionable. Young specimens first emerge from the ground resembling a white egg covered by a universal veil, which then breaks, leaving the volva as a remnant. The spore print is white, a common feature of Amanita. The transparent spores are globular to egg-shaped, measure 8–10 μm (0.3–0.4 mil) long, and stain blue with iodine. The gills, in contrast, stain pallid lilac or pink with concentrated sulfuric acid. Biochemistry The species is now known to contain two main groups of toxins, both multicyclic (ring-shaped) peptides, spread throughout the mushroom tissue: the amatoxins and the phallotoxins. Another toxin, phallolysin, has shown some hemolytic (red blood cell–destroying) activity in vitro. An unrelated compound, antamanide, has also been isolated. Amatoxins consist of at least eight compounds with a similar structure, that of eight amino-acid rings; they were isolated in 1941 by Heinrich O. 
Wieland and Rudolf Hallermayer of the University of Munich. Of the amatoxins, α-amanitin is the chief component and, along with β-amanitin, is likely responsible for the toxic effects. Their major toxic mechanism is the inhibition of RNA polymerase II, a vital enzyme in the synthesis of messenger RNA (mRNA), microRNA, and small nuclear RNA (snRNA). Without mRNA, essential protein synthesis and hence cell metabolism grind to a halt and the cell dies. The liver is the principal organ affected, as it is the first organ encountered after absorption in the gastrointestinal tract, though other organs, especially the kidneys, are susceptible. The RNA polymerase of Amanita phalloides is insensitive to the effects of amatoxins, so the mushroom does not poison itself. The phallotoxins consist of at least seven compounds, all of which have seven similar peptide rings. Phalloidin was isolated in 1937 by Feodor Lynen, Heinrich Wieland's student and son-in-law, and Ulrich Wieland of the University of Munich. Though phallotoxins are highly toxic to liver cells, they have since been found to add little to the death cap's toxicity, as they are not absorbed through the gut. Furthermore, phalloidin is also found in the edible (and sought-after) blusher (A. rubescens). Another group of minor active peptides are the virotoxins, which consist of six similar monocyclic heptapeptides. Like the phallotoxins, they do not induce any acute toxicity after ingestion in humans. The genome of the death cap has been sequenced. Similarity to edible species A. phalloides is similar to the edible paddy straw mushroom (Volvariella volvacea) and A. princeps, commonly known as "white Caesar". Some may mistake juvenile death caps for edible puffballs or mature specimens for other edible Amanita species, such as A. lanei, so some authorities recommend avoiding the collecting of Amanita species for the table altogether. The white form of A. 
phalloides may be mistaken for edible species of Agaricus, especially the young fruitbodies whose unexpanded caps conceal the telltale white gills; all mature species of Agaricus have dark-colored gills. In Europe, other similarly green-capped species collected by mushroom hunters include various green-hued brittlegills of the genus Russula and the formerly popular Tricholoma equestre, now regarded as hazardous owing to a series of restaurant poisonings in France. Brittlegills, such as Russula heterophylla, R. aeruginea, and R. virescens, can be distinguished by their brittle flesh and the lack of both volva and ring. Other similar species include A. subjunquillea in eastern Asia and A. arocheae, which ranges from Andean Colombia north at least as far as central Mexico, both of which are also poisonous. Distribution and habitat The death cap is native to Europe, where it is widespread. It is found from the southern coastal regions of Scandinavia in the north, to Ireland in the west, east to Poland and western Russia, and south throughout the Balkans, in Greece, Italy, Spain, and Portugal in the Mediterranean basin, and in Morocco and Algeria in north Africa. In west Asia, it has been reported from forests of northern Iran. There are records from further east in Asia but these have yet to be confirmed as A. phalloides. By the end of the 19th century, Charles Horton Peck had reported A. phalloides in North America. In 1918, samples from the eastern United States were identified as being a distinct though similar species, A. brunnescens, by George Francis Atkinson of Cornell University. By the 1970s, it had become clear that A. phalloides does occur in the United States, apparently having been introduced from Europe alongside chestnuts, with populations on the West and East Coasts. A 2006 historical review concluded the East Coast populations were inadvertently introduced, likely on the roots of other purposely imported plants such as chestnuts. 
The origins of the West Coast populations remained unclear, due to scant historical records, but a 2009 genetic study provided strong evidence for the introduced status of the fungus on the west coast of North America. Observations of various collections of A. phalloides, from conifers rather than native forests, have led to the hypothesis that the species was introduced to North America multiple times. It is hypothesized that the various introductions led to multiple genotypes which are adapted to either oaks or conifers. A. phalloides was conveyed to new countries across the Southern Hemisphere with the importation of hardwoods and conifers in the late twentieth century. Introduced oaks appear to have been the vector to Australia and South America; populations under oaks have been recorded from Melbourne, Canberra (where two people died in January 2012, of four who were poisoned), Adelaide, and further observed by citizen scientists in Beechworth, Sydney and Albury. It has been recorded under other introduced trees in Argentina. Pine plantations are associated with the fungus in Tanzania and South Africa, and it has been found under oaks and poplars in Chile and Uruguay. A number of deaths in India have been attributed to it. Ecology It forms symbiotic ectomycorrhizal associations with several tree species. In Europe, these include hardwood and, less frequently, conifer species. It appears most commonly under oaks, but also under beeches, chestnuts, horse-chestnuts, birches, filberts, hornbeams, pines, and spruces. In other areas, A. phalloides may also be associated with these trees or with only some species and not others. In coastal California, for example, A. phalloides is associated with coast live oak, but not with the various coastal pine species, such as Monterey pine. In countries where it has been introduced, it has been restricted to those exotic trees with which it would associate in its natural range. There is, however, evidence of A. 
phalloides associating with hemlock and with genera of the Myrtaceae: Eucalyptus in Tanzania and Algeria, and Leptospermum and Kunzea in New Zealand, suggesting that the species may have invasive potential. It may have also been anthropogenically introduced to the island of Cyprus, where it has been documented to fruit within Corylus avellana plantations. Toxicity As the common name suggests, the fungus is highly toxic, and is responsible for the majority of fatal mushroom poisonings worldwide. Its biochemistry has been researched intensively for decades, and , or half a cap, of this mushroom is estimated to be enough to kill a human. On average, one person dies a year in North America from death cap ingestion. The toxins of the death cap mushrooms primarily target the liver, but other organs, such as the kidneys, are also affected. Symptoms of death cap mushroom toxicity usually occur 6 to 12 hours after ingestion. Symptoms of ingestion of the death cap mushroom may include nausea and vomiting, which are then followed by jaundice, seizures, and coma, which may lead to death. The mortality rate after ingestion of the death cap mushroom is believed to be around 10–30%. Some authorities strongly advise against putting suspected death caps in the same basket with fungi collected for the table and to avoid even touching them. Furthermore, the toxicity is not reduced by cooking, freezing, or drying. Poisoning incidents usually result from errors in identification. Recent cases highlight the issue of the similarity of A. phalloides to the edible paddy straw mushroom (Volvariella volvacea), with East- and Southeast-Asian immigrants in Australia and the West Coast of the U.S. falling victim. In an episode in Oregon, four members of a Korean family required liver transplants. Many North American incidents of death cap poisoning have occurred among Laotian and Hmong immigrants, since it is easily confused with A. 
princeps ("white Caesar"), a popular mushroom in their native countries. Of the 9 people poisoned in Australia's Canberra region between 1988 and 2011, three were from Laos and two were from China. In January 2012, four people were accidentally poisoned when death caps (reportedly misidentified as straw mushrooms, which are popular in Chinese and other Asian dishes) were served for dinner in Canberra; all the victims required hospital treatment and two of them died, with a third requiring a liver transplant. Signs and symptoms Death caps have been reported to taste pleasant. This, coupled with the delay in the appearance of symptoms—during which time internal organs are being severely, sometimes irreparably, damaged—makes them particularly dangerous. Initially, symptoms are gastrointestinal in nature and include colicky abdominal pain, with watery diarrhea, nausea, and vomiting, which may lead to dehydration if left untreated, and, in severe cases, hypotension, tachycardia, hypoglycemia, and acid–base disturbances. These first symptoms resolve two to three days after the ingestion. A more serious deterioration signifying liver involvement may then occur—jaundice, diarrhea, delirium, seizures, and coma due to fulminant liver failure and attendant hepatic encephalopathy caused by the accumulation of normally liver-removed substances in the blood. Kidney failure (either secondary to severe hepatitis or caused by direct toxic kidney damage) and coagulopathy may appear during this stage. Life-threatening complications include increased intracranial pressure, intracranial bleeding, pancreatic inflammation, acute kidney failure, and cardiac arrest. Death generally occurs six to sixteen days after the poisoning. Notably, after about 24 hours the initial symptoms may subside, and the person may feel well for up to 72 hours. 
Symptoms of liver and kidney damage start 3 to 6 days after the mushrooms are eaten, accompanied by a considerable increase in transaminases. Mushroom poisoning is more common in Europe than in North America. Up to the mid-20th century, the mortality rate was around 60–70%, but this has been greatly reduced with advances in medical care. A review of death cap poisoning throughout Europe from 1971 to 1980 found the overall mortality rate to be 22.4% (51.3% in children under ten and 16.5% in those older than ten). This was revised to around 10–15% in surveys reviewed in 1995. Treatment Consumption of the death cap is a medical emergency requiring hospitalization. The four main categories of therapy for poisoning are preliminary medical care, supportive measures, specific treatments, and liver transplantation. Preliminary care consists of gastric decontamination with either activated carbon or gastric lavage; due to the delay between ingestion and the first symptoms of poisoning, it is common for patients to arrive for treatment many hours after ingestion, potentially reducing the efficacy of these interventions. Supportive measures are directed towards treating the dehydration which results from fluid loss during the gastrointestinal phase of intoxication and correction of metabolic acidosis, hypoglycemia, electrolyte imbalances, and impaired coagulation. No definitive antidote is available, but some specific treatments have been shown to improve survivability. High-dose continuous intravenous penicillin G has been reported to be of benefit, though the exact mechanism is unknown, and trials with cephalosporins show promise. Some evidence indicates intravenous silibinin, an extract from the blessed milk thistle (Silybum marianum), may be beneficial in reducing the effects of death cap poisoning. A long-term clinical trial of intravenous silibinin began in the US in 2010. 
Silibinin prevents the uptake of amatoxins by liver cells, thereby protecting undamaged liver tissue; it also stimulates DNA-dependent RNA polymerases, leading to an increase in RNA synthesis. According to one report based on a treatment of 60 patients with silibinin, patients who started the drug within 96 hours of ingesting the mushroom and who still had intact kidney function all survived. As of February 2014 supporting research has not yet been published. SLCO1B3 has been identified as the human hepatic uptake transporter for amatoxins; moreover, substrates and inhibitors of that protein—among others rifampicin, penicillin, silibinin, antamanide, paclitaxel, ciclosporin and prednisolone—may be useful for the treatment of human amatoxin poisoning. N-acetylcysteine has shown promise in combination with other therapies. Animal studies indicate the amatoxins deplete hepatic glutathione; N-acetylcysteine serves as a glutathione precursor and may therefore prevent reduced glutathione levels and subsequent liver damage. None of the antidotes used have undergone prospective, randomized clinical trials, and only anecdotal support is available. Silibinin and N-acetylcysteine appear to be the therapies with the most potential benefit. Repeated doses of activated carbon may be helpful by absorbing any toxins returned to the gastrointestinal tract following enterohepatic circulation. Other methods of enhancing the elimination of the toxins have been trialed; techniques such as hemodialysis, hemoperfusion, plasmapheresis, and peritoneal dialysis have occasionally yielded success, but overall do not appear to improve outcome. In patients developing liver failure, a liver transplant is often the only option to prevent death. Liver transplants have become a well-established option in amatoxin poisoning. 
This is a complicated issue, however, as transplants themselves may have significant complications and mortality; patients require long-term immunosuppression to maintain the transplant. That being the case, criteria such as onset of symptoms, prothrombin time (PT), serum bilirubin, and presence of encephalopathy have been reassessed for determining at what point a transplant becomes necessary for survival. Evidence suggests that, although survival rates have improved with modern medical treatment, up to half of patients with moderate to severe poisoning who did recover suffered permanent liver damage. A follow-up study has shown most survivors recover completely without any sequelae if treated within 36 hours of mushroom ingestion. Notable victims Several historical figures may have died from A. phalloides poisoning (or other similar, toxic Amanita species). These were either accidental poisonings or assassination plots. Alleged victims of this kind of poisoning include Roman Emperor Claudius, Pope Clement VII, the Russian tsaritsa Natalia Naryshkina, and Holy Roman Emperor Charles VI. R. Gordon Wasson recounted the details of these deaths, noting the likelihood of Amanita poisoning. In the case of Clement VII, the illness that led to his death lasted five months, making the case inconsistent with amatoxin poisoning. Natalya Naryshkina is said to have consumed a large quantity of pickled mushrooms prior to her death. It is unclear whether the mushrooms themselves were poisonous or if she succumbed to food poisoning. Charles VI experienced indigestion after eating a dish of sautéed mushrooms. This led to an illness from which he died 10 days later—symptomatology consistent with amatoxin poisoning. His death led to the War of the Austrian Succession. Voltaire noted that "this mushroom dish has changed the destiny of Europe." The case of Claudius's poisoning is more complex. Claudius was known to have been very fond of eating Caesar's mushroom. 
Following his death, many sources have attributed it to his being fed a meal of death caps instead of Caesar's mushrooms. Ancient authors, such as Tacitus and Suetonius, are unanimous about poison having been added to the mushroom dish, rather than the dish having been prepared from poisonous mushrooms. Wasson speculated the poison used to kill Claudius was derived from death caps, with a fatal dose of an unknown poison (possibly a variety of nightshade) being administered later during his illness. Other historians have speculated that Claudius may have died of natural causes. In July 2023, four people in Leongatha, Australia were taken to hospital after consuming a Beef Wellington suspected to have contained A. phalloides. Three of the four guests subsequently died, and one survived, later receiving a liver transplant. The woman who cooked the meal, Erin Patterson, was charged with murder in November 2023. See also List of Amanita species List of deadly fungi , known for expertise in treating mushroom poisoning References Cited texts External links UK Telegraph Newspaper (September 2008) - One woman dead, another critically ill after eating Death Cap fungi AmericanMushrooms.com - The Death Cap Mushroom Amanita phalloides Amanita phalloides: the death cap Amanita phalloides: Invasion of the Death Cap Key to species of Amanita Section Phalloideae from North and Central America - Amanita studies website California Fungi—Amanita phalloides Death cap in Australia - ANBG website On the Trail of the Death Cap Mushroom from National Public Radio phalloides Fungi of Africa Fungi of Europe Deadly fungi Hepatotoxins Fungi described in 1821 Fungus species
Amanita phalloides
https://en.wikipedia.org/wiki/Poincar%C3%A9%20complex
In mathematics, and especially topology, a Poincaré complex (named after the mathematician Henri Poincaré) is an abstraction of the singular chain complex of a closed, orientable manifold. The singular homology and cohomology groups of a closed, orientable manifold are related by Poincaré duality. Poincaré duality is an isomorphism between homology and cohomology groups. A chain complex is called a Poincaré complex if its homology groups and cohomology groups have the abstract properties of Poincaré duality. A Poincaré space is a topological space whose singular chain complex is a Poincaré complex. These are used in surgery theory to analyze manifolds algebraically. Definition Let be a chain complex of abelian groups, and assume that the homology groups of are finitely generated. Assume that there exists a map , called a chain-diagonal, with the property that . Here the map denotes the ring homomorphism known as the augmentation map, which is defined as follows: if , then . Using the diagonal as defined above, we are able to form pairings, namely: , where denotes the cap product. A chain complex C is called geometric if a chain-homotopy exists between and , where is the transposition/flip given by . A geometric chain complex is called an algebraic Poincaré complex, of dimension n, if there exists an infinite-ordered element of the n-dimensional homology group, say , such that the maps given by are group isomorphisms for all . These isomorphisms are the isomorphisms of Poincaré duality. Example The singular chain complex of an orientable, closed n-dimensional manifold is an example of a Poincaré complex, where the duality isomorphisms are given by capping with the fundamental class . See also Poincaré space References – especially Chapter 2 External links Classifying Poincaré complexes via fundamental triples on the Manifold Atlas Algebraic topology Homology theory Duality theories
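For concreteness, the standard textbook forms of the maps discussed above can be written as follows (a conventional restatement rather than the article's own notation):

```latex
% Augmentation map on 0-chains:
\varepsilon \colon C_0 \to \mathbb{Z}, \qquad
\varepsilon\!\left(\sum_i n_i \sigma_i\right) = \sum_i n_i .

% Chain-diagonal, compatible with the augmentation up to chain homotopy:
\Delta \colon C \to C \otimes C .

% Poincaré duality: capping with a fundamental class
% [\mu] \in H_n(C) gives isomorphisms
[\mu] \cap - \;\colon\; H^{\,n-k}(C) \xrightarrow{\;\cong\;} H_k(C),
\qquad 0 \le k \le n .
```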
Poincaré complex
https://en.wikipedia.org/wiki/Optibo
Optibo is the product of a collaboration between Swedish firms aiming to solve the housing industry's problem of limited living space caused by high land prices. Optibo's main architect was inspired when he saw the Disney cartoon Mickey's Trailer on TV. The project has led to the construction of an apartment which is only 25 square meters (270 square feet) in area. The single-room living space has furniture built into the floor, and the room can be changed from a living room to a bedroom, to a dining room, and back. The kitchen area is affixed to the wall and does not change. References Building engineering
Optibo
https://en.wikipedia.org/wiki/BioLinux
BioLinux is a term used in a variety of projects involved in making access to bioinformatics software on a Linux platform easier using one or more of the following methods: Provision of complete systems Provision of bioinformatics software repositories Addition of bioinformatics packages to standard distributions Live DVD/CDs with bioinformatics software added Community building and support systems There are now various projects with similar aims, on both Linux systems and other Unices, and a selection of these is given below. There is also an overview in the Canadian Bioinformatics Helpdesk Newsletter that details some of the Linux-based projects. Package repositories Apple/Mac Many Linux packages are compatible with Mac OS X, and there are several projects which attempt to make it easy to install selected Linux packages (including bioinformatics software) on a computer running Mac OS X. BioArchLinux The BioArchLinux repository contains more than 3,770 packages for Arch Linux and Arch Linux-based distributions. Debian Debian is another very popular Linux distribution in use in many academic institutions, and some bioinformaticians have made their own software packages available for this distribution in the deb format. Red Hat Package repositories are generally specific to the distribution of Linux the bioinformatician is using. A number of Linux variants are prevalent in bioinformatics work. Fedora is a freely-distributed version of the commercial Red Hat system. Red Hat is widely used in the corporate world as they offer commercial support and training packages. Fedora Core is a community supported derivative of Red Hat and is popular amongst those who like Red Hat's system but don't require commercial support. Many users of bioinformatics applications have produced RPMs (Red Hat's package format) designed to work with Fedora, which you can potentially also install on Red Hat Enterprise Linux systems. 
Other distributions such as Mandriva and SUSE use RPMs, so these packages may also work on these distributions. Slackware Slackware is one of the less used Linux distributions. It is popular with those who have better knowledge of the Linux operating system and who prefer the command line over the various GUIs available. Packages are in the tgz or txz format. The most widely known live distribution based on Slackware is Slax and it has been used as a base for many of the bioinformatics distributions. BioSLAX Live DVDs/CDs Live DVDs or CDs are not an ideal way to provide bioinformatics computing, as they run from a CD/DVD drive. This means they are slower than a traditional hard disk installation and have limited ability to be configured. However, they can be suitable for providing ad hoc solutions where no other Linux access is available, and may even be used as the basis for a Linux installation. Standard distributions with good bioinformatics support In general, Linux distributions have a wide range of official packages available, but this does not usually include much in the way of scientific support. There are exceptions, such as those detailed below. Gentoo Linux Gentoo Linux provides over 156 bioinformatics applications (see Gentoo sci-biology herd in the main tree) in the form of ebuilds, which build the applications from source code. An additional 315 packages are in the Gentoo science overlay (for testing). Although a very flexible system with excellent community support, the requirement to install from source means that Gentoo systems are often slow to install, and require considerable maintenance. It is possible to reduce some of the compilation time by using a central server to generate binary packages. On the other hand, building from source allows packages to be fine-tuned for maximum speed on the local processor (for example, by using the SSE, AVX, and AVX2 CPU instructions). 
Binary-based distributions usually provide binaries built for only the i686 or even just the i386 instruction set. FreeBSD FreeBSD is not a Linux distribution, but a version of Unix to which Linux is very similar. Its ports are analogous to Gentoo's ebuilds. However, the project continuously builds pre-compiled binary packages for Tier-1 platforms such as x86 and ARM. Users can also choose to build and install any port from source in order to enable non-portable optimizations or other build options. The build-from-source option also allows the ports system to automate installation of software with a license that does not permit redistribution. The ports collection contains over 31,000 ports, of which over 2,200 are in scientific categories, and over 240 are biology-related. New ports and updates are listed on the FreshPorts site. pkgsrc The pkgsrc package manager, originally forked from FreeBSD ports, is maintained by the NetBSD project, but aims to support all POSIX-compatible operating systems. It is well tested on NetBSD, many Linux distributions, macOS, and SunOS derivatives. Like FreeBSD ports, pre-compiled binary packages are maintained for some platforms. Packages can be built from source on any platform, or when additional optimizations or options are desired. The pkgsrc collection contains over 19,000 packages, of which nearly 800 are in scientific categories, and over 60 are biology-related. Debian There are more than a hundred bioinformatics packages provided as part of the standard Debian installation. NEBC Bio-Linux packages can also be installed on a standard Debian system as long as the bio-linux-base package is also installed. This creates a /usr/local/bioinf directory where the other Bio-Linux packages install their software. Debian packages may also work on Ubuntu Linux or other Debian-derived installations. 
Community building and support systems Providing support and documentation should be an important part of any BioLinux project, so that scientists who are not IT specialists may quickly find answers to their specific problems. Support forums or mailing lists are also useful to disseminate knowledge within the research community. Some of these resources are linked to here. See also List of open-source bioinformatics software List of biomedical cybernetics software References Bioinformatics software Linux Computational science
BioLinux
Mathematics,Biology
1,291
2,377,117
https://en.wikipedia.org/wiki/Gustav%20de%20Vries
Gustav de Vries (22 January 1866 – 16 December 1934) was a Dutch mathematician, who is best remembered for his work on the Korteweg–de Vries equation with Diederik Korteweg. He was born on 22 January 1866 in Amsterdam, and studied at the University of Amsterdam with the distinguished physical chemist Johannes van der Waals and with Korteweg. While doing his doctoral research De Vries supported himself by teaching at the Royal Military Academy in Breda (1892-1893) and at the "cadettenschool" in Alkmaar (1893-1894). Under Korteweg's supervision De Vries completed his doctoral dissertation: Bijdrage tot de kennis der lange golven, (Contributions to the knowledge of long waves) Acad. proefschrift, Universiteit van Amsterdam, 1894, 95 pp, Loosjes, Haarlem. The following year Korteweg and De Vries published the research paper On the Change of Form of Long Waves advancing in a Rectangular Canal and on a New Type of Long Stationary Waves, Philosophical Magazine, 5th series, 39, 1895, pp. 422–443. In 1894 De Vries worked as a high school teacher at the "HBS en Handelsschool" in Haarlem, where he remained until his retirement in 1931. He died in Haarlem on 16 December 1934. The Korteweg-de Vries Institute for Mathematics is named after him. See also Cnoidal wave Korteweg–de Vries equation Further reading Bastiaan Willink, The collaboration between Korteweg and de Vries — An enquiry into personalities, History of Physics, 16 p., October 2007 (arXiv.org). External links 1866 births 1934 deaths 19th-century Dutch mathematicians Scientists from Amsterdam Academic staff of the University of Amsterdam 20th-century Dutch mathematicians Fluid dynamicists
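For reference, the long-wave equation from the 1895 Korteweg–de Vries paper, written here in a common modern normalization (the original paper used different variables and scaling), is:

```latex
% Korteweg–de Vries (KdV) equation, a common modern normalized form
\frac{\partial u}{\partial t} + 6\,u\,\frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0
```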
Gustav de Vries
Chemistry
399
158,644
https://en.wikipedia.org/wiki/Paddle
A paddle is a handheld tool with an elongated handle and a flat, widened end (the blade) used as a lever to apply force onto the bladed end. It most commonly describes a completely handheld tool used to propel a human-powered watercraft by pushing water in a direction opposite to the direction of travel (i.e. paddling). A paddle is different from an oar (which can be similar in shape and perform the same function via rowing) – an oar is attached to the watercraft via a fulcrum. The term "paddle" can also be used to describe objects of similar shapes or functions: A rotating set of paddle boards known as a paddle wheel is used to propel a steamboat or paddle steamer. In a number of racquet sports (e.g. ping-pong and paddle ball), a "paddle" or "bat" is a short, solid racket used to strike a ball. A mixing paddle is a device used to stir or mix separate ingredients within a mixture. A spanking paddle is used in corporal punishment, typically to forcefully hit someone (e.g. a juvenile) on the buttocks. Canoe and kayak paddles Materials and designs Paddles commonly used in canoes consist of a wooden, fibreglass, carbon fibre, or metal rod (the shaft) with a handle on one end and a rigid sheet (the blade) on the other end. Paddles for use in kayaks are longer, with a blade on each end; they are handled from the middle of the shaft. Kayak paddles having blades in the same plane (when viewed down the shaft) are called "un-feathered." Paddles with blades in different planes are called "feathered". Feathered paddles are measured by the degree of feather, such as 30, 45, or even 90 degrees. Many modern paddles are made of two pieces which can be snapped together in either feathered or unfeathered settings. The shaft is normally straight but in some cases a 'crank' is added with the aim of making the paddle more comfortable and reducing strain on the wrist. 
Because the kayak paddle is not supported by the boat, paddles made of lighter materials are desired; it is not uncommon for a kayak paddle to weigh two pounds or less, and very expensive paddles can be lighter still. Weight savings are more desirable at the ends of the paddle rather than in the middle. Cheaper kayak paddles have an aluminium shaft while more expensive ones use a lighter fibreglass or carbon fibre shaft. Some paddles have a smaller diameter shaft for people with smaller hands. Paddle length varies, with a longer paddle being better suited for stronger people, taller people, and people using the paddle in a wider kayak. Some paddle makers have an online paddle size calculator. Blades vary in size and shape. A blade with a larger surface area may be desirable for a strong person with good shoulder joints, but tiring for a weaker person or a person with less than perfect shoulder joints. Because normal paddling involves alternately dipping and raising the paddle blades, the colour of the blades may affect the visibility of the kayaker to powerboat operators under limited visibility conditions. For this reason white or yellow blades may offer a safety advantage over black or blue blades. Of course, kayakers should wear a headlamp or have other lighting on their kayak under conditions of limited lighting. However, if a powerboat operator must look straight into a sun low in the sky to see a kayaker, the motion of brightly coloured paddle blades may be of more value than lighting on the kayak. Highly reflective water-resistant tape (e.g. SOLAS tape) may be affixed to the paddle blades and boat to enhance visibility. Use The paddle is held with two hands, some distance apart from each other. For normal use, it is drawn through the water from front (bow) to back (stern) to drive the boat forwards. The two blades of a kayak paddle are dipped alternately on either side of the kayak. 
A paddle is distinguished from an oar in that the paddle is held in the user's hands and completely supported by the paddler, whereas an oar is primarily supported by the boat, through the use of oarlocks. Gloves may be worn to prevent blistering during long periods of paddling. Other types On mechanical paddle steamers, propulsion is provided not by a mass of paddles or oars but by one or more rotating paddle wheels (roughly the inverse of a water mill). Racing paddles also have special designs. They are generally less flat and are curved to catch more water, which enables racing paddlers to maximize the efficiency of their stroke. Wing-bladed paddles are very popular in kayak racing. A wing paddle looks like a spoon and acts like a wing or sail, generating lift on the convex side, which pulls the paddle forward-outward at the expense of overcoming drag. This gives additional forward thrust as compared with a flat paddle, whose forward thrust comes mainly from drag. Bent-shaft paddles, popular with tripping and marathon canoers, have a blade that is angled from the shaft, usually 12 to 15 degrees. See also Canoe paddle strokes Mixing paddle Oar Spanking paddle References External links Paddling History Canoeing and kayaking equipment Marine propulsion Fishing equipment
Paddle
Engineering
1,100
65,953,610
https://en.wikipedia.org/wiki/Western%20Polynesian%20tropical%20moist%20forests
The Western Polynesian tropical moist forests is a tropical and subtropical moist broadleaf forests ecoregion in Polynesia. It includes Tuvalu, the Phoenix Islands in Kiribati, Tokelau, and Howland and Baker islands, which are possessions of the United States. Geography The islands are mostly atolls, low islands of coralline sand ringing a central lagoon, or raised platforms of coralline limestone. The ecoregion includes three archipelagos along with some scattered islands. Tuvalu, formerly known as the Ellice Islands, includes nine atolls between 6º to 9º S latitude and 176º to 180º E longitude. The Phoenix Islands include eight atolls between 2º to 5º S latitude and 171º to 175º W longitude. They are part of Kiribati, and mostly uninhabited. Tokelau includes three inhabited atolls, Atafu, Nukunonu, and Fakaofo, and uninhabited Swains Island, which is disputed with American Samoa. Tokelau lies between 8º to 12º S latitude and 170º to 173º W longitude. Howland and Baker islands lie north of the Phoenix Islands. Climate The climate of the islands is tropical, with little seasonal variation in temperature. Tuvalu and Tokelau are in the trade wind belt, and average annual rainfall ranges from 1,500 to 3,500 mm, falling relatively consistently from month to month and year to year. Most of the Phoenix Islands and Howland and Baker islands receive less than 1,000 mm of rain annually, with a March through June dry season. Rainfall on these islands is also more variable from year to year, with droughts during El Niño cycles. Flora Native vegetation on the wetter islands is principally tropical moist forest, with shrub and herbaceous plant communities in rocky areas and shoreline areas exposed to salt spray. Characteristic canopy trees include Pisonia grandis up to 25 meters high, Cordia subcordata, and Tournefortia argentea in single-species or mixed stands, with Calophyllum inophyllum, Pandanus tectorius, Hernandia nymphaeifolia, Ficus tinctoria, and Guettarda speciosa. 
Understory plants include the shrubs Suriana maritima and Pemphis acidula, the fern Asplenium nidus, and the vine Ipomoea tuba. Forests are interspersed with areas of Scaevola taccada and Morinda citrifolia scrub. The drier islands are covered with low plants, including sparse grassland dominated by Lepturus repens, the creepers Portulaca spp., Sida fallax, and Sesuvium portulacastrum, the grass Eragrostis whitneyi, and occasionally the shrubs Cordia subcordata, Abutilon asiaticum, Suriana maritima, Pemphis acidula, and Tribulus cistoides. The flora is mostly of widespread coastal Indo-Pacific species, with relatively few endemic species. Fauna The native vertebrates are mostly seabirds, which roost in large numbers on many of the islands. The only forest birds are the Pacific pigeon (Ducula pacifica), a year-round resident, and the migratory long-tailed cuckoo (Urodynamis taitensis), which winters in the tropical Pacific and breeds in New Zealand during the spring and summer. There are no native non-marine mammals or amphibians. The Polynesian rat (Rattus exulans) and house cats have been introduced to several islands, and prey heavily on native birds. Banded rails (Hypotaenidia philippensis) from Fiji have recently colonized Niulakita in Tuvalu. Protected areas 64.3% of the ecoregion is in protected areas. Protected areas include the Phoenix Islands Protected Area. References External links Western Polynesian tropical moist forests (DOPA) Western Polynesian tropical moist forests (Encyclopedia of Earth) Ecoregions of Kiribati Ecoregions of the United States Biota of Tuvalu Environment of Tuvalu Geography of Tuvalu Oceanian ecoregions Tropical and subtropical moist broadleaf forests Ecoregions of American Samoa
Western Polynesian tropical moist forests
Biology
857
22,480,924
https://en.wikipedia.org/wiki/Tipson%E2%80%93Cohen%20reaction
The Tipson–Cohen reaction is a name reaction discovered by Stuart Tipson and Alex Cohen at the National Bureau of Standards in Washington D.C. The Tipson–Cohen reaction occurs when two neighboring secondary sulfonyloxy groups in a sugar molecule are treated with zinc dust (Zn) and sodium iodide (NaI) in a refluxing solvent such as N,N-dimethylformamide (DMF) to give an unsaturated carbohydrate. Background Unsaturated carbohydrates are desired as they are versatile building blocks that can be used in a variety of reactions. For example, they can be used as intermediates in the synthesis of natural products, as dienophiles in the Diels–Alder reaction, or as precursors in the synthesis of oligosaccharides. The Tipson–Cohen reaction goes through a syn or anti elimination mechanism to produce an alkene in moderate to high yields. The reaction depends on the neighboring substituents. A mechanism for glucopyranosides and mannopyranosides is shown below. Scheme 1: Syn elimination occurs with the glucopyranosides. Galactopyranosides follow a similar syn mechanism, whereas anti elimination occurs with mannopyranosides. Note that R could be a methanesulfonyl CH3O2S (Ms) or a toluenesulfonyl CH3C6H4O2S (Ts) group. Reaction mechanism Scheme 3: The scheme illustrates the first displacement, the rate-determining and slowest step, where the starting material is converted to the iodo-intermediate. The intermediate is not detectable as it is rapidly converted to the unsaturated sugar. Experiments with azide instead of iodide confirmed that attack occurs at C-3, as nitrogen intermediates were isolated. The order of reactivity from most reactive to least reactive is: β-glucopyranosides > β-mannopyranosides > α-glucopyranosides > α-mannopyranosides. 
The reaction of β-mannopyranosides gives low yields and requires longer reaction times than that of β-glucopyranosides, due to the presence of a neighboring axial substituent (sulfonyloxy) relative to the C-3 sulfonyloxy group in the starting material. The axial substituent increases the steric interactions in the transition state, causing unfavorable eclipsing of the two sulfonyloxy groups. α-Glucopyranosides possess a β-trans-axial substituent relative to the C-3 sulfonyloxy group (the anomeric OCH3 group) in the starting material. The β-trans-axial substituent influences the transition state by also causing an unfavorable steric interaction between the two groups. In the case of α-mannopyranosides, both a neighboring axial substituent (the 2-sulfonyloxy group) and a β-trans-axial substituent (the anomeric OCH3 group) are present, therefore significantly increasing the reaction time and decreasing the yield. Reaction conditions Table 1: Reaction times and yields vary with the substrate. The β-glucopyranoside was found to be the best substrate for the Tipson–Cohen reaction, as its reaction time and yield were much superior to those of any other substrate proposed in the study. aSubstrates possess benzylidene protecting groups at C-4 and C-6, OMe groups at the anomeric position and OTs groups at C-2 and C-3. Reaction temperature 95–100 ˚C Reaction scope The reaction has been attempted under microwave irradiation, improving yields with the α-glucopyranoside to 88% and reducing the reaction time significantly to 14 minutes. The original paper by Tipson and Cohen also used acyclic sugars to illustrate the utility of the reaction; thus the reaction is not limited to cyclic carbohydrate derivatives. Sulfonyloxy groups such as methanesulfonyl and toluenesulfonyl were both used; however, it was found that substrates with toluenesulfonyl groups gave higher yields and shorter reaction times. References Carbohydrate chemistry Organic reactions Name reactions
Tipson–Cohen reaction
Chemistry
916
1,434,840
https://en.wikipedia.org/wiki/ADO.NET
ADO.NET is a data access technology from the Microsoft .NET Framework that provides communication between relational and non-relational systems through a common set of components. ADO.NET is a set of computer software components that programmers can use to access data and data services from a database. It is a part of the base class library that is included with the Microsoft .NET Framework. It is commonly used by programmers to access and modify data stored in relational database systems, though it can also access data in non-relational data sources. ADO.NET is sometimes considered an evolution of ActiveX Data Objects (ADO) technology, but was changed so extensively that it can be considered an entirely new product. Architecture ADO.NET is conceptually divided into consumers and data providers. The consumers are the applications that need access to the data, and the providers are the software components that implement the interface and thereby provide the data to the consumer. Functionality exists in the Visual Studio IDE to create specialized subclasses of the DataSet classes for a particular database schema, allowing convenient access to each field in the schema through strongly typed properties. This helps catch more programming errors at compile-time and enhances the IDE's Intellisense feature. A provider is a software component that interacts with a data source. ADO.NET data providers are analogous to ODBC drivers, JDBC drivers, and OLE DB providers. ADO.NET providers can be created to access such simple data stores as a text file and spreadsheet, through to such complex databases as Oracle Database, Microsoft SQL Server, MySQL, PostgreSQL, SQLite, IBM Db2, Sybase ASE, and many others. They can also provide access to hierarchical data stores such as email systems. Because different data store technologies can have different capabilities, not every ADO.NET provider can implement every possible interface available in the ADO.NET standard. 
Microsoft describes the availability of an interface as "provider-specific," as it may not be applicable depending on the data store technology involved. Providers may augment the capabilities of a data store; these capabilities are known as "services" in Microsoft parlance. Object-relational mapping Entity Framework Entity Framework (EF) is an open source object-relational mapping (ORM) framework for ADO.NET, part of .NET Framework. It is a set of technologies in ADO.NET that supports the development of data-oriented software applications. Architects and developers of data-oriented applications have typically struggled with the need to achieve two very different objectives. The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer addresses, without having to concern themselves with the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code than in traditional applications. LINQ to SQL LINQ to SQL (formerly called DLINQ) allows LINQ to be used to query Microsoft SQL Server databases, including SQL Server Compact databases. Since SQL Server data may reside on a remote server, and because SQL Server has its own query engine, it does not use the query engine of LINQ. Instead, the LINQ query is converted to a SQL query that is then sent to SQL Server for processing. Since SQL Server stores the data as relational data and LINQ works with data encapsulated in objects, the two representations must be mapped to one another. For this reason, LINQ to SQL also defines a mapping framework. The mapping is done by defining classes that correspond to the tables in the database, and containing all or a certain subset of the columns in the table as data members. 
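Entity Framework and LINQ to SQL are .NET-specific, but the underlying object-relational mapping idea they implement (classes standing for tables, with object-level queries translated into SQL sent to the database engine) can be sketched language-neutrally. The following is a minimal illustration in Python using the standard-library sqlite3 module; all names (Customer, select, the table schema) are illustrative, not ADO.NET APIs:

```python
import sqlite3

# Language-neutral sketch of the ORM idea behind Entity Framework and
# LINQ to SQL: a class corresponds to a table, attributes to columns,
# and an object-level query is translated into SQL sent to the engine.
# All names (Customer, select, ...) are illustrative, not ADO.NET APIs.

class Customer:
    table = "customers"
    columns = ("id", "name", "city")

    def __init__(self, id, name, city):
        self.id, self.name, self.city = id, name, city

def select(db, cls, where=None, params=()):
    """Translate an object-level query into SQL and map rows to objects."""
    sql = f"SELECT {', '.join(cls.columns)} FROM {cls.table}"
    if where:
        sql += f" WHERE {where}"     # the query is converted to SQL text
    return [cls(*row) for row in db.execute(sql, params)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?, ?)",
               [(1, "Ada", "London"), (2, "Bob", "Paris")])

# The caller works with Customer objects; the mapping layer handles SQL.
for c in select(db, Customer, "city = ?", ("Paris",)):
    print(c.name)                    # prints "Bob"
```

As in LINQ to SQL, the query is not executed over in-memory objects: it is rewritten as SQL text and handed to the database's own query engine, and the resulting rows are mapped back onto objects.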
References External links ADO.NET for the ADO Programmer ADO.NET Connection Strings Data management .NET Framework terminology Microsoft application programming interfaces Microsoft free software SQL data access ADO.NET Data Access technologies Software using the MIT license Windows-only free software
ADO.NET
Technology
830
24,202,127
https://en.wikipedia.org/wiki/C17H24O3
{{DISPLAYTITLE:C17H24O3}} The molecular formula C17H24O3 may refer to: Cyclandelate, a vasodilator Onchidal, a naturally occurring neurotoxin Shogaols, pungent constituents of ginger Molecular formulas
C17H24O3
Physics,Chemistry
63
52,297,221
https://en.wikipedia.org/wiki/Volume%20and%20displacement%20indicators%20for%20an%20architectural%20structure
The volume (W) and displacement (Δ) indicators were discovered by Philippe Samyn in 1997 to help the search for the optimal geometry of architectural structures. Objective The study is limited to the search for the geometry giving the structure of minimum volume. The cost of a structure depends on the nature and the quantity of the materials used as well as the tools and human resources required for its production. Although technological progress has reduced the cost of tools and the amount of human resources required, and despite the fact that computerised calculation tools can now be used to determine the dimensions of a structure so that the load it bears at every point is within the admissible limits allowed by its constituent materials, it is also necessary for its geometry to be optimal. It is far from simple to find this optimum because the range of choice available is so vast. Furthermore, the resistance of the structure is not the only criterion to take into account. In many cases, it is also important to ensure that it will not undergo excessive deformation under static loads or that it does not vibrate to inconvenient or dangerous levels when subjected to dynamic loads. Volume and displacement indicators, W and Δ, discovered by Philippe Samyn in August 1997, are useful tools in this regard. This approach does not take into account phenomena of elastic instability. It can indeed be shown that it is always possible to design a structure so that this effect becomes negligible. 
The indicators The objective is to ascertain the optimal morphology for a two-dimensional structure of constant thickness, which: fits in a rectangle of pre-determined dimensions, longitudinal L and horizontal H, expressed in metres (m); is made of one (or several) material(s) with a modulus of elasticity E, expressed in pascals (Pa), and bearing a load at all points within its allowable stress(es) σ, expressed in pascals (Pa); is resistant to the maximum loads to which it is subjected, in the form of a "resultant" F, expressed in newtons (N). Each form chosen corresponds to a volume of material V (in m3) and a maximum deformation δ (in m). Their calculation depends on the factors L, H, E, σ and F. These calculations are long and tedious, and they cloud the objective of finding the optimal form. It is, nevertheless, possible to overcome this problem by setting each factor to unity while all other characteristics remain the same. Length L is therefore set to 1 m, H to H/L, E and σ to 1 Pa, and F to 1 N. This "reduced" structure has a volume of material W = σV/FL (the volume indicator) and a maximum deformation Δ = Eδ/σL (the displacement indicator). Their main characteristic is that they are numbers without physical dimensions (dimensionless) and their value, for every morphology considered, depends only on the ratio L/H, i.e. the geometric slenderness of the form. This method can easily be applied to three-dimensional structures as illustrated in the following examples. The theory related to the indicators has been taught since 2000, among other institutions, at the Department of Civil Engineering and Architecture of the Vrije Universiteit Brussel (VUB; section "material mechanics and constructions"), leading to research and publications under the direction of Prof. Dr. Ir. Philippe Samyn (from 2000 to 2006), Prof. Dr. Ir. Willy Patrick De Wilde (from 2000 to 2011) and now Prof. Dr. Ir. Lincy Pyl. 
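As a check on these definitions, the indicators can be evaluated numerically for the simplest loading case, a straight tie in pure tension with its section fully stressed at σ. The following Python sketch uses arbitrary illustrative values for F, L, σ and E:

```python
# Volume indicator W = sigma*V/(F*L) and displacement indicator
# Delta = E*delta/(sigma*L) for a straight tie in pure tension:
# the section is Omega = F/sigma, so V = F*L/sigma and delta = sigma*L/E,
# giving W = Delta = 1 whatever the material or the dimensions.

def indicators(F, L, sigma, E):
    Omega = F / sigma                 # section fully stressed at sigma (m^2)
    V = Omega * L                     # volume of material (m^3)
    delta = sigma * L / E             # elongation at working stress (m)
    W = sigma * V / (F * L)           # volume indicator (dimensionless)
    Delta = E * delta / (sigma * L)   # displacement indicator (dimensionless)
    return W, Delta

# Illustrative steel tie: F = 100 kN, L = 5 m, sigma = 235 MPa, E = 210 GPa
print(indicators(100e3, 5.0, 235e6, 210e9))
```

Both indicators come out equal to 1 for this case, independently of the material and the dimensions, which is exactly the dimensionless behaviour the text describes.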
The reference book, following the reference thesis, reports the developments of the theory at Samyn and Partners as well as at the VUB, up to 2004. The theory is open to everyone who wants to contribute, W and Δ being calculable for any resistant structure as defined in paragraph 1 above. Progress in materials science, robotics and three-dimensional printing is leading to the creation of new structural forms lighter than the lightest known today. The geometry of minimal surfaces of constant thickness in a homogeneous material is, for example, substantially modified when the thickness and/or the local allowable stress vary. Macrostructure, structural element, microstructure and material The macrostructures considered here may be composed of "structural elements" whose material presents a "microstructure". Whether seeking to limit the stress or the deformation, macrostructure, structural element and microstructure each have a weight Vρ, where ρ is the specific weight of the material, in N/m3, as a function of the solicitations {F0} (for "force" in general) applied to them, of their size {L0} (for length or "size" in general), of their shape {Ge} (for geometry or "shape" in general), and of their constituting material {Ma} (for "material" in general). It can also be expressed as shape and material ({Ge}{Ma}) defining the weight (Vρ) for the structure of a given size under a given force ({F0}{L0}). In material mechanics, for structural elements under a specific loading case, the factor {Ge} corresponds to the "form factor" for elements of continuous section made of a solid material (without voids). The constituting material might, however, present a microstructure with voids. This cellular structure enhances the form factor, whatever the loading case. The factor {Ma} characterizes a material, whose efficiency may be compared with that of another for a given loading case, independently of the form factor {Ge}. 
The indicators W = σV/FL and Δ = δE/σL just defined characterize the macrostructures, while the same notations and symbols in small letters, w = σv/fl and Δ = δE/σl, refer to the structural element. Figure 1 gives the values of W and Δ for the structural element subject to traction, compression, bending and shear. The left column relates to the limitation of stress and the right column to the limitation of deformation. It shows the direct relation of W to {Ge}{Ma}: since W = σV/FL, it follows that V = WFL/σ, and thus Vρ = (Wρ/σ)FL for given dimensions and loading case. Then, as W and Δ depend only on L/H, the quantity Wρ/σ, which for a given loading case is the specific weight of a macrostructure per unit of force and length, depends only on the geometry through L/H and on the material through ρ/σ. Wρ/σ thus includes the material factor {Ma} (ρ/σ and ρ/E for tension and compression without buckling, ρ/E^(1/2) for compression limited by buckling, ρ/σ^(2/3) and ρ/E^(1/2) for pure bending, ρ/σ and ρ/G for pure shear) and the form factor {Ge}. All other factors being equal, a cluster of tubes of diameter H and wall thickness e, compared to a solid bar of equal volume in a material characterized by ρ, σ, E and G, presents an apparent density ρa = 4k(1 − k)ρ with k = e/H, an apparent allowable stress σa = 4k(1 − k)σ, an apparent Young's modulus Ea = 4k(1 − k)E and an apparent shear modulus Ga = 4k(1 − k)G. Thus ρa/σa = ρ/σ, while ρa/Ea^(1/2) = [4k(1 − k)]^(1/2)ρ/E^(1/2) is smaller than ρ/E^(1/2). This explains the better performance of lighter materials for structural elements subject to compression or bending. This indicator allows the efficiency of macrostructures, including geometry and material, to be compared. It echoes the work of M.F. Ashby: "Materials Selection in Mechanical Design" (1992). He analyses {Ge} and {Ma} separately as, for his studies, {Ma} relates to a large number of the material's physical properties. Different and complementary, it can also be placed alongside work carried out since 1969 by the Institut für Leichte Flächentragwerke in Stuttgart under the direction of Frei Otto and now Werner Sobek, which refers to indices named Tra and Bic. 
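The gain from such a hollow microstructure can be sketched numerically. In the Python sketch below, ρa and σa follow the 4k(1 − k) solid-area fraction, and the apparent Young's modulus is assumed to scale with that same fraction; the steel-like numerical values are illustrative only:

```python
# Apparent properties of a cluster of thin-walled tubes (outer size H,
# wall thickness e, k = e/H) versus a solid bar of the same material.
# The solid-area fraction of a tube section is 1 - (1 - 2k)^2 = 4k(1 - k).
# rho_a and sigma_a follow this fraction; E_a is assumed to do the same.

def apparent(k, rho=7850.0, sigma=235e6, E=210e9):
    """Return (rho_a, sigma_a, E_a) for wall-thickness ratio k = e/H."""
    f = 4 * k * (1 - k)
    return f * rho, f * sigma, f * E

rho, sigma, E = 7850.0, 235e6, 210e9    # illustrative steel-like values
rho_a, sigma_a, E_a = apparent(0.05)    # 5% wall-thickness ratio, f = 0.19

# Tension factor rho/sigma is unchanged, but the buckling-limited factor
# rho/E**0.5 improves by (4k(1-k))**0.5 < 1: hollow sections beat solid ones.
print(rho_a / sigma_a)                  # same value as rho/sigma
print((rho_a / E_a**0.5) / (rho / E**0.5))   # sqrt(0.19), about 0.436
```

The tension material factor is unaffected by the voids, while the buckling factor improves by the square root of the area fraction, which is the quantitative content of the sentence above.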
The Tra is defined as the product of the length of the trajectory along which the force Fr (causing the collapse of the structure) is carried to the supports, by the intensity of this force; the Bic is the ratio of the mass of the structure to Tra. Since ρ* is the density of the material (in kg/m3), Bic = Vρ*/Tra, and α is, like W, a constant depending on the type of structure and the loading case; therefore, with the stress σ reached under Fr, and as V = WFrL/σ, Bic takes the form αρ*/σ. Unlike W, which is dimensionless, Bic is expressed in kg/Nm. Since Bic depends on the material, it does not allow different morphologies to be compared independently of the material. It is surprising to note that, despite the abundance of their works, none of these authors mention or make any effort to study W and its relationship with L/H. It appears that only V. Quintas Ripoll, W. Zalewski and St. Kus mentioned the volume indicator W, without examining it in depth. Validity limits of W and Δ In general, the second-order effects have very little influence on W, but they can have a significant impact on Δ. W and Δ therefore also depend on E/σ. The shearing force T may be crucial in the case of short continuous elements subject to bending, so that W does not fall below a given value regardless of the reduction of the slenderness L/H. However, this limitation is very theoretical because it is always possible to remove it by transferring material from the flanges to the web of the section, close to the supports. The stress σ to which the structure can be subjected depends on the nature, the internal geometry, the production method and the implementation of the materials, as well as several other factors, including the dimensional accuracy of the actual construction, the nature of the connections of the components or their fire resistance, but also the skill with which the geometry of the structure is designed to cope with elastic instability. Pierre Latteur, who discovered the buckling indicator, studied the influence of elastic instability on W and Δ. 
In this regard, it is important to note that the existence of anchoring points of an element in traction may reduce the apparent permissible stress to the same level as the reduction necessary to take into account a moderate level of elastic instability. The influence on W of the buckling of the compressed parts on one side, and of the anchoring points at the extremities of an element in traction on the other side, is analysed on pages 30 to 58 of the "reference book". The allowable stress σ is also often reduced by the need to limit the displacement δ of the structure, since it is not possible to significantly alter E for a given material. Considerations regarding fatigue, ductility and dynamic forces also limit the working stress. It is not always straightforward to establish the nature and the overall maximum intensity of the forces F (including dead weight) to which the structure is subject, which again has a direct influence on the working stress. The connections of an element in compression or in traction are considered as hinged. Any clamping, even partial, introduces parasitic forces which add extra weight to the structure. For certain types of structures, the volume of the connections adds to the net volume defined by W. Its importance depends on the nature of the material and the context in which it is used; this needs to be determined on a case-by-case basis. It follows that initially, only W and Δ should be taken into account for the morphological design of a structure, assuming that it is ultra-damped (i.e. its internal damping is greater than the critical damping), which makes it impervious to dynamic stress. The volume V of a structure is therefore directly proportional to the total intensity of the force F which is applied to it, to its length L and to the morphological factor W; it is inversely proportional to the stress σ to which it can be subjected. 
Furthermore, the weight of a structure is proportional to the density ρ of the material from which it is constructed. However, its maximum displacement δ remains proportional to the span L and the morphological factor Δ, as well as to the ratio between its working stress σ and its modulus of elasticity E. To limit the weight (or the volume) and the deformation of a structure for a given load F and span L, with all other aspects remaining unchanged, the work of the structural engineer therefore involves minimising W and ρ/σ on one side, and Δ and σ/E on the other. Accuracy of W and Δ Theoretical accuracy For the large majority of compressed elements, it is possible to limit the reduction of the working stress to 25% by taking into account elastic instability, providing that the designer focuses on ensuring an efficient geometric design from as early as the initial sketches. This means that the increase in their volume indicator can also be limited to 25%. The volume of elements subject to pure traction is also only very rarely limited to the product of the net distance over which the force is applied and a cross-section stressed to the permissible level. In other words, their real volume indicator is thus also higher than the one which results from the calculation of W. A bar under traction can be welded at its extremities; no extra material apart from the negligible welding material is added, but the rigidity introduces parasitic moments which absorb some of the permissible stress. The bar can be articulated at its extremities and work at its permissible stress, but this requires end sockets or attachment mechanisms whose volume is far from negligible, especially if the bar is short or highly stressed. As L.H.
Cox demonstrated, in this case it is worth considering n bars, each with a cross-section of Ω/n, strained by a force F/n, with 2n sockets, instead of one bar with a cross-section Ω strained by a force F with 2 sockets, since the total volume of the 2n sockets in the first case is much less than that of the 2 sockets in the second. The anchoring of the extremities of a bar under traction can also be ensured by adherence, as is usually the case for the rebars in elements made of reinforced concrete. In this specific case, it is necessary to have an anchoring length of at least 30 times the diameter of the bar. The bar then has a length L + 60H for a useful length L; its theoretical volume indicator W = 1 becomes W = 1 + 60H/L. Consequently, L/H must be greater than 240 (which is always theoretically possible) so that W does not increase by more than 25%. This observation also helps to show another reason for taking into account n bars with a cross-section Ω/n instead of one bar with a cross-section Ω. Finally, connections consisting of bolts, dowels, pins or nails, especially in the case of wooden components, significantly reduce the usable sections. For elements in traction, a reduction of 25% in the working stress, or an increase of 25% in the volume, is therefore also necessary in the majority of cases. Determining the volume and the displacement of a structure using the indicators W and Δ is therefore reliable theoretically, providing that: the working stress is reduced by at least 25%; great attention is paid to the design of compressed parts and connections. The overall proportions of a structure optimised without taking buckling into account are significantly altered when the compressed bars need to be shortened to take account of elastic instability. The structure becomes sensitive to the scale effect, which stretches its overall proportions and increases its weight.
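The anchoring-length argument above can be checked numerically. The formula W = 1 + 60H/L comes directly from the text; the slenderness values used below are only illustrative.

```python
def w_anchored_bar(L_over_H):
    """Volume indicator of a tension bar anchored by adherence over 30 diameters
    at each end: real length L + 60H, hence W = 1 + 60H/L = 1 + 60/(L/H)."""
    return 1.0 + 60.0 / L_over_H

assert w_anchored_bar(240) == 1.25    # L/H = 240 gives exactly the 25% increase
print(w_anchored_bar(120))            # stockier bar: 1.5, a 50% penalty
print(w_anchored_bar(480))            # slenderer bar: 1.125, only +12.5%
```

The penalty shrinks as the bar gets slenderer, which is the text's reason for requiring L/H > 240.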
Conversely, a shortening of the overall proportions is necessary when the volume of the connections needs to be considered, since the influence of this volume reduces when the bars are lengthened. This shows the advantage of accurately designing not only the compressed parts, but also the connections, in order to avoid these flaws. Niki de Saint-Phalle's light sculptures are therefore preferable to Giacometti's slender but heavy figures! Practical accuracy The volume of the material of the structure, as determined using W, can only be obtained accurately if the theoretical values of the relevant characteristic of the sections under strain σ can be achieved in practice. As shown in Figure 1 above, this characteristic is: Ω for an element under pure compression without buckling; I for an element under pure compression with buckling (as well as for deformation under pure bending); I/H for an element under simple bending. It is always possible to obtain the precise value of these characteristics when the parts are made of moulded materials, such as reinforced concrete, or squared-off materials, such as wood or stone. However, this is not the case for laminated or extruded materials produced on an industrial production line, such as steel or aluminium. It is therefore important to produce these elements with the smallest possible difference in size between two successive sizes, in order to avoid an unnecessary use of material. This use is consistent when the relative deviation c between two successive values kn and kn+1 is constant, thus (kn+1 − kn)/kn = c, i.e. kn+1 = (c + 1)·kn, or kn = (c + 1)^n·k0. This is the principle of the geometric series known as the Renard series (named after Colonel Renard, who was the first to use them in calculating the diameter of cabling on airships), featured in the French standard NF X01-002. When all the necessary values are only very slightly greater than a series value, c represents the maximum increase and c/2 the average increase of W.
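As a sketch of such a geometric series, the snippet below generates k_n = k0·(c + 1)^n. Interpreting a Renard R10 series as 10 steps per decade (so c + 1 = 10^(1/10)) follows the usual preferred-number convention; the concrete values are only illustrative.

```python
def renard_series(k0, steps_per_decade, n_terms):
    """Geometric series k_n = k0 * ratio**n with ratio = c + 1 = 10**(1/steps_per_decade)."""
    ratio = 10 ** (1.0 / steps_per_decade)
    return [k0 * ratio ** n for n in range(n_terms)]

r10 = renard_series(1.0, 10, 11)       # one full decade of the R10 series
c = r10[1] / r10[0] - 1                # constant relative deviation, ~0.259
print([round(k, 2) for k in r10])
print(round(c, 3), round(c / 2, 3))    # max (~25.9%) and average (~12.9%) increase of W
```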
Being universally used, steel profiles require an in-depth examination (see the "reference book", pages 26 to 29). The use of industrial steel profiles automatically leads to a significant increase of W: by half of the theoretical inaccuracy for pure compression; by practically the full theoretical inaccuracy for bending, or for compression subject to buckling. This situation is magnified when the number of profiles available is restricted, which may explain the use of forms which are not theoretically optimal but which tend to subject the available profiles to the permissible stress σ (such as pylons for high-voltage electric lines or variable-height truss bridges). For structures subject to pure bending, this also explains the use of flat plates of variable lengths added to the flanges of these I profiles to obtain the required inertia or resisting moment with the greatest degree of accuracy. Conversely, the significant variety of available tubes enables a relative deviation value c which is both smaller and more constant. They also cover a much wider range in both the lower and higher characteristic values. Since their geometric performance is practically identical to that of the I profiles, tubes are the most appropriate industrial solution for practically eliminating any increase in the volume indicator W. Nevertheless, practical issues of availability and corrosion may limit their use. Some examples of W and Δ The following figures show the values of the indicators according to the ratio L/H for a number of types of structures. Figures 2 and 3: W and Δ for a horizontal isostatic span under a uniformly distributed vertical load, made up of: profiles with a constant cross-section, from I-section to solid cylinder; different types of trusses; parabolic arches with or without hangers or small columns, with constant or variable cross-sections.
Figure 4: W for the transfer to two equidistant supports, on the horizontal, of a vertical point load (in this case Δ = W) or of an evenly distributed load F = 1. Figures 5 and 6: W for a vertical mast of constant width, subject to a horizontal load which is either evenly distributed along its height or concentrated at the top. Figure 7: W for a membrane of revolution on a vertical axis, with a constant or variable thickness, under an evenly distributed vertical load. It is surprising to note that the minimum value is reached for a conical dome of variable thickness with an opening angle of 90° (L/H = 2; W = 0.5!). Developments Applications discussed in the « reference book » are: trusses, straight continuous beams, arches, cables and guyed structures, masts, gantries, membranes of revolution. Some examples of composite structures with minimum W W can easily be determined in order to optimise structures made up of a number of different construction elements (see « reference book », pages 100–106), as shown, for instance, for the wind turbine in Figure 8, or for a parabolic roof coupled with large vertical glazed gables subject to wind loads, as seen at Leuven station in Belgium and shown in Figure 9 (see reference for a detailed analysis). The optimisation of the King Cross truss for the facade of the Europa building in Brussels (see reference, pages 93–101, for a detailed analysis) is another example. See also Architectural engineering Notes and references Structural engineering
Volume and displacement indicators for an architectural structure
Engineering
4,336
30,871,279
https://en.wikipedia.org/wiki/Recoil%20operation
Recoil operation is an operating mechanism used to implement locked-breech autoloading firearms. Recoil-operated firearms use the energy of recoil to cycle the action, as opposed to gas operation or blowback operation, which use the pressure of the propellant gas. History The earliest mention of recoil being used to assist the loading of firearms is sometimes claimed to be in 1663, when an Englishman called Palmer proposed to employ either recoil or gases tapped along the barrel for this purpose. However, no one has been able to verify this claim in recent times; another automatic gun dates from the same year, but its type and method of operation are unknown. Recoil operation, if it was indeed invented in 1663, then lay dormant until the 19th century, when a number of inventors started to patent designs featuring it; this was because the integrated disposable cartridge (both bullet and propellant in one easily interchangeable unit) made such designs viable. The earliest mention of recoil operation in the British patent literature is a patent by Joseph Whitworth filed in 1855, which proposed to use recoil to partially open the breech of a rifle, the breech then being manually pulled the rest of the way back by hand. Around this time, an American by the name of Regulus Pilon is sometimes stated to have patented in Britain a gun that used a limited form of recoil operation. He had three British patents related to firearms around the 1850s to the 1860s; however, all of them refer to a means of dampening recoil in firearms, which was not a new idea at the time, rather than true recoil operation. The next mention of recoil operation in the British patent literature is by Alexander Blakely in 1862, who clearly describes using the recoil of a fired cannon to open the breech.
In 1864, after the Second Schleswig War, Denmark started a program intended to develop a gun that used the recoil of a fired shot to reload the firearm, though a working model would not be produced until 1888. Later, in the 1870s, a Swedish captain called D. H. Friberg patented a design which introduced both flapper-locking and the fully automatic recoil-operated machine gun. Furthermore, in 1875 a means of cocking a rifle through recoil was patented through the patent agent Frank Wirth by a German called Otto Emmerich. Finally came Maxim's 1883 automatic recoil-operated machine gun, which introduced the modern age of automatic machine guns. Design The same forces that cause the ejecta of a firearm (the projectile(s), propellant gas, wad, sabot, etc.) to move down the barrel also cause all or a portion of the firearm to move in the opposite direction. Conservation of momentum requires that the ejecta momentum and the recoiling momentum be equal. These momenta are calculated by: Ejecta mass × ejecta velocity = recoiling mass × recoil velocity. In recoil-operated firearms, the barrel is a moving part of the action. In non-recoil-operated firearms, it is generally the entire firearm that recoils. In recoil-operated firearms, however, only a portion of the firearm recoils while inertia holds another portion motionless relative to a mass such as the ground, a ship's gun mount, or a human holding the firearm. The moving and the motionless masses are coupled by a spring that absorbs the recoil energy as it is compressed by the movement and then expands, providing energy for the rest of the operating cycle. Since there is a minimum momentum required to operate a recoil-operated firearm's action, the cartridge must generate sufficient recoil to provide that momentum. Therefore, recoil-operated firearms work best with a cartridge that yields a momentum approximately equal to that for which the mechanism was optimized.
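The momentum balance above can be sketched directly. The function is just the conservation law rearranged for recoil velocity; the pistol-like numbers are illustrative assumptions, and a fuller model would add the propellant gas to the ejecta.

```python
def recoil_velocity(ejecta_mass_kg, ejecta_velocity_ms, recoiling_mass_kg):
    """Solve ejecta_mass * ejecta_velocity = recoiling_mass * recoil_velocity
    for the velocity of the recoiling parts (m/s)."""
    return ejecta_mass_kg * ejecta_velocity_ms / recoiling_mass_kg

# Assumed figures: an 8 g bullet at 360 m/s driving 0.35 kg of recoiling parts.
v = recoil_velocity(8.0e-3, 360.0, 0.35)
print(round(v, 2))   # ~8.23 m/s imparted to the recoiling group
```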
For example, the M1911 design with factory springs is optimized for a bullet at factory velocity. Changes in caliber or drastic changes in bullet weight and/or velocity require modifications to spring weight or slide mass to compensate. Similarly the use of blank ammunition will typically cause the mechanism not to work correctly, unless a device is fitted to boost the recoil. Categories Recoil-operated designs are broadly categorized by how the parts move under recoil. Long recoil Long recoil operation is found primarily in shotguns, particularly ones based on John Browning's Auto-5 action. In 1885 a locked breech, long recoil action was patented by the Britons Schlund and Arthur. In a long recoil action, the barrel and bolt remain locked together during recoil, compressing the recoil springs. Following this rearward movement, the bolt locks to the rear and the barrel is forced forward by its spring. The bolt is held in position until the barrel returns completely forward during which time the spent cartridge has been extracted and ejected, and a new shell has been positioned from the magazine. The bolt is released and forced closed by its recoil spring, chambering a fresh round. The long recoil system was invented in the late 19th century and dominated the automatic shotgun market for more than half that century before it was supplanted by new gas-operated designs. While Browning halted production of the Auto-5 design in 1999, Franchi still makes a long-recoil–operated shotgun line, the AL-48, which shares both the original Browning action design, and the "humpbacked" appearance of the original Auto-5. Other weapons based on the Browning system were the Remington Model 8 semi-automatic rifle (1906), the Remington Model 11 & "The Sportsman" model (a model 11 with only a two-shell magazine) shotguns, the Frommer Stop line of pistols (1907), and the Chauchat automatic rifle (1915). Cycle diagram explanation Ready to fire position. 
Bolt is locked to barrel, both are fully forward. Recoil of firing forces bolt and barrel fully to the rear, compressing the return springs for both. Bolt is held to rear, while barrel unlocks and returns to battery under spring force. Fired round is ejected. Bolt returns under spring force, loads new round. Barrel locks in place as it returns to battery. Short recoil The short recoil action dominates the world of centerfire semi-automatic pistols, being found in nearly all weapons chambered for high-pressure pistol cartridges of 9×19mm Parabellum and larger, while low-pressure pistol cartridges of .380 ACP and smaller generally use the blowback method of operation. Short recoil operation differs from long recoil operation in that the barrel and bolt recoil together only a short distance before they unlock and separate. The barrel stops quickly, and the bolt continues rearward, compressing the recoil spring and performing the automated extraction and feeding process. During the last portion of its forward travel, the bolt locks into the barrel and pushes the barrel back into battery. The method of locking and unlocking the barrel differentiates the wide array of short recoil designs. Most common are the John Browning tilting barrel designs based on either the swinging link and locking lugs as used in the M1911 pistol or the linkless cam design used in the Hi Power and CZ 75. Other designs are the locking block design found in the Walther P38 and Beretta 92, rollers in the MG42, or a rotating barrel used in the Beretta 8000 and others. An unusual variant is the toggle bolt design of the Borchardt C-93 and its descendant, the Luger pistol. While the short recoil design is most common in pistols, the very first short-recoil–operated firearm was also the first machine gun, the Maxim gun. It used a toggle bolt similar to the one Borchardt later adapted to pistols. 
Vladimirov also used the short recoil principle in the Soviet KPV-14.5 heavy machine gun, which has been in service with the Russian military and Middle Eastern armed forces since 1949. Melvin Johnson also used the short recoil principle in his M1941 Johnson machine gun and M1941 rifle; other rifles using short recoil are the LWRCI SMG 45 and LoneStar Future Weapons RM-277R. Cycle diagram explanation Ready to fire position. Bolt is locked to barrel, both are fully forward. Upon firing, bolt and barrel recoil backwards a short distance while locked together. Near the end of the barrel travel, the bolt and barrel unlock. The barrel stops, but the unlocked bolt continues to move to the rear, ejecting the empty shell and compressing the recoil spring. The bolt returns forward under spring force, loading a new round into the barrel. Bolt locks into barrel, and forces barrel to return to battery. Inertia An alternative design concept for recoil-operated firearms is the inertia-operated system, the first practical use of it being the Sjögren shotgun, developed in the early 1900s by Carl Axel Theodor Sjögren, a Swedish engineer who was awarded a number of patents for his inertia-operated design between 1900 and 1908 and sold about 5,000 automatic shotguns using the system in 1908–1909. In a reversal of the other designs, some inertia systems use nearly the entire firearm as the recoiling component, with only the bolt remaining stationary during firing.
Because of this, the inertia system is only applied to heavily recoiling firearms, particularly shotguns. A similar system using inertia operation was then developed by Paolo Benelli in the early 1980s and patented in 1986. With the exception of Sjögren's shotguns and rifles in the early 1900s, all inertia-operated firearms made until 2012 were either made by Benelli or used a design licensed from Benelli, such as the Franchi Affinity. Then the Browning Arms Company introduced the inertia-operated A5 (trademarked as Kinematic Drive) as successor to the long-recoil operated Auto-5. Both the Benelli and Browning systems are based on a rotating locking bolt, similar to that used in many gas-operated firearms. Before firing, the bolt body is separated from the locked bolt head by a stiff spring. As the shotgun recoils after firing, inertia of the bolt body is large enough for it to remain stationary while the recoiling gun and locked bolt head move rearward. This movement compresses the spring between the bolt head and bolt body, storing the energy required to cycle the action. Since the spring can only be compressed a certain amount, this limits the amount of force the spring can absorb, and provides an inherent level of self-regulation to the action, allowing a wide range of shotshells to be used, from standard to magnum loads, as long as they provide the minimum recoil level to compress the spring. Note that the shotgun must be free to recoil for this to work—the compressibility of the shooter's body is sufficient to allow this movement, but firing the shotgun from a secure position in a rest or with the stock against the ground will not allow it to recoil sufficiently to operate the mechanism. Likewise, care must be exercised when modifying weapons of this type (e.g. addition of extended magazines or ammunition storage on the stock), as any sizable increase in weapon mass can reduce the work done from recoil below that required to cycle the action. 
As the recoil spring returns to its uncompressed state, it pushes the bolt body backward with sufficient force to cycle the action. The bolt body unlocks and retracts the bolt head, extracts and ejects the cartridge, cocks the hammer, and compresses the return spring. Once the bolt reaches the end of its travel, the return spring provides the force to chamber the next round from the magazine, and lock the bolt closed. Cycle diagram explanation Ready to fire position. Bolt is locked to barrel, both are fully forward. Upon firing, the firearm recoils backwards into the shooter's body. The inertial mass remains stationary, compressing a spring. The bolt remains locked to the barrel, which in turn is rigidly attached to the frame. The compressed spring forces the inertial mass rearwards until it transfers its momentum to the bolt. The bolt unlocks and moves to the rear, ejecting the fired round and compressing the return spring. The bolt returns to battery under spring force, loading a new round and locking into place. The shooter recovers from the shot, moving the firearm forward into position for the next shot. Muzzle booster Some short-recoil–operated firearms, such as the German MG 42 and MG 3, use a mechanism at the muzzle to extract some energy from the escaping powder gases to push the barrel backwards, in addition to the recoil energy. This boost provides higher rates of fire and/or more reliable operation. This type of mechanism is also found in some suppressors used on short recoil firearms, under the name gas assist or Nielsen device'', where it is used to compensate for the extra mass the suppressor adds to the recoiling parts both by providing a boost and decoupling some of the suppressor's mass from the firearm's recoiling parts. 
Muzzle boosters are also used on some recoil-operated firearms' blank-firing attachments to normalize the recoil force of a blank round (with no projectile) with the greater force of a live round, in order to allow the mechanism to cycle properly. Automatic revolvers Several revolvers use recoil to cock the hammer and advance the cylinder. In these designs, the barrel and cylinder are affixed to an upper frame which recoils atop a sub-frame. As the upper receiver recoils, the cylinder is advanced and hammer cocked, functions that are usually done manually. Notable examples are the Webley–Fosbery and Mateba. Other autoloading systems Other autoloading systems are: Delayed blowback firearms uses an operation that delays the bolt opening until the gas pressure is at a safe level to extract. Blow forward firearms lack the use of a bolt but instead a moving barrel that gets dragged forward by the bullet until it leaves the barrel to cycle its action. Blowback firearms use the expanding gas impinging on the cartridge itself to push the bolt of the firearm rearward. Gas-operated firearms tap off a small amount of the expanding gas to power the moving parts of the action. See also Bump stock Slamfire References Bibliography External links How Does it Work: Long Recoil Forgotten Weapons How Does it Work: Short Recoil Forgotten Weapons Recoil operation, Animations and explanations of (short) recoil operation principle at howstuffworks.com 1660s introductions 1663 beginnings Firearm actions Artillery components
Recoil operation
Technology
3,159
12,059,291
https://en.wikipedia.org/wiki/Homologation%20reaction
In organic chemistry, a homologation reaction, also known as homologization, is any chemical reaction that converts the reactant into the next member of the homologous series. A homologous series is a group of compounds that differ by a constant unit, generally a methylene (CH2) group. The reactants undergo a homologation when the number of a repeated structural unit in the molecules is increased. The most common homologation reactions increase the number of methylene (CH2) units in a saturated chain within the molecule. For example, the reaction of aldehydes or ketones with diazomethane or methoxymethylenetriphenylphosphine gives the next homologue in the series. Examples of homologation reactions include: Kiliani–Fischer synthesis, where an aldose molecule is elongated through a three-step process consisting of: nucleophilic addition of cyanide to the carbonyl to form a cyanohydrin; hydrolysis to form a lactone; reduction to form the homologous aldose. Wittig reaction of an aldehyde with methoxymethylenetriphenylphosphine, which produces a homologous aldehyde. Arndt–Eistert reaction, a series of chemical reactions designed to convert a carboxylic acid to a higher carboxylic acid homologue (i.e. one containing one additional carbon atom). Kowalski ester homologation, an alternative to the Arndt–Eistert synthesis; it has been used to convert α-amino esters to β-amino esters through an ynolate intermediate. Seyferth–Gilbert homologation, in which an aldehyde is converted to a terminal alkyne and then hydrolyzed back to an aldehyde. Some reactions increase the chain length by more than one unit. For example, the DeMayo reaction can be considered a two-carbon homologation reaction. Chain reduction Likewise, the chain length can also be reduced: in the Gallagher–Hollander degradation (1946), pyruvic acid is removed from a linear aliphatic carboxylic acid, yielding a new acid with two carbon atoms less.
The original publication concerns the conversion of a bile acid in a series of reactions: acid chloride formation (2) with thionyl chloride, diazoketone formation (3) with diazomethane, chloromethyl ketone formation (4) with hydrochloric acid, organic reduction of chlorine to the methyl ketone (5), ketone halogenation to 6, elimination reaction with pyridine to enone 7 and finally oxidation with chromium trioxide to bisnorcholanic acid 8. In the Hooker reaction (1936), an alkyl chain in a certain naphthoquinone (a phenomenon first observed in the compound lapachol) is shortened by one methylene unit, lost as carbon dioxide, in each potassium permanganate oxidation. Mechanistically, oxidation causes ring-cleavage at the alkene group and extrusion of carbon dioxide in a decarboxylation, with subsequent ring-closure. See also Homologous series References Carbon-carbon bond forming reactions
Homologation reaction
Chemistry
658
43,349,052
https://en.wikipedia.org/wiki/Vixotrigine
Vixotrigine (, ), formerly known as raxatrigine (, ), is an analgesic which is under development by Convergence Pharmaceuticals for the treatment of lumbosacral radiculopathy (sciatica) and trigeminal neuralgia (TGN). Vixotrigine was originally claimed to be a selective central Nav1.3 blocker, but was subsequently redefined as a selective peripheral Nav1.7 blocker. Following this, vixotrigine was redefined once again, as a non-selective voltage-gated sodium channel blocker. As of January 2018, it is in phase III clinical trials for trigeminal neuralgia and is in phase II clinical studies for erythromelalgia and neuropathic pain. It was previously under investigation for the treatment of bipolar disorder, but development for this indication was discontinued. See also List of investigational analgesics References External links Vixotrigine - AdisInsight Analgesics Carboxamides Enantiopure drugs Pyrrolidines Sodium channel blockers
Vixotrigine
Chemistry
237
29,762,256
https://en.wikipedia.org/wiki/Psilocybe%20gallaeciae
Psilocybe gallaeciae is a species of psilocybin mushroom in the family Hymenogastraceae. It was found in Spain in 1997, and published as a new species in 2003. See also List of Psilocybe species References External links gallaeciae Fungi described in 2003 Fungi of Europe Taxa named by Gastón Guzmán Fungus species
Psilocybe gallaeciae
Biology
76
701,200
https://en.wikipedia.org/wiki/Landau%20Institute%20for%20Theoretical%20Physics
The L. D. Landau Institute for Theoretical Physics () of the Russian Academy of Sciences is a research institution located in the small town of Chernogolovka near Moscow (there is also a subdivision in Moscow, on the territory of the P. L. Kapitza Institute for Physical Problems). History The Landau Institute was formed in 1964 to keep the Landau school alive after the tragic car accident of Lev D. Landau. Since its foundation, the institute grew rapidly to about one hundred scientists, becoming one of the world's best-known and leading institutes for theoretical physics. Unlike many other scientific centers in Russia, the Landau Institute had the strength to cope with the crisis of the 1990s. Although about one half of its scientists accepted positions at leading scientific centers and universities abroad, most of them kept ties with their home institute, forming a scientific network in the tradition of the Landau school and supporting young theoretical physicists at the Landau Institute. Prominent members Up to 1992, the institute was headed by Isaak Markovich Khalatnikov, who was then succeeded by Vladimir E. Zakharov. Its numerous prominent scientists, mathematicians as well as physicists, include the Nobel laureate Alexei Alexeyevich Abrikosov as well as Igor Dzyaloshinsky, Lev Gor'kov, Vladimir Gribov, Arkady Migdal, Alexander Migdal, Anatoly Larkin, Sergei Novikov, Alexander Polyakov, Mark Azbel, Valery Pokrovsky, Emmanuel Rashba, Sergey Iordanskii, Ioshua Levinson, Alexei Starobinsky, Alexei Kitaev, Vadim Berezinskii (whose early death prevented him from sharing the Nobel Prize for the Berezinskii–Kosterlitz–Thouless transition theory), Serguei Brazovskii, Konstantin Efetov, David Khmel'nitsky, Vladimir Mineev, Grigory Volovik, Paul Wiegmann, Leonid Levitov, Alexander Zamolodchikov, Vadim Knizhnik, Konstantin Khanin, and Yakov G. Sinai.
Fields of Research The main fields of research are: Mathematical physics Computational physics Nonlinear dynamics Condensed matter theory Nuclear and elementary particle physics Quantum field theory See also Institute for Theoretical Physics (disambiguation) Center for Theoretical Physics (disambiguation) References External links Institute's web site (English version) Talk by Vinay Ambegaokar on the history of the Landau school Further reading Isaak Markovich Khalatnikov and Vladimir P. Mineev (eds.), 30 years of the Landau Institute- selected papers (World Scientific, 1996) 1965 establishments in the Soviet Union Institutes of the Russian Academy of Sciences Research institutes in the Soviet Union Physics research institutes Nuclear research institutes in Russia Theoretical physics institutes
Landau Institute for Theoretical Physics
Physics
581
41,547,928
https://en.wikipedia.org/wiki/Phendioxan
Phendioxan is an α1-adrenergic receptor antagonist. See also WB-4101 References Alpha-1 blockers Amines Dioxanes Heterocyclic compounds with 2 rings Methoxy compounds Oxygen heterocycles
Phendioxan
Chemistry
55
11,112,810
https://en.wikipedia.org/wiki/NGC%201260
NGC 1260 is a spiral or lenticular galaxy located 250 million light-years from Earth in the constellation Perseus. It was discovered by astronomer Guillaume Bigourdan on 19 October 1884. NGC 1260 is a member of the Perseus Cluster and forms a tight pair with the galaxy PGC 12230. This galaxy is dominated by a population of many old stars. In 2006, it was home to supernova SN 2006gy, one of the brightest supernovae ever observed; at the time of its discovery it was the most energetic and brightest supernova on record. References External links Brightest object found in NGC 1260 (Space.com : 7 May 2007) http://www.solstation.com/x-objects/sn2006gy.htm Spiral galaxies Perseus (constellation) 1260 12219 02634 Astronomical objects discovered in 1884 Perseus Cluster Lenticular galaxies
NGC 1260
Astronomy
189
32,047,031
https://en.wikipedia.org/wiki/Hanwha%20Group
Hanwha Group () is a large business conglomerate (chaebol) in South Korea. Founded in 1952 as Korea Explosives Co. (), the group has grown into a large multi-profile business conglomerate, with diversified holdings stretching from explosives (their original business) to energy, materials, aerospace, mechatronics, finance, retail, and lifestyle services. In 1992, the company adopted its abbreviation as its new name: "Hanwha". History 1952–1999 Kim Chong-hee founded Korea Explosives Co. in October 1952. Prior to founding the company, Kim worked as a gunpowder engineer for the "Chosun Explosives Factory", a Japanese company. Later, he won the bid for the company and its Incheon factory and started the company there. From 1952 to 1963, the Korea Explosives Co. produced industrial explosives domestically, which were needed for the construction and engineering of infrastructure. In the same period, the Korea Explosives Co. began producing nitroglycerin, which gave it a monopoly in the field of explosives and gunpowder. In 1959, the Hanwha group began producing dynamite domestically. From 1964 to 1980, the Hanwha group made investments in various fields, laying the foundation for its growth into a chaebol. Later in the mid-1960s, Korea Hwasung Industrial Co. was founded (now Hanwha Solutions), and entered the petrochemical market. Hanwha increased its competitiveness in the machinery market by acquiring Shinhan Bearing Industrial. Hanwha founded Kyung-in Energy in 1969, and Hankook Precision Tools (now Hanwha Corporation/Momentum) followed suit, being founded in 1971. From 1981 to 1995, under Kim Seung-youn, the second chairman of the company, more investments in diverse markets were initiated.
It expanded further into the chemical industry by acquiring both Hanyang Chemicals (now also Hanwha Solutions) and Dow Chemicals Korea in 1982, expanded into the resorts industry by acquiring the Junga group (now Hanwha Hotels & Resorts) in 1985, and expanded into the leisure and distribution industry by acquiring Hanyang Stores (now Hanwha Galleria) in 1986. In the 1990s, the Hanwha Group founded Hanwha BASF Urethane, Hanwha NSK Precision, Hanwha GKN, Hanwha Machinery Hub Eye Bearings, SKF Hanwha Auto Parts, and Hanwha Motors. In 1992, the Korea Explosives Co. changed its name to Hanwha, and Binggrae was separated and made independent from the company. 2000–Present In 2002, Hanwha expanded into the life insurance industry by acquiring Korea Life Insurance. Since 2007, Hanwha has been undergoing global expansion. Hanwha acquired Azdel, an American company, in 2007 and established a PVC plant in Ningbo, Zhejiang, China in 2011. Hanwha Q CELLS was launched in 2012. In 2014, Hanwha acquired Samsung Techwin, Samsung Thales, and Samsung Total. Since 2019, Hanwha has operated the largest solar module plant (1.7 GW) in the United States, located in Dalton, Georgia. As of 2019, Hanwha has a total of 466 affiliates, 84 domestic and 382 overseas. Hanwha completed its takeover of Daewoo Shipbuilding & Marine Engineering (DSME), renaming it Hanwha Ocean in 2023. Controversy In 2011, Kim Seung-yeon, the current chairman of the Hanwha Group, was fined 5.1 billion KRW and jailed for 4 years on charges of embezzlement and breach of trust.
Key networks R&D: Germany, Malaysia, USA, China, and South Korea Manufacturing: Germany, Czech Republic, Malaysia, China, South Korea, Iraq and USA Marketing & Sales: Australia, Canada, Japan, USA, China, South Korea, and Iraq Business areas Energy & Materials Solar Energy In a meeting held on May 21, 2022, Kim Dong-kwan, the president of Hanwha Solutions, laid out a plan to spend 36.7 trillion KRW on its energy and aerospace sectors. He said he wanted to commit the company to solar energy to reduce its carbon footprint and supply high-quality energy. Additionally, he announced plans to build another modular factory in the United States. To tackle the problem of stagnating bee populations, Hanwha has created a solar beehive that helps protect and maintain a stable bee population. In 2022, Hanwha Qcells launched Hanwha Motiev, a new brand venturing into the electric vehicle charging market. Wind Energy Hanwha operates in the onshore and offshore wind power generation sectors, with a focus on Europe through its subsidiary Q Energy, as well as the domestic market. Hydrogen Hanwha develops a hydrogen value chain, which includes production, utilization, and storage solutions, utilizing renewable energy sources. Hanwha acquired Thomassen Energy/PSM in July 2021, which enables retrofitting of gas turbines for hydrogen use. Materials Hanwha Solutions produces basic petrochemical products, such as PVC (Polyvinyl Chloride), LLDPE (Linear Low-Density Polyethylene), CA (Caustic Soda), ASR (Alkali Water Soluble Resin), and TDI (Toluene Diisocyanate), and works on eco-friendly technologies including low-impact plasticizers. Aerospace Space Hanwha is also investing extensively in the space market, and jointly established the "Space Research Center" with KAIST. Hanwha Aerospace contributes to South Korea's space industry, including satellite observation services and the development of the Nuri space launch vehicle.
As of 2022, Hanwha holds an 8.8% stake in OneWeb, a British satellite communication service provider. Aircraft Engine Hanwha Aerospace is South Korea's sole aircraft engine producer, partnering with global aviation engine companies for component manufacturing. Urban Air Mobility (UAM) Hanwha Systems collaborates with Overair, a U.S. eVTOL startup, to develop the ‘Butterfly’ personal air vehicle. Vision Solutions Hanwha Vision (formerly Samsung Techwin, before its acquisition by Hanwha) delivers video surveillance solutions with a focus on optics design, image processing, and AI-based platforms to improve service offerings. Finance Hanwha operates in the financial industry, providing insurance, securities, and asset-management services, and investing in technology-based financial solutions. Retail & Services Hanwha engages in retail and services such as department stores, hotels, and resorts, as well as developing large-scale complexes. Affiliates Hanwha Group's business spans five areas: defence, technologies, energy, finance, and services. On 23 May 2023, Hanwha acquired Daewoo Shipbuilding & Marine Engineering and renamed it Hanwha Ocean.
Aerospace·Defence·Maritime Hanwha Corporation (Holding company) Hanwha Aerospace Hanwha Ocean Precision Machinery Hanwha Systems Hanwha Vision Hanwha Robotics Energy Hanwha Advanced Materials Hanwha Energy Hanwha Impact Hanwha Power Systems Hanwha Qcells Hanwha Solutions Hanwha TotalEnergies Petrochemical Yeochun NCC Finance Carrot Insurance Hanwha Asset Management Hanwha General Insurance Hanwha Investment & Securities Hanwha Life Insurance Hanwha Life Financial Service Hanwha Saving Bank Distribution·Hospitality Service Hanwha Galleria Hanwha Hotels & Resorts Hanwha Foodtech Hanwha Connect Awards Old Tower Industrial Medal, November 1982 $100 Million Export Tower Award, November 1982 Silver Tower Industrial Medal and the $1 Billion Export Tower Award, November 1998 Gold Tower Industrial Medal (45th Annual Trade Day Ceremony), December 2008 $2 Billion Export Tower Award (45th Annual Trade Day Ceremony), December 2008 Sports teams Hanwha Eagles Hanwha Life Esports See also List of Korean companies Economy of South Korea Aqua Planet (aquarium) Binggrae Notes References External links Hanwha Chaebol Conglomerate companies established in 1952 Construction and civil engineering companies of South Korea Electrical engineering companies Engineering companies of South Korea Defence companies of South Korea Explosives manufacturers Holding companies of South Korea Multinational companies headquartered in South Korea Energy companies of South Korea Companies based in Seoul Companies listed on the Korea Exchange Companies in the KOSPI 200 South Korean brands Construction and civil engineering companies established in 1952 1952 establishments in South Korea South Korean companies established in 1952 Manufacturing companies established in 1952
Hanwha Group
Engineering
1,756
29,459,180
https://en.wikipedia.org/wiki/Stellar%20collision
A stellar collision is the coming together of two stars caused by stellar dynamics within a star cluster, or by the orbital decay of a binary star due to stellar mass loss or gravitational radiation, or by other mechanisms not yet well understood. Any stars in the universe can collide, whether they are "alive", meaning fusion is still active in the star, or "dead", with fusion no longer taking place. White dwarf stars, neutron stars, black holes, main sequence stars, giant stars, and supergiants are very different in type, mass, temperature, and radius, and accordingly produce different types of collisions and remnants. Types of stellar collisions and mergers Binary star mergers About half of all the stars in the sky are part of binary systems, with two stars orbiting each other. Some binary stars orbit each other so closely that they share the same atmosphere, giving the system a peanut shape. While most such contact binary systems are stable, some do become unstable and either eject one partner or eventually merge. Astronomers predict that events of this type occur in the globular clusters of our galaxy about once every 10,000 years. On 2 September 2008 scientists first observed a stellar merger in Scorpius (named V1309 Scorpii), though it was not known to be the result of a stellar merger at the time. Type Ia supernovae White dwarfs are the remnants of low-mass stars which, if they form a binary system with another star, can cause large stellar explosions known as type Ia supernovae. The normal route by which this happens involves a white dwarf drawing material off a main sequence or red giant star to form an accretion disc. Much more rarely, a type Ia supernova occurs when two white dwarfs orbit each other closely. Emission of gravitational waves causes the pair to spiral inward. When they finally merge, if their combined mass approaches or exceeds the Chandrasekhar limit, carbon fusion is ignited, raising the temperature. 
Since a white dwarf consists of degenerate matter, there is no stable equilibrium between thermal pressure and the weight of overlying layers of the star. Because of this, runaway fusion reactions rapidly heat up the interior of the combined star and spread, causing a supernova explosion. In a matter of seconds, all of the white dwarf's mass is thrown into space. Neutron star mergers Neutron star mergers occur in a fashion similar to the rare type Ia supernovae resulting from merging white dwarfs. When two neutron stars orbit each other closely, they spiral inward as time passes due to gravitational radiation. When they meet, their merger leads to the formation of either a heavier neutron star or a black hole, depending on whether the mass of the remnant exceeds the Tolman–Oppenheimer–Volkoff limit. This creates a magnetic field that is trillions of times stronger than that of Earth, in a matter of one or two milliseconds. Astronomers believe that this type of event is what creates short gamma-ray bursts and kilonovae. A gravitational wave event that occurred on 17 August 2017, GW170817, was reported on 16 October 2017 to be associated with the merger of two neutron stars in a distant galaxy, the first such merger to be observed via gravitational radiation. Thorne–Żytkow objects If a neutron star collides with a red giant of sufficiently low mass and density, the merger is conjectured to produce a Thorne–Żytkow object, a hypothetical type of compact star containing a neutron star enveloped by a red giant. Formation of planets When two low-mass stars in a binary system merge, mass may be thrown off in the orbital plane of the merging stars, creating an excretion disk from which new planets can form. Discovery While the concept of stellar collision has been around for several generations of astronomers, only the development of new technology has made it possible for it to be more objectively studied.
For example, in 1764, a cluster of stars known as Messier 30 was discovered by astronomer Charles Messier. In the twentieth century, astronomers concluded that the cluster was approximately 13 billion years old. The Hubble Space Telescope resolved the individual stars of Messier 30. With this new technology, astronomers discovered that some stars, known as blue stragglers, appeared younger than other stars in the cluster. Astronomers then hypothesized that stars may have "collided", or "merged", giving them more fuel so they continued fusion while fellow stars around them started going out. Stellar collisions and the Solar System While stellar collisions may occur very frequently in certain parts of the galaxy, the likelihood of a collision involving the Sun is very small. A probability calculation predicts the rate of stellar collisions involving the Sun is 1 in 10²⁸ years. For comparison, the age of the universe is of the order 10¹⁰ years. The likelihood of close encounters with the Sun is also small. The rate is estimated by the formula: N ≈ 4.2 · D² Myr⁻¹ where N is the number of encounters per million years that come within a radius D of the Sun in parsecs. For comparison, the mean radius of the Earth's orbit, 1 AU, is . Our star will likely not be directly affected by such an event because there are no stellar clusters close enough to cause such interactions. KIC 9832227 and binary star mergers An analysis of the eclipses of KIC 9832227 initially suggested that its orbital period was indeed shortening, and that the cores of the two stars would merge in 2022. However, subsequent reanalysis found that one of the datasets used in the initial prediction contained a 12-hour timing error, leading to a spurious apparent shortening of the stars' orbital period. The mechanism behind binary star mergers is not yet fully understood, and remains one of the main focuses of those researching KIC 9832227 and other contact binaries.
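The close-encounter formula above can be evaluated directly. As a minimal sketch (illustrative only; the function name and the rounded parsec-to-AU conversion are ours, not from the source), in Python:

```python
# Illustrative sketch: evaluates the close-encounter rate formula quoted
# above, N ≈ 4.2 · D² per million years, where D is the encounter radius
# around the Sun in parsecs.

def encounters_per_myr(radius_pc: float) -> float:
    """Expected number of stellar passes within radius_pc of the Sun per Myr."""
    return 4.2 * radius_pc ** 2

AU_PER_PC = 206_265  # astronomical units in one parsec (rounded)

# Passes within 1 pc happen a few times per million years...
rate_1pc = encounters_per_myr(1.0)  # 4.2 per Myr, i.e. one every ~0.24 Myr

# ...while a pass within ~1 AU (Earth's orbital radius) is of order one per
# 10^16 years — still far more common than the quoted 1-in-10^28-years rate
# for an actual collision with the Sun.
rate_1au = encounters_per_myr(1.0 / AU_PER_PC)

print(f"within 1 pc: {rate_1pc} per Myr; within 1 AU: {rate_1au:.2e} per Myr")
```

The quadratic dependence on D is why encounters at planetary-system scales are so much rarer than parsec-scale flybys.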
References External links Stellar dynamics Stellar astronomy Impact events Concepts in stellar astronomy Articles containing video clips
Stellar collision
Physics,Astronomy
1,215
65,824,604
https://en.wikipedia.org/wiki/Paul%20J.%20Scheuer
Paul Josef Scheuer (born 25 May 1915 in Heilbronn; died 12 January 2003 in Hawaii) was a German/American chemist. Biography Born in 1915 in Heilbronn, Scheuer completed his school education in 1934 at the Realgymnasium Heilbronn. As a Jew, he was unable to take up studies in Germany because of the racial laws. He began training in a leather tannery. At his supervisor's arrangement, he switched in December 1935 to a tannery in Pécs in southern Hungary, which specialised in fine leather, and later worked in Simontornya. There the technical manager, a doctor of chemistry, taught him the chemical background of leather production. He was "fascinated with chemistry as an intellectual challenge" and decided to become a chemist. In 1937, he visited Germany for his mother's funeral, his next-to-last visit to the country. Until autumn 1938 he spent time in tanneries in Yugoslavia and England. As the threat of war in Europe increased, he emigrated to the United States in 1938, working first as a packer of leather and later as a foreman in a tannery in Ayer, Massachusetts. In autumn 1939 he enrolled as an evening student at Northeastern University in Boston. A year later he moved to Boston and studied full-time at the College of Liberal Arts, where he received a B.S. in 1943. He then moved to Harvard University and chose Robert B. Woodward as his supervisor. He worked on addition reactions to bind ketene to alpha-vinylpyridine. For two years and four months he was contracted to the Chemical Warfare Service, which was responsible for chemical weapons in the U.S. Army. In January 1945, he was transferred to Fort Ritchie, Maryland, and trained in military intelligence. A few days before the end of the war, he flew to Paris and travelled on to Bavaria. With the exception of the Nuremberg Trials, he described his fourteen months as a special agent in Germany as "uneventful". He resumed his studies in September 1946, financed by the G. I. Bill.
Among his instructors were Gilbert Stork and Morris Kupchan. Scheuer received his Ph.D. in organic chemistry in 1950. In July 1950 he was appointed assistant professor at the University of Hawaii, and decided to set off for a "nebulous future" on the island with his fiancée Alice Dash. They married at Harvard on September 5 and travelled from San Francisco to Hawaii on the passenger ship SS Lurline. He remained at the University of Hawaii until his retirement in 1985. Paul Scheuer had four children. He died in Hawaii at the age of 87 of leukemia. Career At the University of Hawaii, Scheuer came into contact with researchers from botany, marine biology, and agricultural science. He recognised that Hawaii, with its largely unexplored endemic flora, offered good opportunities for research into biodiversity and natural products. For example, he did research on the kava plant with Rudolf Hänsel from the Free University of Berlin, but soon turned his attention to the chemical ecology of marine ecosystems. For 20 years, his institute conducted research on ciguatoxins, the structure of which his former post-doctoral researcher Takeshi Yasumoto was able to unlock in 1989. Later, Scheuer participated in the "War on Cancer" proclaimed by U.S. President Richard Nixon and developed drugs based on substances he had extracted from Elysia rufescens, a sea slug. He contributed to nearly 300 scientific articles and reviews. The field of molecular and chemical biotechnology, which he co-founded, has developed into an important branch of organic chemistry. Awards and honours His former students initiated the Paul J. Scheuer award in Marine Natural Products in 1992. He was the first recipient. In 1994, he received the Ernest Guenther Award of the American Chemical Society and the Norman R. Farnsworth Research Achievement Award of the American Society of Pharmacognosy. Since 2004, the awards the for marine biotechnology and materials research. Books Paul J. 
Scheuer: Chemistry of Marine Natural Products. Academic Press, New York 1973, . Paul J. Scheuer (ed.): Marine Natural Products: Chemical and Biological Perspectives. 5 volumes, Academic Press 1978–1983. Volume I, 1978, , . Volume II, 1978, , . Volume III, 1980, , . Volume IV, 1981, , . Volume V, 1983, , . Paul J. Scheuer (ed.): Bioorganic Marine Chemistry. Six volumes, , Springer, Berlin/Heidelberg 1987–1992. Paul J. Scheuer (ed.): Marine Natural Products — Diversity and Biosynthesis (Topics in Current Chemistry 167), Springer, Berlin/Heidelberg 1993, , . Literature István Hargittai: Paul J. Scheuer. In: Candid Science: Conversations with Famous Chemists, 2000, p. 93–113, , . P. Zurer: Paul Scheuer’s life, work celebrated. In: Chemical & Engineering News 79(4), p. 70, 22 January 2001, . Festschrift Issue of Tetrahedron in Honor of Paul Josef Scheuer, Professor Emeritus of Chemistry, The University of Hawaii at Manoa. In: Tetrahedron 56, 2000, p. vii–ix, . References 20th-century German chemists 20th-century American chemists 1915 births 2003 deaths University of Hawaiʻi at Mānoa faculty Northeastern University alumni Harvard University alumni Chemical ecologists Ritchie Boys Emigrants from Nazi Germany Immigrants to the United States
Paul J. Scheuer
Chemistry
1,149
635,253
https://en.wikipedia.org/wiki/Ricky%20Ricotta%27s%20Mighty%20Robot
Ricky Ricotta's Mighty Robot is a series of children's graphic novels written by Dav Pilkey (best known for his Captain Underpants books); the first seven books were originally illustrated by Martin Ontiveros, and all nine were later illustrated by Dan Santat. In each book, Ricky Ricotta, a mouse, with the help of his mighty robot, saves the world from an evil villain. Each book also features an alien animal villain from a different planet, in order from closest to the Sun to farthest (the villain of the first book is from Earth). The villains can be seen jailed at the end of each book, and familiar villains from previous books reappear later in the series. Publication history The first three books were initially published as Ricky Ricotta's Giant Robot. Dav Pilkey stopped using Giant in 2002 after many young fans pointed out that the Robot is not a giant. He's just 12 times taller than a mouse, so he's only about tall. Many young readers suggested the adjective "Mighty" instead, so after careful consideration, Pilkey changed the title of all the books. Dan Santat replaced Martin Ontiveros as the illustrator in 2014; the first seven books were re-drawn by Santat, who also illustrated the remaining two books that had not been published with Ontiveros as the illustrator. Books 1 through 5 with Santat's illustrations were released in 2014, books 6 and 7 were released in 2015, and the new books 8 and 9 were released in 2016. The revival began in 2012, when a Ricky Ricotta fan in Canada begged Pilkey for more Ricky Ricotta books. Shortly after that, his editor and the publisher decided not only to create more Ricky Ricotta books but also to re-illustrate the previous books in modern colors and add never-before-seen mini-comics. Pilkey's editor asked whom he wanted for the re-illustrations, and Dan Santat was the only illustrator he had in mind. Fortunately for Pilkey, Santat agreed to re-illustrate all seven books, plus illustrate the two new, previously unpublished Ricky Ricotta's Mighty Robot adventures.
Graphic novels Ricky Ricotta's Mighty Robot Ricky Ricotta's Mighty Robot (2000) is by Dav Pilkey and illustrated by Martin Ontiveros (2000 version) and Dan Santat (2014 version), the first in the series. The book was initially called Ricky Ricotta's Giant Robot. However, it was changed because the robot is only about 12 times taller than a mouse, and therefore, that does not make him very tall. Ricky Ricotta is a lonely mouse who lives in the fictional city of Squeakyville. He always wishes he had a close companion to keep him company. Once Ricky starts school, he gets bullied because of his small size. Ricky was full of sadness because bullies picked on him. Ricky wishes that something big would happen. In a secret cave on a mountaintop, the crazed rat scientist, Dr. Stinky McNasty, creates a giant robot to destroy Earth. When the robot refuses to do so, Dr. Stinky zaps the robot as punishment. Luckily, though, Ricky happens to be watching at the time. He kicks a ball at the evil scientist, and the ray gun is smashed, forcing Dr. Stinky to retreat. When the robot sees what Ricky has done, he befriends him. Ricky shows his parents, teachers, and schoolmates how helpful the robot can be, but since the robot can't fit in the house, Ricky resorts to stashing him in the garage. Meanwhile, Dr. Stinky is plotting revenge. He brings a potion of hatred and feeds it to the classroom lizard. (At that time, Ricky's teacher and classmates were on a field trip riding the robot.) The lizard becomes Dr. Stinky's new, more obedient servant and attacks the robot when he returns. After Ricky's robot bests the monster in battle, the lizard returns to normal. Enraged at what has just happened, Dr. Stinky decides to destroy the robot himself. He pulls out a rocket launcher, but Ricky jumps at Dr. Stinky just as he pulls the trigger. The rocket ends up landing on Dr. Stinky's secret cave, destroying it. Dr. Stinky then bursts into tears, saying it has been a very bad day for him. 
Together, Ricky and his robot imprison the mad scientist in the city jail. During the cookout, Ricky talks to his parents about the adventures. He, therefore, hopes his Mighty Robot will be by his side when he says the line, That's what friends are for. Ricky Ricotta's Mighty Robot vs. the Mutant Mosquitoes from Mercury Ricky Ricotta's Mighty Robot vs. the Mutant Mosquitoes from Mercury (2000) is by Dav Pilkey and illustrated by Martin Ontiveros (2000 version) and Dan Santat (2014 version), the second in the series. The story opens with a demonstration of how Ricky's robot helps him out. Ricky looks through the robot's telescopic eye, sees Mercury, and thinks it's cool. But Mr. Mosquito does not feel the same since he lives on Mercury. Mr. Mosquito hates the long scorching days and the long freezing nights, so he tries to find a place with more favorable conditions. He decides on Earth and creates "Mutant Mosquitoes," which Mr. Mosquito uses to take the world over. Meanwhile, Ricky spots the Mutant Mosquitoes, and he asks his teacher if he can go, but Ricky is refused because he has to finish his math test first. Thanks to his robot, Ricky solves the final problems in an instant. The Mighty Robot defeats the mosquitoes with bug spray, sending them back to Mercury. Angry, Mr. Mosquito drags Ricky into his ship and turns into a robotic Mecha-Mosquito, which battles Ricky's robot. However, Ricky's Robot refuses to fight back because he is worried Ricky will get harmed if he does. Ricky distracts Mr. Mosquito by requesting to use the restroom and climbing out the window. With Ricky now out of danger, the robot battles the Mecha-Mosquito and defeats the menace. Mr. Mosquito laments having a bad day and is thrown in jail afterward. In the end, they go home with milk and grilled cheese. Ricky, therefore, hopes his robot will be by his side. Ricky Ricotta's Mighty Robot vs. the Voodoo/Video Vultures from Venus Ricky Ricotta's Mighty Robot vs. 
the Voodoo Vultures from Venus (2001) is by Dav Pilkey and illustrated by Martin Ontiveros (2001 version) and Dan Santat (2014 version), the third in the series. The title was later updated, renaming the Voodoo Vultures to "Video Vultures," likely because the Vultures do not use voodoo in the book. The story starts when Ricky Ricotta and his Mighty Robot are late for supper because they were in Hawaii collecting seashells. Ricky's mom says that Ricky and his robot have been late for supper three times that week and tells them that they cannot watch TV until they both learn the value of responsibility. Since the Mighty Robot is unfamiliar with responsibility, Ricky explains that it means doing the right thing at the right time. Of course, while the duo is good at the right things, they have trouble with doing them at the right time. Ricky and his robot sleep under the stars while every other mouse in Squeakyville watches TV. Meanwhile, on Venus, the Voodoo/Video Vultures are frustrated with the unbearably hot atmosphere of their planet, which reduces their food to a melted mess. Hoping to eat better food, they fire a transmission beam to all of the TVs in Squeakyville. All TVs broadcast a signal that hypnotizes all the mice watching TV. Ricky and his Mighty Robot realize the following day that the mice are hypnotized and have been ordered to bring food to the army of Voodoo/Video Vultures. Ricky comes up with a plan to stuff chocolate chip cookies with hot chili peppers. Ricky then pretends to be hypnotized and takes the cookies to the vultures. The Voodoo/Video Vultures fall for the trick and dance around in pain from the spicy taste. This gives the Mighty Robot some time to break the remote control. The Squeakyville mice are now free from hypnosis, and they all flee back to their houses. Ricky's Mighty Robot then grabs the tyrannical leader of the vultures, Victor Von Vulture, and then battles against the Voodoo/Video Vultures.
In the end, the Mighty Robot wins the fight. The Voodoo/Video Vultures then fly back to Venus, except for Victor Von Vulture. The Mighty Robot and Ricky then put him in jail, and Ricky arrogantly brags at Victor Von Vulture, saying that he needs to learn responsibility. Ricky and his Mighty Robot then come home just in time for supper as Ricky's parents are very proud of them for learning their responsibility. The story ends with Ricky and his Mighty Robot watching Rocket Rodent on TV, complete with TV dinners. Ricky Ricotta's Mighty Robot vs. the Mecha Monkeys from Mars Ricky Ricotta's Mighty Robot vs. the Mecha-Monkeys from Mars (2002) is by Dav Pilkey and illustrated by Martin Ontiveros (2002 version) and Dan Santat (2014 version), the fourth in the series. Ricky and his robot grow tired of playing hide-and-seek, so they decide to skateboard instead. However, there is no skateboard big enough for the Mighty Robot to ride on, so Ricky resorts to using his parents' minivan. After Ricky and his robot accidentally destroy the minivan, Ricky's parents scold them for what they've just done. On Mars, Major Monkey still feels lonely, even after building many Robo-Chimps for company. So he decides to enslave the mice on Earth, but not before disposing of the Mighty Robot first. Ricky and his robot are trying to figure out how to deal with the problem about the minivan when a space mouse floats down to Earth, telling the robot about trouble on Mars. Ricky's robot follows the ship, but it is a trap. (The "space mouse" is revealed to be Major Monkey.) With the robot captured, Major Monkey and his army of Mecha-Monkeys launch an all-out assault on Squeakyville. Ricky feels terrible about what happened to his friend and wonders what to do about it. A general from SASA arrives to save his robot from Mars. When Ricky approaches the lab, it turns out to be an Orangu-Tron. Ricky infiltrates the lab and frees his robot from the mech's clutches, then sets it to self-destruct. 
The Mighty Robot takes the shuttle back to Earth and fights the rest of the machines; Major Monkey is peeved because the monkeys are monkeying around. When the Mecha-Monkeys are defeated, they return to Mars, and Major Monkey realizes that he has made a big mistake. As payment for his mistake, Ricky and his robot put Major Monkey in the Squeakyville Jail. When the SASA general asks how they can repay Ricky and his robot, Ricky requests a new minivan. Ricky's parents race the heroes home in their new rocket-powered minivan, and everyone has cheese pizza and root beer for dinner. Ricky Ricotta's Mighty Robot vs. the Jurassic Jackrabbits from Jupiter Ricky Ricotta's Mighty Robot vs. the Jurassic Jackrabbits from Jupiter (2002) is a children's novel by Dav Pilkey and illustrated by Martin Ontiveros (2002 version) and Dan Santat (2014 version), the fifth in the series. It is Ricky Ricotta's birthday, and his parents plan to take him to the dinosaur museum with his annoying cousin, Lucy ("Oh, NO! Not Lucy! She is a little pest!"). Meanwhile, on Jupiter, General Jackrabbit plans to take over Earth. He goes to Earth in his rocket ship, which lands on the dinosaur museum. Lucy, Ricky, and his parents notice that the skulls on the dinosaur skeletons are gone; General Jackrabbit has stolen them. Ricky and his Mighty Robot then go to the rooftop of the museum. They find the spaceship there. General Jackrabbit then gets some hairs from his tail and puts them in the DNA machine with the dinosaur skulls (the DNA strands were incomplete), and then the machine works. Three eggs pop out of the machine. They then hatch (into a Rabbidactyl, Trihareatops, and a Bunnysaurus Rex). Then, General Jackrabbit makes them grow bigger with his "Meany Machiney". The Mighty Robot confronts the Jackrabbits, but General Jackrabbit blasts the Jackrabbits with the Meany Machiney again. The Jackrabbits defeat the Mighty Robot.
Ricky wants to save his robot, so he rings the doorbell on General Jackrabbit's rocket ship. Two Robo-Rabbits open the door, replying that they will only accept Jackrabbits. Ricky gets help from Lucy when she disguises herself as a Jackrabbit. The Robo-Rabbits fall in love with Lucy. While the Robo-Rabbits bake food for her, Ricky sneaks upstairs and finds General Jackrabbit with the Meany Machiney. General Jackrabbit smells carrot cake, so he goes downstairs to see what the Robo-Rabbits are up to. Ricky then switches the complex controls from "Big, Ugly 'N' Evil" to "Little, Cute 'N' Sweet" and then zaps the Jurassic Jackrabbits. When the General catches Ricky messing around with his machine, Lucy hits General Jackrabbit in the face with a pie. Then, the three rabbits are put into their rocket ship and thrown back to their home planet by the Mighty Robot. Ricky and the Mighty Robot return the dinosaur skulls and bust General Jackrabbit. Lucy finds the little Jurassic Jackrabbits cute and keeps them as pets. Then it is time for Ricky's birthday party: everyone eats pizza, Ricky blows out his candles, and everyone eats cake. Ricky Ricotta's Mighty Robot vs. the Stupid Stinkbugs from Saturn Ricky Ricotta's Mighty Robot vs. the Stupid Stinkbugs from Saturn (2003) is a children's novel by Dav Pilkey and illustrated by Martin Ontiveros (2003 version) and Dan Santat (2015 version), the sixth in the series. When they play cops and robbers in their yard, Ricky wants to be the robber because he is good at hiding, and his robot wants to be the cop because he is good at finding things. Ricky tells his robot that using x-ray vision is cheating. Ricky and his robot then have to visit Ricky's Uncle Freddie, Aunt Ethel, and his annoying cousin Lucy, who has adopted the Jurassic Jackrabbits from the previous book and named them "Fudgie", "Cupcake", and "Waffles". Ricky gets irritated because Lucy always wants to play Princess.
(Lucy will be the beautiful princess, Ricky will be the ugly prince, Cupcake, Fudgie, and Waffles will be the royal ponies, and the Mighty Robot will be the brave knight.) Meanwhile, on the dark, smelly, polluted world of Saturn, Sergeant Stinkbug grows tired of the planet and decides to find another world on which to dump junk. Sergeant Stinkbug decides to kidnap Earth's leader and spies on the planet. When he sees Lucy, Sergeant Stinkbug swoops down to Earth and kidnaps her while Ricky, his robot, and the three Jurassic Jackrabbits are snacking away. While Sergeant Stinkbug tries to interrogate Lucy, her Jurassic Jackrabbits come to the rescue. However, Sergeant Stinkbug summons a few of his greatest stinkbug warriors and gives them "Grow-Big Gumballs", causing them to grow. The Mighty Robot battles well against them at first, but Sergeant Stinkbug gives them more Gumballs, making them bigger than they were before. Fudgie and Cupcake manage to swipe the Gumballs from under the stinkbugs' noses, and Ricky chomps them down. Once he is as big as the stinkbugs, Ricky battles them and wins, but he soon realizes that he still needs to get rid of the foes. Ricky then sees that Lucy has eaten the rest of the Gumballs (as she enjoys eating them) and has grown far bigger than anyone else. She grabs the rest of the stinkbugs, stuffs them back inside the spaceship, and hurls them back to Saturn. Fudgie and Cupcake find "Super-Shrinking Saltwater Taffy" and feed it to the two cousins. Once they are back to normal, they throw Sergeant Stinkbug into Squeakyville Jail. When Ricky and Lucy get back and tell everyone what happened, the adults think it is all just part of a game they were playing. Ricky's father is happy that Ricky played nicely with Lucy, though. The book finishes with the heroes celebrating their victory. Ricky Ricotta's Mighty Robot vs. the Uranium Unicorns from Uranus Ricky Ricotta's Mighty Robot vs.
the Uranium Unicorns from Uranus (2005) is a children's novel written by Dav Pilkey and illustrated by Martin Ontiveros (2005 version) and Dan Santat (2015 version), the seventh in the series. The book starts with Ricky trying to play with his Mighty Robot, but the robot is unable to do certain things due to his massive size, prompting the refrain "This is fun!" every time the robot succeeds and "This is NOT fun" every time they fail. Ricky wishes that his Mighty Robot had someone his own size to play with; however, Ricky's wish plays right into a plan of the evil Uncle Unicorn, whose planet is now polluted, as previous villains now dump their trash there. Uncle Unicorn creates a Ladybot, certain that he will be the first one to succeed at conquering Earth. Seemingly, Uncle Unicorn does succeed: Ricky's robot, hypnotized, falls in love with the Ladybot and gets abducted, unbeknownst to Ricky, who now regrets his wish. Ricky's parents offer to tuck him in that night, but Ricky declines and goes to bed with a sad heart. Finally, he decides to find his robot and does so in the woods, where his robot is chained to a rocket ship and guarded by Uncle Unicorn's Uranium Unicorns, who are under the command of the Ladybot alongside the evil Uncle Unicorn. Ricky, shocked, stops by his cousin Lucy's house to borrow her Jurassic Jackrabbit, Waffles, along with Lucy herself. However, upon entering the Ladybot, they are captured by Uncle Unicorn and tied up, hanging above the Generator. Ricky discovers that the Generator is sensitive to water and tries to sweat more and spit. He asks Lucy to spit for him, but she refuses because she thinks princesses do not spit. Ricky thinks up a plan and makes Lucy burst into tears by saying that everything she loves will be destroyed (ice cream, chocolate chip cookies, cotton candy, coconut cream pie, vanilla wafers, and grape lollipops). The robot breaks free of the spell and battles the Unicorns.
However, Uncle Unicorn starts the backup generator and makes the Ladybot grow into a bigger, deadlier machine ... who trips almost immediately after activating. The heroes discover the reason why: her shoelaces were tied together by none other than Fudgie and Cupcake, the other two Jurassic Jackrabbits. Ricky's robot throws the Ladybot and the Uranium Unicorns into space and puts Uncle Unicorn in prison, which is getting quite crowded. The book ends with Ricky brushing the robot's teeth, putting on the robot's pajamas, and tucking him into bed. Ricky's parents are glad that the two are having fun again, and Ricky reads a bedtime story to his robot. Ricky Ricotta's Mighty Robot vs. the Naughty Nightcrawlers from Neptune Ricky Ricotta's Mighty Robot vs. the Naughty Nightcrawlers from Neptune (2016) is a children's novel by Dav Pilkey and illustrated by Dan Santat. The 8th book in the series, it was released in 2016 after spending over a decade in development hell. Before Ricky's parents go shopping, they tell him and the robot not to make any mess while they are gone. Ricky and his robot build a fort using debris from an abandoned building, but Farmer Feta, Ricky's next-door neighbor, is annoyed by all the commotion. When Ricky's parents return home and notice the stylish fort, Ricky's mother is most displeased and calls it a mess. Later, Ricky's aunt and uncle arrive with cousin Lucy and her Jurassic Jackrabbits. Lucy calls the fort a castle and says it should be painted pink, much to Ricky's dismay. Meanwhile, on the planet Neptune, Nimrod Nightcrawler is planning to take over Earth because Neptune's environment has been clouded by methane vapors from fossil-fuel digging, forcing the nightcrawlers to live underground. However, he knows that Ricky and the Mighty Robot stand in his way and wants to avoid the other villains' mistakes.
Nimrod launches a rocket with an inflatable teleporter (wormhole) to planet Earth, which crash-lands on Farmer Feta's property. Through the wormhole, Nimrod convinces the farmer that he can put a stop to Ricky and his robot's annoyances by digging tunnels under his property. The army of Naughty Nightcrawlers digs straight under Ricky's fort, causing the ground to become unstable. When the robot tries to save the fort, the ground gives way and the robot ends up falling into the pit, imprisoned by the rubble from the ruined fort. Farmer Feta owns up to his mistake, and the kids decide to enter the wormhole. Finding themselves in Neptune's caverns, Ricky and Lucy happen upon nightcrawler guards watching the destruction of Squeakyville. Unfortunately, their presence is given away, and the guards put the squeeze on their uninvited guests. The heroes tickle the nightcrawlers to release their grip and head back to Earth through the wormhole, but the nightcrawlers follow and trap their prey. At that moment, the Mighty Robot comes to the rescue, having been dug up by the drilling suit Lucy rode on the way back. The Nightcrawlers retreat through the wormhole, and the robot takes on Nimrod and his army, who are still wreaking havoc in the city. After the Naughty Nightcrawlers' defeat, Nimrod himself is taken to the Squeakyville Jail. There is still the issue of the big mess in the backyard, but Ricky remedies it by using the wormhole: everyone dumps all the scrap into the wormhole, and it winds up in the nightcrawler tunnels on Neptune, crushing their technology in the process. With the wormhole powered down, a deep pit remains in the yard, but Lucy turns it into an artificial pond. The story concludes with Ricky, the robot (using the deactivated wormhole as his inflatable tube), Lucy, and her pets swimming around in the backyard pond. Ricky Ricotta's Mighty Robot vs. the Unpleasant Penguins from Pluto Ricky Ricotta's Mighty Robot vs.
the Unpleasant Penguins from Pluto (2016) is a children's novel by Dav Pilkey and illustrated by Dan Santat. The 9th book in the series, it was released in 2016 after spending over a decade in development hell. Ricky and his robot are having fun in their backyard pond until cousin Lucy shows up. She pours pink bubble bath into the pond, thinking it will be glamorous. However, Ricky disagrees and calls out Lucy while the adults are playing cards. Ricky's parents scold him for being mean, but Lucy starts crying and leaves before Ricky can get the chance to apologize. On the world of Pluto, President Penguin is feeling dishonored that his home world is no longer considered a planet. Determined to teach Earth to show respect, the President sets off to Earth to make everyone pay for the insult. Back on Earth, Ricky decides to show Lucy how sorry he really is (he has to explain the meaning of "making amends", since the robot is unfamiliar with the phrase). The robot flies Ricky to Hawaii to gather beautiful wildflowers, and when the two return, they spend all night planting them on a mountain in Squeakyville for Lucy to see. At daybreak, President Penguin's spaceship lands on the mountain. Ricky calls Lucy and tells her about the surprise he has for her. When Lucy spies the castle-like spaceship on the mountain with her name planted in flowers (the surprise is the flowers spelling out "HOORAY FOR LUCY"; President Penguin has landed on the word "HOORAY"), Lucy and her Jurassic Jackrabbits race to the spaceship, unaware that the ship is not actually part of the surprise. When Lucy and her pets approach the President's head guards, Clancy and Nigel, the penguins treat Lucy as a real princess and bake treats for her in the "castle". Meanwhile, President Penguin has launched an invasion on the city of Squeakyville. Ricky and his robot catch sight of the Penguin-Mobiles and try to halt the assault, but the machines encase the robot's feet in ice with their freezer beams.
Lucy and her pets hear the commotion, and when they see what has happened, Clancy and Nigel explain the reason behind the President's actions. Knowing that she must help her cousin, Lucy tries to intervene, but President Penguin subdues them as well. At that moment, two missiles out of nowhere clip the two Penguin-Mobiles guarding the Mighty Robot. It turns out that Clancy and Nigel are intervening as well ("We are making amends"), and they destroy the President's Penguin-Mobile. The explosion sends everyone flying, and after the Mighty Robot saves Ricky in the nick of time, President Penguin is put into the cooler. With the Squeakyville Jail now full, an expansion is under construction. Lucy then declares Clancy and Nigel Pluto's new presidents. Having made amends, Clancy and Nigel take the captain and the first mate and fly back to Pluto. At home, Ricky and the robot welcome Lucy and her pets to the pond, having fun together. Ricky's parents thank their son for apologizing and making amends to Lucy. The book ends with everyone having a pond party. Activity book Ricky Ricotta's Mighty Robot Astro-Activity Book o' Fun (2006) is an activity book by Dav Pilkey and illustrated by Martin Ontiveros. It contains puzzles, true-or-false questions, and even a first peek at the Naughty Nightcrawlers from Neptune and the Unpleasant Penguins from Pluto (though these two installments were ultimately not released with illustrations by Ontiveros).
Notes References External links Ricky Ricotta's Mighty Robot (Ricky Ricotta official site) Pilkey.com (Ricky Ricotta's Mighty Robot on Dav Pilkey's official site) Classroom guide (Classroom guide to Ricky Ricotta) Children's science fiction novels Science fiction book series Fictional mice and rats Fictional robots Works by Dav Pilkey Novels set on Mercury (planet) Novels set on Venus Novels set on Mars Fiction set on Jupiter Novels set on Jupiter Fiction set on Saturn Novels set on Uranus Novels set on Neptune Fiction set on Pluto Pluto's planethood American children's book series
Ricky Ricotta's Mighty Robot
Astronomy
5,798
3,164,917
https://en.wikipedia.org/wiki/Epsilon%20Capricorni
Epsilon Capricorni, Latinized from ε Capricorni, is a possible binary star system in the constellation Capricornus. It has the traditional star name Kastra, meaning "fort" or "military camp" in Latin. Based upon an annual parallax shift of 3.09 mas as seen from the Earth, the star is located about 1,060 light years from the Sun. It can be seen with the naked eye, having a combined apparent visual magnitude of 4.62. In Chinese, (), meaning Line of Ramparts, refers to an asterism consisting of ε Capricorni, κ Capricorni, γ Capricorni, δ Capricorni, ι Aquarii, σ Aquarii, λ Aquarii, φ Aquarii, 27 Piscium, 29 Piscium, 33 Piscium and 30 Piscium. Consequently, the Chinese name for ε Capricorni itself is (, .) The binary system has an orbital period of 129 days. The primary, component Aa, is a Be star surrounded by ionized gas that produces the emission lines in the spectrum. This circumstellar shell is inclined by 80° to the line of sight from the Earth. The system is undergoing both short-term and long-term variations in luminosity, with the short-period variations showing a phase cycle of 1.03 days. It is classified as a Gamma Cassiopeiae variable with an amplitude of 0.16 in magnitude. Epsilon Capricorni Aa is a blue-white hued B-type main-sequence star with a stellar classification of B2.5 Vpe and a visual magnitude of +4.62. It has 7.6 times the mass of the Sun and 4.8 times the Sun's radius. The star is spinning rapidly, with a projected rotational velocity of 225 km/s. This gives it an oblate shape, with an equatorial radius that is 7% larger than the polar radius. The system has two visual companions. Component B is a visual magnitude 10.11 star at an angular separation of 65.8 arc seconds along a position angle of 46°, as of 2013. Component C, with a visual magnitude of 14.1, lies at an angular separation of 62.7 arc seconds along a position angle of 164°, as of 1999.
Both stars are likely to be unrelated, lying at different distances from Epsilon Capricorni. References External links B-type main-sequence stars Gamma Cassiopeiae variable stars Binary stars Capricornus Castra Capricorni, Epsilon Durchmusterung objects Capricorni, 39 205637 105723 8260 de:Castra
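The ~1,060 light-year distance follows from the 3.09 mas parallax quoted in the article via d [pc] = 1000 / p [mas], with 1 parsec ≈ 3.2616 light years. A minimal sketch of that arithmetic (the function name is illustrative, not from any astronomy library):

```python
# Distance from annual parallax: d [parsecs] = 1000 / p [milliarcseconds],
# then convert parsecs to light years (1 pc ≈ 3.2616 ly).

PC_TO_LY = 3.2616  # light years per parsec

def parallax_to_light_years(parallax_mas: float) -> float:
    """Return the distance in light years for a parallax given in milliarcseconds."""
    distance_pc = 1000.0 / parallax_mas
    return distance_pc * PC_TO_LY

# Epsilon Capricorni's parallax of 3.09 mas (from the article text):
print(round(parallax_to_light_years(3.09)))  # ≈ 1056, consistent with the quoted ~1,060 ly
```

The small gap between 1,056 and the quoted 1,060 light years reflects rounding of the parallax to three significant figures.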
Epsilon Capricorni
Astronomy
563
4,048,014
https://en.wikipedia.org/wiki/Monoamine%20oxidase%20A
Monoamine oxidase A, also known as MAO-A, is an enzyme (E.C. 1.4.3.4) that in humans is encoded by the MAOA gene. This gene is one of two neighboring gene family members that encode mitochondrial enzymes which catalyze the oxidative deamination of amines, such as dopamine, norepinephrine, and serotonin. A mutation of this gene results in Brunner syndrome. This gene has also been associated with a variety of other psychiatric disorders, including antisocial behavior. Alternatively spliced transcript variants encoding multiple isoforms have been observed. Structures Gene Monoamine oxidase A, also known as MAO-A, is an enzyme that in humans is encoded by the MAOA gene. The promoter of MAOA contains conserved binding sites for Sp1, GATA2, and TBP. This gene is adjacent to a related gene (MAOB) on the opposite strand of the X chromosome. In humans, there is a 30-base repeat sequence repeated several different numbers of times in the promoter region of MAO-A. There are 2R (two repeats), 3R, 3.5R, 4R, and 5R variants of the repeat sequence, with the 3R and 4R variants most common in all populations. The variants of the promoter have been found to appear at different frequencies in different ethnic groups in an American sample cohort. The epigenetic modification of MAOA gene expression through methylation likely plays an important role in women. A study from 2010 found epigenetic methylation of MAOA in men to be very low, with little variability compared to women, while showing higher heritability in men than in women. Protein MAO-A shares 70% amino acid sequence identity with its homologue MAO-B. Accordingly, both proteins have similar structures. Both MAO-A and MAO-B exhibit an N-terminal domain that binds flavin adenine dinucleotide (FAD), a central domain that binds the amine substrate, and a C-terminal α-helix that is inserted in the outer mitochondrial membrane.
MAO-A has a slightly larger substrate-binding cavity than MAO-B, which may be the cause of slight differences in catalytic activity between the two enzymes, as shown in quantitative structure-activity relationship experiments. Both enzymes are relatively large, about 60 kilodaltons in size, and are believed to function as dimers in living cells. Function Monoamine oxidase A catalyzes the O2-dependent oxidation of primary arylalkyl amines, most importantly neurotransmitters such as dopamine and serotonin. This is the initial step in the breakdown of these molecules. The products are the corresponding aldehyde, hydrogen peroxide, and ammonia: R-Amine + O2 + H2O → R-Aldehyde + H2O2 + NH3 This reaction is believed to occur in three steps, using FAD as an electron-transferring cofactor. First, the amine is oxidized to the corresponding imine, with reduction of FAD to FADH2. Second, O2 accepts two electrons and two protons from FADH2, forming H2O2 and regenerating FAD. Third, the imine is hydrolyzed by water, forming ammonia and the aldehyde. Compared to MAO-B, MAO-A has a higher specificity for serotonin and norepinephrine, while the two enzymes have similar affinity for dopamine and tyramine. MAO-A is a key regulator of normal brain function. In the brain, the highest levels of transcription occur in the brain stem, hypothalamus, amygdala, habenula, and nucleus accumbens, and the lowest in the thalamus, spinal cord, pituitary gland, and cerebellum. Its expression is regulated by the transcription factors SP1, GATA2, and TBP via cAMP-dependent regulation. MAO-A is also expressed in cardiomyocytes, where it is induced in response to stresses such as ischemia and inflammation. Clinical significance Cancer MAO-A is an amine oxidase, a class of enzyme known to affect carcinogenesis. Clorgyline, an MAO-A enzyme inhibitor, prevents apoptosis in melanoma cells in vitro.
Cholangiocarcinoma suppresses MAO-A expression, and those patients with higher MAO-A expression had less adjacent-organ invasion and better prognosis and survival. Cardiovascular disease MAO-A activity is linked to apoptosis and cardiac damage during cardiac injury following ischemia-reperfusion. Behavioral and neurological disorders There is some association between low-activity forms of the MAOA gene and autism. Mutations in the MAOA gene result in monoamine oxidase deficiency, or Brunner syndrome. Other disorders associated with MAO-A include Alzheimer's disease, aggression, panic disorder, bipolar disorder, major depressive disorder, and attention deficit hyperactivity disorder. Effects of parenting on self-regulation in adolescents appear to be moderated by 'plasticity alleles', of which the 2R and 3R alleles of MAOA are two, with "the more plasticity alleles males (but not females) carried, the more and less self-regulation they manifested under, respectively, supportive and unsupportive parenting conditions." Depression MAO-A levels in the brain, as measured using positron emission tomography, are elevated by an average of 34% in patients with major depressive disorder. Genetic association studies examining the relationship between high-activity MAOA variants and depression have produced mixed results, with some studies linking the high-activity variants to major depression in females, depressed suicide in males, major depression and sleep disturbance in males, and major depressive disorder in both males and females. Other studies have failed to find a significant relationship between high-activity variants of the MAOA gene and major depressive disorder. In patients with major depressive disorder, those with MAOA G/T polymorphisms (rs6323) coding for the highest-activity form of the enzyme have a significantly lower magnitude of placebo response than those with other genotypes.
Antisocial behavior In humans, an association has been found between the 2R allele of the VNTR region of the gene and an increased likelihood of committing serious crime or violence. The VNTR 2R allele of MAOA has been found to be a risk factor for violent delinquency when present in association with stressors such as family issues, low popularity, or failing school. A connection between the MAO-A gene's 3R version and several types of antisocial behaviour has been found: maltreated children with genes causing high levels of MAO-A were less likely to develop antisocial behavior, while low-activity alleles (overwhelmingly the 3R allele) in combination with abuse experienced during childhood resulted in an increased risk of aggressive behaviour as an adult, and men with the low-activity MAOA allele were more genetically vulnerable even to punitive discipline as a predictor of antisocial behaviour. High testosterone, maternal tobacco smoking during pregnancy, poor material living standards, dropping out of school, and low IQ predicted violent behavior in men with the low-activity alleles. According to a large meta-analysis in 2014, the 3R allele had a small, nonsignificant effect on aggression and antisocial behavior in the absence of other interaction factors. Owing to methodological concerns, the authors do not view this as evidence in favor of an effect. The MAO-A gene was the first candidate gene for antisocial behavior and was identified during a "molecular genetic analysis of a large, multigenerational, and notoriously violent, Dutch kindred". A study of Finnish prisoners revealed that a MAOA-L (low-activity) genotype, which contributes to a low dopamine turnover rate, was associated with extremely violent behavior. For the purpose of the study, "extremely violent behavior" was defined as at least ten committed homicides, attempted homicides or batteries.
However, a large genome-wide association study has failed to find any large or statistically significant effects of the MAOA gene on aggression. A separate GWAS on antisocial personality disorder likewise did not report a significant effect of MAOA. Another study, while finding effects from a candidate gene search, failed to find any evidence in a large GWAS. A separate analysis of human and rat genome-wide association studies, Mendelian randomization studies, and causal pathway analyses likewise failed to reveal robust evidence of MAOA's involvement in aggression. This lack of replication is predicted from the known issues of candidate gene research, which can produce many substantial false positives. Aggression and the "Warrior gene" Low-activity variants of the VNTR promoter region of the MAO-A gene have been referred to as the warrior gene. When faced with social exclusion or ostracism, individuals with the low-activity MAO-A variants showed higher levels of aggression than individuals with the high-activity MAO-A gene. Low-activity MAO-A could significantly predict aggressive behaviour in a high-provocation situation: individuals with the low-activity variant of the MAO-A gene were more likely (75% as opposed to 62%, out of a sample size of 70) to retaliate, and with greater force, compared to those with a normal MAO-A variant if the perceived loss was large. The effects of MAOA genes on aggression have also been criticized as heavily overstated. Indeed, the MAOA gene, even in conjunction with childhood adversity, is known to have a very small effect. The vast majority of people with the associated alleles have not committed any violent acts. Legal implications In a 2009 criminal trial in the United States, an argument based on a combination of the "warrior gene" and a history of child abuse was successfully used to avoid a conviction of first-degree murder and the death penalty; however, the convicted murderer was sentenced to 32 years in prison.
In a second case, an individual was convicted of second-degree murder, rather than first-degree murder, based on a genetic test that revealed he had the low-activity MAOA variant. Judges in Germany are more likely to sentence offenders to involuntary psychiatric hospitalization on hearing an accused's MAOA-L genotype. Epigenetics Studies have linked methylation of the MAOA gene with nicotine and alcohol dependence in women. A second MAOA VNTR promoter, P2, influences epigenetic methylation and interacts with having experienced child abuse to influence antisocial personality disorder symptoms, but only in women. A study of 34 non-smoking men found that methylation of the gene may alter its expression in the brain. Animal studies A dysfunctional MAOA gene has been correlated with increased aggression levels in both mice and humans. In mice, a dysfunctional MAOA gene has been created through insertional mutagenesis (the 'Tg8' strain). Tg8 is a transgenic mouse strain that lacks functional MAO-A enzymatic activity. Mice that lacked a functional MAOA gene exhibited increased aggression towards intruder mice, including territorial aggression, predatory aggression, and isolation-induced aggression. The increased isolation-induced aggression exhibited by MAO-A-deficient mice suggests that an MAO-A deficiency may also contribute to a disruption in social interactions. There is research in both humans and mice to support that a nonsense point mutation in the eighth exon of the MAOA gene is responsible for impulsive aggressiveness due to a complete MAO-A deficiency. Interactions Transcription factors A number of transcription factors bind to the promoter region of MAO-A and upregulate its expression. These include the Sp1 transcription factor, GATA2, and TBP.
Inducers Synthetic compounds that up-regulate the expression of MAO-A include valproic acid (Depakote). Inhibitors Substances that inhibit the enzymatic activity of MAO-A include: Synthetic compounds Befloxatone (MD370503) Brofaromine (Consonar) Cimoxatone Clorgyline (irreversible) Methylene Blue Minaprine (Cantor) Moclobemide (Aurorix, Manerix) Phenelzine (Nardil) Pirlindole (Pirazidol) Toloxatone (Humoryl) Tyrima (CX 157) Tranylcypromine (nonselective and irreversible) Natural products (herbal sources) Incarviatone A (Incarvillea delavayi) Garlic β-Carboline alkaloids (Syrian Rue, Passion Flower, Tobacco smoke, Ayahuasca) Harmine Harmaline Isoquinoline alkaloids Piperine (Black pepper) Rosiridin (in vitro) See also Monoamine oxidase B Monoamine oxidase inhibitor - a class of antidepressant drugs that block or inactivate one or both MAO isoforms References Further reading External links Aggression Criminology EC 1.4.3 Human proteins de:Warrior Gene
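The oxidative deamination described in the Function section converts a primary amine into the corresponding aldehyde, hydrogen peroxide, and ammonia (overall: R-CH2-NH2 + O2 + H2O → R-CHO + H2O2 + NH3). As a quick sanity check that this overall reaction is mass-balanced, the atoms outside the generic R group can be tallied on each side; a small sketch (variable names are illustrative):

```python
from collections import Counter

# Non-R atom counts for each species in the overall MAO-A reaction:
#   R-CH2-NH2 + O2 + H2O  ->  R-CHO + H2O2 + NH3
amine    = Counter({"C": 1, "H": 4, "N": 1})  # the -CH2-NH2 portion
oxygen   = Counter({"O": 2})
water    = Counter({"H": 2, "O": 1})
aldehyde = Counter({"C": 1, "H": 1, "O": 1})  # the -CHO portion
peroxide = Counter({"H": 2, "O": 2})
ammonia  = Counter({"N": 1, "H": 3})

reactants = amine + oxygen + water
products  = aldehyde + peroxide + ammonia
print(reactants == products)  # True: every element is conserved
```

Counting only the non-R atoms works because the R group passes through the reaction unchanged.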
Monoamine oxidase A
Biology
2,736
237,788
https://en.wikipedia.org/wiki/Clayoquot%20Sound
Clayoquot Sound is located on the west coast of Vancouver Island in the Canadian province of British Columbia. It is bordered by the Esowista Peninsula to the south and the Hesquiaht Peninsula to the north. It is a body of water with many inlets and islands. Major inlets include Sydney Inlet, Shelter Inlet, Herbert Inlet, Bedwell Inlet, Lemmens Inlet, and Tofino Inlet. Major islands include Flores Island, Vargas Island, and Meares Island. The name is also used for the larger region of land around the waterbody (essentially its watershed). Origin of the name The name Clayoquot is derived from the name of a subgroup of the Nuu-chah-nulth, who lived at Clayoqua. In the late 20th century, this group merged into the multi-group band government known as the , meaning "different" or "changing" in their language. History First Nations have inhabited the area for thousands of years. The oldest dated location within Nuu-chah-nulth territory, at Yuquot on Nootka Island, is 4,200 years old. Because post-glacial sea levels are known to have risen, overtaking earlier locations, most scholars date the beginnings of human habitation beyond 9,000 years before present (BP). In the late 18th century, Clayoquot Sound and its First Nations peoples were visited by ship by various Europeans and Americans who were involved mainly in the fur trade. In 1791, the complex inner waters were explored and mapped by José María Narváez and Juan Carrasco; their commander, Francisco de Eliza, met and befriended Wickaninnish, the chief of the peoples. These explorers recognized the region's wealth of natural resources. These resources attracted growing numbers of non-First Nations peoples, who limited First Nations access to land and generated resentment among the locals. Government support of private-company resource extraction allowed such industry to grow over time. Logging companies were active in harvesting timber in the Clayoquot Sound area as late as the 1980s and 1990s.
Logging protests In the late 20th century, First Nations became more active in trying to defend their rights and resources. They developed Native lobbying organizations and insisted on negotiations regarding governmental policies about such resources. In the late 1980s, the situation escalated when the government approved MacMillan Bloedel Corporation's permit to log Meares Island. The First Nations peoples expressed their opposition to MacMillan Bloedel's logging in Clayoquot Sound through several peaceful protests and blockades of logging roads from 1980 to 1994. In the summer of 1993, over 800 protestors were arrested, and many were tried for interfering with approved industry. Protestors included members of the local First Nation and Ahousaht First Nation bands, as well as NDP MP Svend Robinson and environmental groups such as Greenpeace and Friends of Clayoquot Sound. International mass media covered the protests and blockades, helping to create national support for environmental movements in British Columbia and foster strong advocacy for anti-logging campaigns. Media reported the perceived injustice of numerous individuals being arrested for joining peaceful protests and blockades. In some cases, law enforcement responded aggressively, which eventually helped strengthen public support for non-violent protests. After the 1990 protests, the provincial government made its first significant change in policy: it commissioned a scientific panel to examine issues related to Clayoquot Sound. In July 1995, the Forests Minister of British Columbia, Andrew Petter, and the Environment Minister, Elizabeth Cull, officially accepted the panel's 127 recommendations on behalf of the NDP government. Members of Greenpeace were reported to have played a significant role in these protests and instigated a boycott of BC forest products to apply pressure on the industry.
After the government accepted the scientific panel's recommendations, specifically deferring logging until an inventory of pristine areas was completed, Greenpeace lifted the boycott. After the inventory, the government reduced the Annual Allowable Cut, and clear-cuts in the area were limited to a maximum of four hectares. In addition, once biological and cultural inventories were completed, the government required Eco-Based Planning. Salmon farming and related controversy The sound's ecological features have made it a major site for the farming of salmon, a fish traditional to this area. Floating feedlots have been installed, consisting of giant fenced pens; roughly twenty such farms are in operation. A massive die-off of fish, possibly linked to an algal bloom caused by the intensive farming operation, occurred in 2019. The densely packed farms have the disadvantage of providing conditions that allow for the rapid spread of disease. A highly contagious virus variant found in Norwegian salmon farms has been found in Clayoquot Sound farmed salmon. Environmental advocacy organizations have stated that such events are evidence of the environmental damage associated with this type of fish farming. The British Columbia provincial government has closed other salmon farming sites on Vancouver Island. For instance, it is phasing out salmon farms by 2022 in the Discovery Islands on Vancouver Island's east side. Indigenous peoples and governments The members of three major First Nations band governments of the peoples inhabit Clayoquot Sound: the Hesquiaht in the north, the Ahousaht in the middle, and the in the south. The latter group is based in the village of Opitsaht on Meares Island. The village of Tofino lies opposite Opitsaht on the southern promontory of the entrance to the sound. In 1985, for the first time in British Columbia history, the courts froze resource development on Crown land because of a related Aboriginal title claim.
Chiefs of the Ahousaht and Tla-o-qui-aht First Nations obtained an injunction halting logging on Meares Island in Clayoquot Sound pending treaty negotiations with the provincial government. These negotiations resulted in the provincial government and the First Nations signing the Interim Measures Act (IMA) on March 19, 1994. (This followed protests in 1993 that gained international coverage on this issue, increasing the pressure.) Since the IMA was signed, the First Nations and government have negotiated to co-manage local land and resources, including economic development strategies. With the reduction in logging in this area, in the early 21st century, the communities surrounding Clayoquot Sound (Tofino, Ucluelet, and Ahousaht) have been developing new sources of income. They are emphasizing ecotourism and selective logging, based on co-management strategies. Ecology, parks and terrain The land around Clayoquot Sound includes vast coastal temperate rain forest, rivers, lakes, marine areas and beaches. It includes part of the Pacific Rim National Park Reserve and some of Strathcona Provincial Park. The total size of the Clayoquot Sound region, including both land and water, is . More than have been included as the subject of a multi-year study using Terrestrial Ecosystem Mapping (TEM) to identify areas prone to geologic and geomorphic hazards, in particular, landslides, soil erosion, and sedimentation. The study also aims to identify and characterize terrain conditions associated with these hazards. The region contains the largest area of intact (unlogged) temperate rainforest left on Vancouver Island. Clayoquot Sound is home to wolves, black bears, cougars, grey whales, orcas, porpoises, seals, sea lions, river otters, bald eagles, osprey, marbled murrelets, Pacific loons, Roosevelt elk, martens, and raccoons. In 2000, Clayoquot Sound Biosphere Reserve was designated as a Biosphere Reserve by UNESCO. 
The designation created world recognition of Clayoquot Sound's biological diversity, and a $12M monetary fund to "support research, education and training in the Biosphere region". At the end of July 2006, a new set of Watershed Plans was approved for Clayoquot Sound. This enabled logging in some of the forest, including pristine old-growth valleys. As of 2007, both logging tenures within Clayoquot Sound are controlled by aboriginal-owned logging companies: Iisaak Forest Resources controls Tree Farm Licence (TFL) 57, and MaMook Natural Resources Ltd, in conjunction with Coulson Forest Products, manages TFL 54. See also Kennedy Lake Cougar Annie Clayoquot Arm Provincial Park Clayoquot Plateau Provincial Park Great Bear Rainforest References External links Integrated Land Management Bureau, Government of British Columbia website Biosphere reserves of Canada Old-growth forests Sounds of British Columbia Clayoquot Sound region Nuu-chah-nulth
Clayoquot Sound
Biology
1,771
3,509,077
https://en.wikipedia.org/wiki/Ply%20%28game%20theory%29
In two-or-more-player sequential games, a ply is one turn taken by one of the players. The word is used to clarify what is meant when one might otherwise say "turn". The word "turn" can be a problem since it means different things in different traditions. For example, in standard chess terminology, one move consists of a turn by each player; therefore a ply in chess is a half-move. Thus, after 20 moves in a chess game, 40 plies have been completed—20 by white and 20 by black. In the game of Go, by contrast, a ply is the normal unit of counting moves; so for example to say that a game is 250 moves long is to imply 250 plies. In poker with n players the word "street" is used for a full betting round consisting of n plies; each dealt card may sometimes also be called a "street". For instance, in heads up Texas hold'em, a street consists of 2 plies, with possible plays being check/raise/call/fold: the first by the player at the big blind, and the second by the dealer, who posts the small blind; and there are 4 streets: preflop, flop, turn, river (the latter 3 corresponding to community cards). The terms "half-street" and "half-street game" are sometimes used to describe, respectively, a single bet in a heads up game, and a simplified heads up poker game where only a single player bets. The word "ply" used as a synonym for "layer" goes back to the 15th century. Arthur Samuel first used the term in its game-theoretic sense in his seminal paper on machine learning in checkers in 1959, but with a slightly different meaning: the "ply", in Samuel's terminology, is actually the depth of analysis ("Certain expressions were introduced which we will find useful. These are: Ply, defined as the number of moves ahead, where a ply of two consists of one proposed move by the machine and one anticipated reply by the opponent"). In computing, the concept of a ply is important because one ply corresponds to one level of the game tree. 
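The counting conventions above (a chess move is two plies, a poker street is one ply per player, and each ply is one level of the game tree) can be illustrated with a short Python sketch. This is not from the article; the function names are chosen here for clarity.

```python
# Illustrative sketch of ply-counting conventions described above.

def chess_moves_to_plies(full_moves: int) -> int:
    """One full chess move = a turn by White plus a turn by Black = 2 plies."""
    return 2 * full_moves

def poker_street_plies(num_players: int) -> int:
    """A poker street is one full betting round: one ply per player."""
    return num_players

def positions_at_depth(branching_factor: int, plies: int) -> int:
    """Each ply adds one level to the game tree; a uniform tree searched
    to this depth has branching_factor ** plies positions at the deepest level."""
    return branching_factor ** plies

print(chess_moves_to_plies(20))   # 20 full moves -> 40 plies
print(poker_street_plies(2))      # heads-up street -> 2 plies
print(positions_at_depth(3, 4))   # 3-wide tree, 4 plies -> 81 positions
```

The exponential growth shown by `positions_at_depth` is why search depth in game-playing programs is naturally measured in plies rather than full moves.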
The Deep Blue chess computer, which defeated Kasparov in 1997, typically searched to a depth of six to sixteen plies, and to a maximum of forty plies in some situations. See also Minimax algorithm References Further reading External links Chess terminology Game artificial intelligence Game theory
Ply (game theory)
Mathematics
516
945,521
https://en.wikipedia.org/wiki/Fatty%20liver%20disease
Fatty liver disease (FLD), also known as hepatic steatosis and steatotic liver disease (SLD), is a condition where excess fat builds up in the liver. Often there are no or few symptoms. Occasionally there may be tiredness or pain in the upper right side of the abdomen. Complications may include cirrhosis, liver cancer, and esophageal varices. The main subtypes of fatty liver disease are metabolic dysfunction–associated steatotic liver disease (MASLD, formerly "non-alcoholic fatty liver disease" (NAFLD)) and alcohol-associated liver disease (ALD), with the category "metabolic and alcohol associated liver disease" (metALD) describing an overlap of the two. The primary risks include alcohol, type 2 diabetes, and obesity. Other risk factors include certain medications such as glucocorticoids, and hepatitis C. It is unclear why some people with NAFLD develop simple fatty liver and others develop nonalcoholic steatohepatitis (NASH), which is associated with poorer outcomes. Diagnosis is based on the medical history supported by blood tests, medical imaging, and occasionally liver biopsy. Treatment of NAFLD is generally by dietary changes and exercise to bring about weight loss. In those who are severely affected, liver transplantation may be an option. More than 90% of heavy drinkers develop fatty liver while about 25% develop the more severe alcoholic hepatitis. NAFLD affects about 30% of people in Western countries and 10% of people in Asia. NAFLD affects about 10% of children in the United States. It occurs more often in older people and males. Classification Fatty liver disease was classified into: Non-alcoholic fatty liver disease (NAFLD) made up of: Non-alcoholic fatty liver (NAFL) or simple fatty liver Non-alcoholic steatohepatitis (NASH) Alcoholic liver disease (ALD). 
In 2023, a new nomenclature was chosen, with the classifications including: Metabolic dysfunction–associated steatotic liver disease (MASLD), including: Metabolic dysfunction-associated steatohepatitis (MASH) Metabolic and alcohol associated liver disease (metALD), describing those with MASLD who consume greater amounts of alcohol per week but not enough to be categorized as having ALD Alcohol-associated liver disease (ALD) Specific aetiology SLD (including drug-induced, monogenic diseases and others) Signs and symptoms Often there are no or few symptoms. Occasionally there may be tiredness or pain in the upper right side of the abdomen. Complications Fatty liver can develop into hepatic fibrosis, cirrhosis or liver cancer. For people affected by NAFLD, the 10-year survival rate is about 80%. The rate of progression of fibrosis is estimated to be one stage per 7 years in NASH and one stage per 14 years in NAFLD, with the rate increasing over time. There is a strong relationship between these pathologies and metabolic illnesses (diabetes type II, metabolic syndrome). These pathologies can also affect non-obese people, who are then at a higher risk. Less than 10% of people with cirrhotic alcoholic FLD will develop hepatocellular carcinoma, the most common type of primary liver cancer in adults, but up to 45% of people with NASH without cirrhosis can develop hepatocellular carcinoma. The condition is also associated with other diseases that influence fat metabolism. Causes Fatty liver (FL) is commonly associated with metabolic syndrome (diabetes, hypertension, obesity, and dyslipidemia), but can also be due to any one of many causes: Alcohol Alcohol use disorder is one of the causes of fatty liver due to production of toxic metabolites like aldehydes during metabolism of alcohol in the liver. This phenomenon most commonly occurs with chronic alcohol use disorder. 
Metabolic: abetalipoproteinemia, glycogen storage diseases, Weber–Christian disease, acute fatty liver of pregnancy, lipodystrophy Nutritional: obesity, malnutrition, total parenteral nutrition, severe weight loss, refeeding syndrome, jejunoileal bypass, gastric bypass, jejunal diverticulosis with bacterial overgrowth Drugs and toxins: amiodarone, methotrexate, diltiazem, expired tetracycline, highly active antiretroviral therapy, glucocorticoids, tamoxifen, environmental hepatotoxins (e.g., phosphorus, mushroom poisoning) Other: celiac disease, inflammatory bowel disease, HIV, hepatitis C (especially genotype 3), and alpha 1-antitrypsin deficiency Pathology The fatty change represents the intracytoplasmatic accumulation of triglycerides (neutral fats). At the beginning, the hepatocytes present small fat vacuoles (liposomes) around the nucleus (microvesicular fatty change). In this stage, liver cells are filled with multiple fat droplets that do not displace the centrally located nucleus. In the late stages, the size of the vacuoles increases, pushing the nucleus to the periphery of the cell, giving a characteristic signet ring appearance (macrovesicular fatty change). These vesicles are well-delineated and optically "empty" because fats dissolve during tissue processing. Large vacuoles may coalesce and produce fatty cysts, which are irreversible lesions. Macrovesicular steatosis is the most common form and is typically associated with alcohol, diabetes, obesity, and corticosteroids. Acute fatty liver of pregnancy and Reye's syndrome are examples of severe liver disease caused by microvesicular fatty change. The diagnosis of steatosis is made when fat in the liver exceeds 5–10% by weight. 
Defects in fatty acid metabolism are responsible for the pathogenesis of FLD, which may be due to an imbalance in energy consumption and its combustion, resulting in lipid storage, or can be a consequence of peripheral resistance to insulin, whereby the transport of fatty acids from adipose tissue to the liver is increased. Impairment or inhibition of receptor molecules (PPAR-α, PPAR-γ and SREBP1) that control the enzymes responsible for the oxidation and synthesis of fatty acids appears to contribute to fat accumulation. In addition, alcohol use disorder is known to damage mitochondria and other cellular structures, further impairing cellular energy mechanisms. On the other hand, non-alcoholic FLD may begin as an excess of unmetabolised energy in liver cells. Hepatic steatosis is considered reversible and to some extent nonprogressive if the underlying cause is reduced or removed. Severe fatty liver is sometimes accompanied by inflammation, a situation referred to as steatohepatitis. Progression to alcoholic steatohepatitis (ASH) or non-alcoholic steatohepatitis (NASH) depends on the persistence or severity of the inciting cause. Pathological lesions in both conditions are similar. However, the extent of inflammatory response varies widely and does not always correlate with degree of fat accumulation. Steatosis (retention of lipid) and onset of steatohepatitis may represent successive stages in FLD progression. Liver disease with extensive inflammation and a high degree of steatosis often progresses to more severe forms of the disease. Hepatocyte ballooning and necrosis of varying degrees are often present at this stage. Liver cell death and inflammatory responses lead to the activation of hepatic stellate cells, which play a pivotal role in hepatic fibrosis. The extent of fibrosis varies widely. Perisinusoidal fibrosis is most common, especially in adults, and predominates in zone 3 around the terminal hepatic veins. 
The progression to cirrhosis may be influenced by the amount of fat and degree of steatohepatitis and by a variety of other sensitizing factors. In alcoholic FLD, the transition to cirrhosis related to continued alcohol consumption is well-documented, but the process involved in non-alcoholic FLD is less clear. Diagnosis Most individuals are asymptomatic and are usually discovered incidentally because of abnormal liver function tests or hepatomegaly noted in unrelated medical conditions. Elevated liver enzymes are found in as many as 50% of patients with simple steatosis. The serum alanine transaminase (ALT) level is usually greater than the aspartate transaminase (AST) level in the nonalcoholic variant and the opposite in alcoholic FLD (AST:ALT more than 2:1). Simple blood tests may help to determine the magnitude of the disease by assessing the degree of liver fibrosis. For example, the AST-to-platelet ratio index (APRI score) and several other scores, calculated from the results of blood tests, can detect the degree of liver fibrosis and predict the future formation of liver cancer. Imaging studies are often obtained during the evaluation process. Ultrasonography reveals a "bright" liver with increased echogenicity. Pocket-sized ultrasound devices might be used as point-of-care screening tools to diagnose liver steatosis. Medical imaging can aid in diagnosis of fatty liver; fatty livers have lower density than spleens on computed tomography (CT), and fat appears bright in T1-weighted magnetic resonance images (MRIs). Magnetic resonance elastography, a variant of magnetic resonance imaging, is being investigated as a non-invasive method to diagnose fibrosis progression. Online calculators have been developed to assist in evaluating hepatic steatosis using CT and MRI findings. Histological diagnosis by liver biopsy is the most accurate measure of fibrosis and liver fat progression as of 2018. 
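The two blood-test rules mentioned above, the AST:ALT ratio pattern and the APRI score, can be sketched in Python. This is an illustration only, not diagnostic software: the APRI formula shown (AST relative to its upper limit of normal, divided by platelet count, times 100) and the function names are assumptions based on common clinical usage rather than anything specified in the article.

```python
# Hedged sketch of the blood-test rules described above; not clinical software.

def ast_alt_pattern(ast: float, alt: float) -> str:
    """AST:ALT > 2:1 suggests the alcoholic pattern; ALT > AST the nonalcoholic one."""
    if ast / alt > 2.0:
        return "alcoholic pattern (AST:ALT > 2:1)"
    if alt > ast:
        return "nonalcoholic pattern (ALT > AST)"
    return "indeterminate"

def apri(ast: float, ast_upper_limit_normal: float, platelets_1e9_per_l: float) -> float:
    """AST-to-platelet ratio index: (AST / ULN of AST) / platelets (10^9/L) * 100.
    Assumed form of the formula, per common usage."""
    return (ast / ast_upper_limit_normal) / platelets_1e9_per_l * 100

print(ast_alt_pattern(120, 50))        # AST:ALT = 2.4 -> alcoholic pattern
print(round(apri(80, 40, 150), 2))     # (80/40)/150*100 = 1.33
```

Such scores are only screening aids; as the text notes, liver biopsy remains the most accurate measure of fibrosis.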
Conventional imaging methods, such as ultrasound, CT and MRI, are not specific enough to detect fatty liver disease unless fat occupies at least 30% of the liver volume. Treatment Decreasing caloric intake by at least 30% or by approximately 750–1,000 kcal/day results in improvement in hepatic steatosis. For people with NAFLD or NASH, weight loss via a combination of diet and exercise was shown to improve or resolve the disease. In more serious cases, medications that decrease insulin resistance, hyperlipidemia, and those that induce weight loss such as bariatric surgery as well as vitamin E have been shown to improve or resolve liver function. Bariatric surgery, while not recommended in 2017 as a treatment for FLD alone, has been shown to revert FLD, NAFLD, NASH and advanced steatohepatitis in over 90% of people who have undergone this surgery for the treatment of obesity. In the case of long-term total-parenteral-nutrition-induced fatty liver disease, choline has been shown to alleviate symptoms. This may be due to a deficiency in the methionine cycle. Epidemiology NAFLD affects about 30% of people in Western countries and 10% of people in Asia. In the United States, rates are around 35% with about 7% having the severe form NASH. NAFLD affects about 10% of children in the United States. Recently, the term Metabolic dysfunction-associated fatty liver disease (MAFLD) has been proposed to replace NAFLD. MAFLD is a more inclusive diagnostic name as it is based on the detection of fatty liver by histology (biopsy), medical imaging or blood biomarkers but should be accompanied by either overweight/obesity, type 2 diabetes mellitus, or metabolic dysregulation. The new definition no longer excludes alcohol consumption or coexistence of other liver diseases such as viral hepatitis. Using this more inclusive definition, the global prevalence of MAFLD is 50.7%. 
Even using the old NAFLD definition, the disease is observed in up to 80% of obese people, 35% of whom progress to NASH, and in up to 20% of normal weight people, despite no evidence of excessive alcohol consumption. FLD is the most common cause of abnormal liver function tests in the United States. Fatty liver is more prevalent in Hispanic people than in white people, with black people having the lowest prevalence. In the study Children of the 90s, 2.5% of those born in 1991 and 1992 were found by ultrasound at the age of 18 to have non-alcoholic fatty liver disease; five years later transient elastography found over 20% to have fatty deposits on the liver, indicating non-alcoholic fatty liver disease; half of those were classified as severe. The scans also found that 2.4% had a degree of liver fibrosis, which can lead to cirrhosis. After the lockdowns of the COVID-19 pandemic, a study demonstrated that 48% of patients with liver steatosis gained weight, while 16% had a worsened steatosis grade. Weight gain was associated with poor adherence to the suggested diet, reduced levels of physical activity, and increased prevalence of homozygosity for the PNPLA3 rs738409 single nucleotide polymorphism. PNPLA3 rs738409 is already a known risk factor for NAFLD. Research A systematic review and meta-analysis, published in 2024, found that growth hormone therapy may help in the management of fatty liver disease. In animals Fatty liver disease can occur in pets such as reptiles (particularly turtles) and birds as well as mammals like cats and dogs. The most common cause is overnutrition. A distinct sign in birds is a misshapen beak. Fatty livers can be induced via gavage in geese or ducks to produce foie gras. Fatty liver can also be induced in ruminants such as sheep by a high-caloric diet. 
References External links Photo at Atlas of Pathology Diseases of liver Medical conditions related to obesity Histopathology Wikipedia medicine articles ready to translate no:Steatose pl:Alkoholowe stłuszczenie wątroby
Fatty liver disease
Chemistry
2,918
2,969,851
https://en.wikipedia.org/wiki/EcoHealth%20Alliance
EcoHealth Alliance (EHA) is a US-based non-governmental organization with a stated mission of protecting people, animals, and the environment from emerging infectious diseases. The nonprofit organization focuses on research aimed at preventing pandemics and promoting conservation in hotspot regions worldwide. The EcoHealth Alliance focuses on diseases caused by deforestation and increased interaction between humans and wildlife. The organization has researched the emergence of diseases such as Severe Acute Respiratory Syndrome (SARS), Nipah virus, Middle East respiratory syndrome (MERS), Rift Valley fever, the Ebola virus, and COVID-19. The EcoHealth Alliance also advises the World Organization for Animal Health (OIE), the International Union for Conservation of Nature (IUCN), the United Nations Food and Agriculture Organization (FAO), and the World Health Organization (WHO) on global wildlife trade, threats of disease, and the environmental damage posed by these. Following the outbreak of the COVID-19 pandemic, EcoHealth's ties with the Wuhan Institute of Virology were put into question in relation to investigations into the origin of COVID-19. Citing these concerns, the National Institutes of Health (NIH) withdrew funding to the organization in April 2020. Significant criticism followed this decision, including a joint letter signed by 77 Nobel laureates and 31 scientific societies. The NIH later reinstated funding to the organization as one of 11 institutions partnering in the Centers for Research in Emerging Infectious Diseases (CREID) initiative in August 2020, but all activities funded by the grant remain suspended. In 2022, the NIH terminated the EcoHealth Alliance grant, stating that "EcoHealth Alliance had not been able to hand over lab notebooks and other records from its Wuhan partner that relate to controversial experiments involving modified bat viruses, despite multiple requests." 
In 2023, an audit by the Office of Inspector General of the Department of Health and Human Services found that "NIH did not effectively monitor or take timely action to address" compliance problems with the EcoHealth Alliance. In December 2023, the EcoHealth Alliance denied allegations that it double-billed the NIH and United States Agency for International Development for research in China. In May 2024, the United States Department of Health and Human Services banned all federal funding for the EcoHealth Alliance. History Founded under the name Wildlife Preservation Trust International in 1971 by British naturalist, author, and television personality, Gerald Durrell, it then became The Wildlife Trust in 1999. In the fall of 2010, the organization changed its name to EcoHealth Alliance. The rebrand reflected a change in the organization's focus, moving solely from a conservation nonprofit, which focused mainly on the captive breeding of endangered species, to an environmental health organization with its foundation in conservation. The organization held an early professional conservation medicine meeting in 1996. In 2002, they published an edited volume on the field through Oxford University Press: Conservation Medicine: Ecological Health in Practice. In February 2008, they published a paper in Nature entitled “Global trends in emerging infectious diseases” which featured an early rendition of a global disease hotspot map. Using epidemiological, social, and environmental data from the past 50 years, the map outlined regions of the globe most at risk for emergent disease threats. EcoHealth Alliance's funding comes mostly from U.S. federal agencies such as the Department of Defense, Department of Homeland Security, and U.S. Agency for International Development. Between 2011 and 2020, its annual budget fluctuated between US$9 and US$15 million per year. 
COVID-19 pandemic Following the outbreak of the COVID-19 pandemic, EcoHealth Alliance has been the subject of controversy and increased scrutiny due to its ties to the Wuhan Institute of Virology (WIV)—which has been at the center of speculation since early 2020 that SARS-CoV-2 may have escaped in a lab incident. Prior to the pandemic, EcoHealth Alliance was the only U.S.-based organization researching coronavirus evolution and transmission in China, where they partnered with the WIV, among others. EcoHealth president Peter Daszak co-authored a February 2020 letter in The Lancet condemning "conspiracy theories suggesting that COVID-19 does not have a natural origin". However, Daszak failed to disclose EcoHealth's ties to the WIV, which some observers noted as an apparent conflict of interest. In June 2021, The Lancet published an addendum in which Daszak disclosed his cooperation with researchers in China. In April 2020, the NIH ordered EcoHealth Alliance to cease spending the remaining $369,819 from its current NIH grant at the request of the Trump administration, pressuring them by stating "it must hand over information and materials from the Chinese research facility to resume funding for suspended grant" in reference to the Wuhan Institute of Virology. The canceled grant was supposed to run through 2024. Funding from NIH resumed in August 2020 after an uproar from "77 U.S. Nobel laureates and 31 scientific societies". Work conducted at the Wuhan Institute of Virology under an NIH grant to the EHA has been at the center of political controversies during the pandemic. One such controversy centered on whether any experiments conducted under the grant could be accurately described as "gain-of-function" (GoF) research. NIH officials (including Anthony Fauci) unequivocally denied during 2020 congressional hearings that the EHA had conducted GoF research with NIH funding. 
In October 2021, the EHA submitted a progress report detailing the results of a past experiment where some laboratory mice lost more weight than expected after being infected with a modified bat coronavirus. The NIH subsequently sent a letter to the congressional House Committee on Energy and Commerce describing this experiment, but did not refer to it as "gain-of-function." Whether such research qualifies as "gain-of-function" is a matter of considerable debate among relevant experts. In May 2024, the United States Department of Health and Human Services banned all federal funding for the EcoHealth Alliance, saying that the organization did not properly monitor research activities at the WIV and failed to report on its high-risk experiments. On 17 January 2025, the Department of Health and Human Services (HHS) issued formal, 5-year debarments for both Daszak and his group. EcoHealth had dismissed Daszak as president as of 6 January, according to an HHS notice. Programs PREDICT EcoHealth Alliance partners with USAID on the PREDICT subset of USAID's EPT (Emerging Pandemic Threats) program. PREDICT seeks to identify which emerging infectious diseases pose the greatest risk to human health. Many of EcoHealth Alliance's international collaborations with in-country organizations and institutions fall under the PREDICT umbrella. Scientists in the field collect samples from local fauna in order to track the spread of potentially harmful pathogens and to stop them from becoming outbreaks. Scientists also train local technicians and veterinarians in animal sampling and information gathering. Active countries include Bangladesh, Cameroon, China, Democratic Republic of the Congo, Egypt, Ethiopia, Guinea, India, Indonesia, Jordan, Kenya, Liberia, Malaysia, Myanmar, Nepal, Sierra Leone, Sudan, South Sudan, Thailand, Uganda, and Vietnam. 
IDEEAL IDEEAL (Infectious Disease Emergence and Economics of Altered Landscapes Program) attempts to investigate the impact of deforestation and land-use change on the risk of zoonoses in Sabah, Malaysia. This project focuses on the local palm oil industry in particular. The study also offers to the country's corporate leaders and policymakers long-term alternatives to large-scale deforestation. The program is headquartered at the Malaysian Development Health Research Unit (DHRU), which was developed in collaboration with the Malaysian University of Sabah. Bat Conservation A growing body of research indicates that bats are an important factor in both ecosystem health and disease emergence. A number of hypotheses have been proposed for the high number of zoonoses that have come from bat populations in recent decades. One group of researchers hypothesized “that flight, a factor common to all bats but to no other mammals, provides an intensive selective force for coexistence with viral parasites through a daily cycle that elevates metabolism and body temperature analogous to the fever response in other mammals. On an evolutionary scale, this host-virus interaction might have resulted in the large diversity of zoonotic viruses in bats, possibly through bat viruses adapting to be more tolerant of the fever response and less virulent to their natural hosts.” Project Deep Forest According to the FAO (Food and Agriculture Organization), roughly 18 million acres of forest (roughly the size of Panama) are lost every year due to deforestation. Increased contact between humans and the animal species whose habitat is being destroyed has led to increases in zoonotic disease. EcoHealth Alliance scientists are testing species for pathogens in areas with very little, moderate, and complete deforestation in order to track potential outbreaks. This data is used to promote the preservation of natural lands and diminish the negative effects of land-use change. 
Project DEFUSE Project DEFUSE was a rejected DARPA grant application, which proposed to sample bat coronaviruses from various locations in China and Southeast Asia. To evaluate whether bat coronaviruses might spill over into the human population, the grantees proposed to create chimeric coronaviruses which were mutated in different locations, before evaluating their ability to infect human cells in the laboratory. One proposed alteration was to modify bat coronaviruses to insert a cleavage site for the Furin protease at the S1/S2 junction of the spike (S) viral protein. Another part of the grant aimed to create noninfectious protein-based vaccines containing just the spike protein of dangerous coronaviruses. These vaccines would then be administered to bats in caves in southern China to help prevent future outbreaks. Co-investigators on the rejected proposal included Ralph Baric from UNC, Linfa Wang from Duke–NUS Medical School in Singapore, and Shi Zhengli from the Wuhan Institute of Virology. See also Durrell Wildlife Conservation Trust Wildlife Preservation Canada References External links Durrell Wildlife Conservation Trust Wildlife Preservation Canada Environmental organizations based in the United States Environmental microbiology
EcoHealth Alliance
Environmental_science
2,146
47,188,866
https://en.wikipedia.org/wiki/Art%20Jewelry%20Forum
Art Jewelry Forum (AJF) is a nonprofit international organization founded in 1997 that advocates for the field of contemporary art jewelry through education, discourse, publications, grants, and awards. Publications Art Jewelry Forum publishes online articles as well as printed books. AJF's online articles cover historical pieces and movements, theoretical interpretations of work, and exhibition reviews. Contributors for the online articles include staff writers as well as professionals in the field. Printed books from Art Jewelry Forum include Geography (exhibition catalog), AJF Best of Interviews, and Show and Tales. Art Jewelry Forum also initiated and funded the publication of Contemporary Jewelry in Perspective by Lark Crafts. The exhibition catalog Geography was Art Jewelry Forum's first publication in 2011. Geography was printed in conjunction with an exhibition of the same name that was presented at SOFA Chicago 2011 and at the Society of North American Goldsmiths conference in Seattle in 2011. Art Jewelry Forum worked with Lark Crafts, a subsidiary of Sterling Publishing, in 2013 to publish Contemporary Jewelry in Perspective. Contemporary Jewelry in Perspective is broken into three sections: "the first exploring what kind of thing contemporary jewelry is, the second exploring its history, and the third exploring opportunities and challenges for the field". Bruce Metcalf observes that, within these sections, "There are two themes that run throughout the book. One is that studio jewelry should be critical. The other is that the most fertile territory for the present-day practitioner is in the realm of the hybrid." AJF Best of Interviews was published in 2014 by Art Jewelry Forum. AJF Best of Interviews "corrals some of the site’s most interesting content: interviews with jewelry makers and others central to the field. 
Taking part in the 20 lively conversations are makers such as Lola Brooks, Tanel Veenre, and Jamie Bennett; dealers such as Sienna Patti; curators such as Bruce Pepich and Ursula Ilse-Neuman; and jewelry aficionados such as Madeleine Albright... The focus is on intelligent questions and the voices of the interviewees – captured in fresh, informal exchanges that will captivate lovers of art jewelry" writes Monica Moses, editor in chief of American Craft Magazine, published by the American Craft Council. Show and Tales was published by Art Jewelry Forum in 2015 and released in Munich in conjunction with the annual Schmuck fair. Show and Tales focuses on exhibition making with regard to jewelry, making it the first publication on the topic. Show and Tales is broken into three sections that cover historical landmark exhibitions of jewelry, challenges in curating craft and jewelry, and exhibition reviews. It contains essays by Glenn Adamson (USA), David Beytelmann (AR), Susan Cummins (USA), (NL), Monica Gaspar (ES), Toni Greenbaum (USA), Marthe Le Van (USA), Benjamin Lignel (FR), Kellie Riggs (USA), Damian Skinner (NZ), (NO), Namita Gupta Wiggers (USA), among others. Exhibitions To date, Art Jewelry Forum has produced one exhibition, titled Geography, which was shown at SOFA Chicago and at the Society of North American Goldsmiths conference in Seattle in 2011. Geography was a thematic exhibition focusing on the scientific view of physical geography, the relational cultural geography, and the effects of natural surroundings on artists. 
Geography was curated by Susan Cummins and Mike Holmes and features over seventy pieces of jewelry from a wide array of international artists: Fran Allison, Talya Baharal, Agelio Batle, Suzanne Beautyman, , Alexander Blank, , Angela Bubash, Eric Burris, Suzanne Carlsen, Attai Chen, Jim Cotter, , Bettina Dittlmann, Georg Dobler, Iris Eichenberg, , Karen Gilbert, Gabrielle Gould, Mielle Harvey, Stefan Heuser, Rory Hooper, Marian Hosking, Sergey Jiventin, Soyeon Kim, Jenny Klemming, Brooke Marks Swanson, Sharon Massey, Christine Matthias, , Malaika Najem, Annelies Planteydt, Alan Preston, , Tina Rath, Miriam Rowe, Deborah Rudolph, Estela Saez, Dana Seachuga, Nolia Shakti, Deganit Stern Schocken, Joyce Scott, Helen Shirk, Despo Sophocleous, Cynthia Toops, Julia Turner, Tarja Tuupanen, Sally von Bargen, Lisa Walker, Areta Wilkinson, Francis Willemstijn, Andrea Williams, Nancy Worden Grants Art Jewelry Forum awards grants in three categories: the Emerging Artist Award, the Exhibition Award, and the Speakers and Writers Awards. Emerging Artist Award The Emerging Artist Award is an annual juried award, carrying a prize of US$7,500, for emerging artists who make wearable art jewelry. Past winners include: 2014 - Seulgi Kwon, 2013 - Sooyeon Kim, 2012 - Noon Passama Sanpatchayapong, 2011 - Farrah Al-Dujaili, 2010 - Agnes Larsson, 2009 - Sharon Massey, 2008 - Masumi Kataoka, 2007 - Andrea Janosik, 2006 - Natalya Pinchuk. Exhibition Award The Exhibition Award provides financial assistance for exhibitions and catalogs that focus on art jewelry. Unlike the annual Emerging Artist Award, the Exhibition Award accepts applications on a rolling basis; awards are based on the merit of the proposed project and on Art Jewelry Forum's annual funds. 
Past support of the Exhibition Award has gone to: 2012- Shift: Contemporary Makers That Define, Expand and Contradict The Field of Art Jewelry, Grunwald Gallery of Art, Indiana University, for exhibition support 2011- Geography, Art Jewelry Forum, for catalog publication 2010- Atelier Janiyé and the Legacy of Miye Matsukata, Fuller Craft Museum, for catalog publication 2009- Lisa Gralnick: The Gold Standard, Bellevue Arts Museum, for exhibition support 2009- Adornment and Excess: Jewelry in the 21st Century, Miami University Art Museum, for exhibition support 2008- Decorative Resurgence, Rowan University, for catalog publication 2007- Women of Metal, University of Wisconsin at Whitewater, for exhibition support 2006- For Zymrina, A Prostitute of Pompeii, Houston Museum of Fine Arts, acquisition of Keith Lewis’ Neckpiece 2005- Craft Emergency Relief Fund Speakers and Writers Award Art Jewelry Forum awards the Speakers and Writers Award to individuals who are critically engaged in the field. Most often the award is granted to help cover expenses of speakers and panelists at the annual Sculptural Objects and Functional Art (SOFA) NY and SOFA Chicago fairs, and the annual Society of North American Goldsmiths conference. Past recipients of the Speakers and Writers Award are: 2012- Kiff Slemmons, More Than One to Make One, lecture, SOFA Chicago 2012- Garth Clark, Who's Your Daddy, keynote lecture, Society of North American Goldsmiths 2012- Ursula Ilse-Neuman, The Transcendent Jewelry of Margaret De Patta: Vision in Motion, lecture, SOFA NY 2011- Contributing Writers: Jillian Moore, Gabriel Craig, commissioning of articles for Art Jewelry Forum 2011- Davra Taragin, Iris Eichenberg, Seth Papac, Gemma Draper, Monomater, panel discussion, SOFA Chicago 2011- Jeannine Falino, For People Who Are Slightly Mad, lecture, SOFA NY Founder and select staff Susan Cummins is the founder of AJF. 
She is also the director of the Rotasa Foundation, and previously owned and operated Susan Cummins Gallery for eighteen years until its closing in 2002. Yvonne Montoya is the current Executive Director of Art Jewelry Forum. Nathalie Mornu is the current editor and writer for AJF. Nathalie Mornu has edited nonfiction and DIY books for the last 15 years; she has a particular interest in jewelry and crafts. She spent five years at the Appalachian Center for Craft studying jewelry fabrication and furniture-making before changing course altogether and getting a degree in journalism. Nathalie then spent a dozen years in the editorial department at Lark Books, where her background in crafts proved an excellent fit. In her tenure at Lark, she worked with former Art Jewelry Forum editor Damian Skinner to copy edit Contemporary Jewelry in Perspective. Board of Directors/Chairpersons Bonnie Levine Board Chair | Chair of the Trips Committee Sarah Turner Treasurer John Rose Marketing Director Cindi Strauss Chair of the Editorial Committee Bella Neyman Chair of the Events and Trips Committee Marta Costa Reis Chair of the Award and Grant Committee Board Members Sofia Björkman Raïssa Bump (past Board Chair) Emily Cobb Barbara Paris Gifford Toni Greenbaum David Dao References External links Art and design organizations Jewellery organizations Charities based in California
Art Jewelry Forum
Engineering
1,741
3,183,496
https://en.wikipedia.org/wiki/Boost%20gauge
A boost gauge is a pressure gauge that indicates manifold air pressure or turbocharger or supercharger boost pressure in an internal combustion engine. Boost gauges are commonly mounted on the dashboard, on the driver's side pillar, or in a radio slot. Turbochargers and superchargers are both engine-driven air compressors (exhaust-driven and mechanically driven, respectively) and provide varying levels of boost according to engine rpm, load, etc. Quite often there is a power band within a given range of available boost pressure, and it is an aid to performance driving to be aware of when that power band is being approached, in the same way a driver wants to be aware of engine rpm. A boost gauge is also used to ensure that excessive pressure is not generated when boost is raised above the OEM level on a production turbocharged car. Simple methods can be employed to increase factory boost levels, such as bleeding air off the wastegate diaphragm to 'fool' it into staying closed longer, or installing a boost controller. To prevent the air-fuel ratio from going lean (caused by raising boost beyond the fuel system's capacity), care must be taken to monitor boost pressure, along with the oxygen level in the exhaust gas, using an air-fuel ratio meter that monitors the oxygen sensor. A boost gauge will measure pressure in psi, bar or kPa; many also measure manifold vacuum pressure in inches of mercury (in. Hg) or mm of mercury (mm Hg). See also Automatic Performance Control Pressure measurement Notes Auto parts Pressure gauges
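The units mentioned above are related by fixed conversion factors (1 psi = 6.894757 kPa, 1 bar = 100 kPa, 1 inHg = 3.386389 kPa); a minimal sketch of converting a gauge reading between them (the function names are illustrative, not from any library):

```python
# Convert a boost/vacuum reading between the units common on gauges.
# All values are gauge (relative) pressure, not absolute pressure.
PSI_TO_KPA = 6.894757
BAR_TO_KPA = 100.0
INHG_TO_KPA = 3.386389

def psi_to_kpa(psi: float) -> float:
    return psi * PSI_TO_KPA

def psi_to_bar(psi: float) -> float:
    return psi * PSI_TO_KPA / BAR_TO_KPA

def inhg_vacuum_to_kpa(inhg: float) -> float:
    """Manifold vacuum in inches of mercury, expressed in kPa."""
    return inhg * INHG_TO_KPA

# A common aftermarket boost target of 15 psi:
print(f"{psi_to_bar(15):.3f} bar")  # ≈ 1.034 bar
print(f"{psi_to_kpa(15):.1f} kPa")  # ≈ 103.4 kPa
```

The same factors work in reverse, so a gauge calibrated in bar can be cross-checked against one calibrated in psi by dividing out the constants.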
Boost gauge
Technology,Engineering
321
6,160,585
https://en.wikipedia.org/wiki/58%20Eridani
58 Eridani is a main-sequence star in the constellation Eridanus. It is a solar analogue, having similar physical properties to the Sun. The star has a relatively high proper motion across the sky, and it is located 43 light-years away. It is a probable member of the IC 2391 moving group of stars that share a common motion through space. Characteristics This is a BY Draconis variable with the designation IX Eridani, which ranges in magnitude from 5.47 down to 5.51 with a period of 11.3 days. The X-ray emissions from this star's corona indicate an age of less than one billion (10⁹) years, compared to 4.6 billion for the Sun, so it is still relatively young for a star of its mass. Starspot activity has also been detected, which varies from year to year. A circumstellar disc of dust particles has been detected in orbit around 58 Eridani. See also List of nearest bright stars References External links Eridanus (constellation) G-type main-sequence stars Solar analogs Eridani, 58 0177 Durchmusterung objects 1532 022263 030495 BY Draconis variables Eridani, IX
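As a small illustration of the figures above, the star's absolute magnitude follows from its apparent magnitude and distance via the standard distance modulus, M = m − 5·log₁₀(d/10 pc); the function name is illustrative:

```python
import math

LY_PER_PARSEC = 3.26156  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance modulus: M = m - 5*log10(d_pc) + 5."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc) + 5

# 58 Eridani at maximum brightness: m ≈ 5.47, d ≈ 43 ly
M = absolute_magnitude(5.47, 43.0)
print(f"M ≈ {M:.2f}")  # ≈ 4.87
```

The result, roughly 4.87, sits close to the Sun's absolute visual magnitude of about 4.83, consistent with the article's description of the star as a solar analogue.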
58 Eridani
Astronomy
258
23,294,692
https://en.wikipedia.org/wiki/Prothipendyl
Prothipendyl (brand names Dominal, Timovan, Tolnate), also known as azapromazine or phrenotropin, is an anxiolytic, antiemetic, and antihistamine of the azaphenothiazine group, which is marketed in Europe and is used to treat anxiety and agitation in psychotic syndromes. It differs from promazine only by the replacement of one carbon atom with a nitrogen atom in the tricyclic ring system. Prothipendyl is said to lack antipsychotic effects and, accordingly, appears to be a weaker dopamine receptor antagonist than other phenothiazines. Synthesis Prothipendyl can be prepared by alkylating 1-azaphenothiazine [261-96-1] (1) with 3-dimethylaminopropyl chloride [109-54-6] (2) in the presence of a sodium hydride suspension. See also: Pipazetate. References Antiemetics Anxiolytics H1 receptor antagonists Hypnotics Phenothiazines Sedatives
Prothipendyl
Biology
222
30,894,361
https://en.wikipedia.org/wiki/EDAS
EDAS was a database of alternatively spliced human genes; it is no longer available online. See also AspicDB database References External links http://www.gene-bee.msu.ru/edas/. Genetics databases Gene expression Spliceosome RNA splicing
EDAS
Chemistry,Biology
64
27,132,938
https://en.wikipedia.org/wiki/MasterSpec
MasterSpec is a master guide building and construction specification system used within the United States by architects, engineers, landscape architects, and interior designers to express the results expected in construction. MasterSpec content and software are exclusively developed and distributed by Deltek (formerly Avitru) for the American Institute of Architects (AIA). It was developed in 1969 by the AIA to provide architects a means to create technical specifications without spending excessive time researching products and writing up-to-date technical specifications from scratch. Content for MasterSpec is vetted by AIA-sponsored architectural and engineering review committees. In 2019, the company was acquired by Deltek, Inc. Content Libraries Today, MasterSpec consists of over 900 sections packaged in practice-specific libraries, following the MasterFormat 2018 standard: Landscape Site/Civil Structural Historic Preservation Commissioning Interiors Mechanical Electrical + Communication Architectural Building Architecture + Engineering Each MasterSpec section is organized into three parts following SectionFormat and consists of five components: Summary - Overview of section scope and content Evaluations - Qualitative overview of products and discussion of recent technologies, including: Testing procedures and applicable codes Application and implementation suggestions Environmental considerations, green building, or LEED information References and standards Links to the manufacturer and standards organizations Master guide technical specifications in three-part CSI format along with editor's notes (instructions) and cross-references to Evaluations. Drawing Coordination Checklist - Checklist of items to coordinate the section with the drawings. Specification Coordination Checklist - Checklist of items to coordinate this section with other sections. 
Formats The MasterSpec technical specifications are available in three distinct formats or types: Full Length - For moderate- to large-scale, complex projects and varied bidding and contracting situations Short Form - Abridged versions of the sections with the most common products Outline - Corresponding outline specifications for use during the design development and schematic phases Timeline 1969: AIA’s MasterSpec is first distributed in paper form and includes Architectural and Civil & Structural Engineering Content 1973: ARCOM provides MasterSpec in ASCII format on magnetic tape 1984: MasterSpec is distributed only in floppy disk and paper formats 1988: AIA assigns ARCOM to be the exclusive developer and distributor of all electronic and paper versions of MasterSpec; AIA dissolves contracts with all other “Automators,” asking them to work under ARCOM 1995: AIA awards ARCOM an exclusive license to develop and distribute MasterSpec 1999: ARCOM introduces MasterSpec on CD-ROM 2005: ARCOM issues MasterSpec in MasterFormat 2006, a major change in the 40-year-old organization of specifications 2017: ARCOM acquires InterSpec, bringing e-SPECS and specification services to ARCOM; ARCOM changes its name to Avitru as part of the acquisition 2019: Deltek acquires Avitru References External links Official website Architectural design
MasterSpec
Engineering
586
5,721,649
https://en.wikipedia.org/wiki/Dok-7
Dok-7 is a non-catalytic cytoplasmic adaptor protein that is expressed specifically in muscle and is essential for the formation of neuromuscular synapses. Further, Dok-7 contains pleckstrin homology (PH) and phosphotyrosine-binding (PTB) domains that are critical for Dok-7 function. Finally, mutations in Dok-7 are commonly found in patients with limb-girdle congenital myasthenia. Dok-7 regulates neuromuscular synapse formation by activating MuSK The formation of neuromuscular synapses requires the muscle-specific receptor tyrosine kinase (MuSK). In mice genetically mutant for MuSK, acetylcholine receptors (AChRs) fail to cluster and motor neurons fail to differentiate. Because Dok-7 mutant mice are indistinguishable from MuSK mutant mice, these observations suggest Dok-7 might regulate MuSK activation. Indeed, Dok-7 binds phosphorylated MuSK and activates MuSK in purified protein preparations and in muscle in vivo by transgenic overexpression. Furthermore, the nerve-derived organizing factor agrin fails to stimulate MuSK activation in muscle cells genetically null for Dok-7. Thus, Dok-7 is both necessary and sufficient for the activation of MuSK. Dok-7 signaling The requirement for MuSK in the formation of the NMJ was primarily demonstrated by mouse "knockout" studies. In mice which are deficient for either agrin or MuSK, the neuromuscular junction does not form. Upon activation by its ligand agrin, MuSK signals via the proteins Dok-7 and rapsyn to induce "clustering" of acetylcholine receptors (AChR). Cell signaling downstream of MuSK requires Dok-7. Mice which lack this protein fail to develop endplates. Further, forced expression of Dok-7 induces the tyrosine phosphorylation, and thus the activation, of MuSK. Dok-7 interacts with MuSK by way of a protein domain called a PTB domain. In addition to the AChR, MuSK, and Dok-7, other proteins are then gathered to form the endplate of the neuromuscular junction. 
The nerve terminates at the endplate, forming the neuromuscular junction, the structure required to transmit nerve impulses to the muscle and thus initiate muscle contraction. Congenital myasthenic syndrome Homozygous mutation of Dok-7 is responsible for a form of congenital myasthenic syndrome (CMS) that is unique among disorders in this category because it affects muscles in the limbs and trunk but mostly spares the face, eyes, and functions of the mouth and pharynx (chewing, swallowing and speech). Salbutamol can be effective in relieving CMS symptoms attributable to Dok-7 mutations. References Developmental neuroscience Proteins
Dok-7
Chemistry
637
77,135,154
https://en.wikipedia.org/wiki/Zimlovisertib
Zimlovisertib (PF-06650833) is a drug which acts as a selective inhibitor of the enzyme interleukin-1 receptor-associated kinase 4 (IRAK-4). It has anti-inflammatory effects and has been trialed for various indications, including hidradenitis suppurativa and treatment of COVID-19 infection; while it has not been adopted into clinical use, it continues to be used for research in this area. See also Emavusertib References Interleukin-1 receptor-associated kinase 4 inhibitors Amides Methoxy compounds Pyrrolidones Ethers Organofluorides Isoquinolines
Zimlovisertib
Chemistry
147
3,362,709
https://en.wikipedia.org/wiki/Partition%20equilibrium
Partition equilibrium is a special case of chemical equilibrium wherein one or more solutes are in equilibrium between two immiscible solvents. The most common chemical equilibrium systems involve reactants and products in the same phase - either all gases or all solutions. However, it is also possible to get equilibria between substances in different phases, such as a liquid and a gas that do not mix (are immiscible). One example is gas-liquid partition equilibrium chromatography, where an analyte equilibrates between a gas and a liquid phase. Partition equilibria are described by Nernst's distribution law. Partition equilibria are most commonly seen and used in liquid–liquid extraction. The time needed to reach partition equilibrium is influenced by many factors, such as temperature, relative concentrations, surface area of the interface, degree of stirring, and the nature of the solvents and solute. Example For example, ammonia (NH3) is soluble in both water (aq) and the organic solvent trichloromethane (CHCl3) - two immiscible solvents. If ammonia is first dissolved in water, and then an equal volume of trichloromethane is added, and the two liquids shaken together, the following equilibrium is established: Kc = [NH3 (CHCl3)] / [NH3 (aq)] (where Kc is the equilibrium constant) The equilibrium concentrations of ammonia in each layer can be established by titration with standard acid solution. It can thus be determined that Kc remains constant, with a value of 0.4 in this case. Partition coefficient This kind of equilibrium constant measures how a substance distributes or partitions itself between two immiscible solvents. It is called the partition coefficient or distribution coefficient. Partition equilibrium chromatography See: Partition chromatography, Gas chromatography Partition equilibrium chromatography is a type of chromatography that is typically used in gas chromatography (GC) and high-performance liquid chromatography (HPLC). 
The stationary phase in GC is a high-boiling liquid bonded to a solid surface, and the mobile phase is a gas. In gas-liquid chromatography, analyte from the mobile gas phase equilibrates with the liquid phase. Molecules more soluble in the liquid phase will remain longer in the column, allowing for separation using partition equilibria. See also Liquid–liquid extraction References Equilibrium chemistry Chromatography Gas chromatography
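The ammonia example above (Kc = 0.4 between equal volumes of trichloromethane and water) translates into a short calculation; it also shows the classic extraction result that several small portions of fresh solvent remove more solute than one large portion. The function name is illustrative, not from any particular library:

```python
# Fraction of solute remaining in the aqueous phase after n extractions
# with fresh organic solvent, given partition coefficient
# K = [solute]_org / [solute]_aq and the volume ratio V_org / V_aq.
def fraction_remaining(K: float, volume_ratio: float = 1.0, n: int = 1) -> float:
    return (1.0 / (1.0 + K * volume_ratio)) ** n

# Ammonia between water and CHCl3, equal volumes, Kc = 0.4:
print(fraction_remaining(0.4))          # ≈ 0.714 stays in the water layer
# Three successive equal-volume extractions beat one with triple the solvent:
print(fraction_remaining(0.4, 1.0, 3))  # ≈ 0.364
print(fraction_remaining(0.4, 3.0, 1))  # ≈ 0.455
```

Since Kc = 0.4 favors the aqueous layer, most of the ammonia stays in the water after a single shake, which matches the titration result described above.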
Partition equilibrium
Chemistry
518
22,709,409
https://en.wikipedia.org/wiki/Workplace%20deviance
Workplace deviance, in group psychology, may be described as the deliberate desire to cause harm to an organization – more specifically, a workplace. The concept has become an instrumental component in the field of organizational communication. More accurately, it can be seen as "voluntary behavior that violates institutionalized norms and in doing so threatens the well-being of the organization". Reasons Psychological contract Employees often create a set of expectations about their workplace; people tend to make psychological contracts with their organizations. When these expectations are not met, the employee may "perceive a psychological contract breach by their employers". This "breach" of the psychological contract then presents potential problems, particularly in the workplace. Workplace deviance may arise from the worker's perception that their organization has mistreated them in some manner. Employees then resort to misbehaving (or acting out) as a means of taking revenge on their organization for the perceived wrongdoing. Workplace deviance may be viewed as a form of negative reciprocity. "A negative reciprocity orientation is the tendency for an individual to return negative treatment for negative treatment". In other words, the maxim "an eye for an eye" is a concept that some employees strongly feel is a suitable approach to their problem. However, what is critical in understanding employee deviance is that the employee perceives being wronged, whether or not mistreatment actually occurred. Abusive supervision Workplace deviance is also closely related to abusive supervision. Abusive supervision is defined as the "subordinates' perceptions of the extent to which their supervisors engage in the sustained display of hostile verbal and nonverbal behaviors". 
This could be when supervisors ridicule their employees, give them the silent treatment, remind them of past failures, fail to give proper credit, wrongfully assign blame or blow up in fits of temper. It may seem that employees who are abused by their supervisor will either directly retaliate or withdraw by quitting the job, but in reality many strike out against their employer by engaging in deviant organizational behaviors. Since employees control many of the organization's resources, they often use, or abuse, anything they can. This abuse of resources may come in the form of time, office supplies, raw materials, finished products or the services that they provide. This usually occurs in two steps. The first step is that commitment is destroyed and employees stop caring about the welfare of the employer. The second step is that the abused employee gets the (normally implied) approval of their coworkers to commit deviant acts. Workplace experiences may fuel the worker to act out. Research has been conducted demonstrating that the perception of not being respected is one of the main causes for workplace deviance; workplace dissatisfaction is also a factor. According to Bolin and Heatherly, "dissatisfaction results in a higher incidence of minor offenses, but does not necessarily lead to severe offense". An employee who is less satisfied with their work may become less productive as their needs are not met. In the workplace, "frustration, injustices and threats to self are primary antecedents to employee deviance". Although workplace deviance does occur, the behavior is not universal. There are two preventive measures that business owners can use to protect themselves. The first is strengthening the employee's commitment by reacting strongly to abusive supervision so that the employee knows that the behavior is not accepted. 
Holding the employee in high esteem by reminding them of their importance, or setting up programs that communicate concern for the employee, may also strengthen employee commitment. Providing a positive ethical climate can also help. Employers can do this by having a clear code of conduct that is applied to managers and employees alike. Types Workplace deviance may be expressed in various ways. Employees can engage in minor, extreme, nonviolent or violent behavior, which ultimately leads to an organization's decline in productivity. Interpersonal and organizational deviance are two forms of workplace deviance which are directed differently; however, both cause harm to an organization. Interpersonal deviance Interpersonal deviance can occur when misconduct "target(s) specific stakeholders such as coworkers". Behavior falling within this subgroup of employee deviance includes gossiping about coworkers and assigning blame to them. These minor (but unhealthy) behaviors, directed at others, are believed to occur as some employees perceive "a sense of entitlement often associated with exploitation". In other words, they feel the need to misbehave in ways that will benefit them. Organizational deviance Deviant behavior typically aimed directly at the organization is often referred to as organizational deviance. Organizational deviance encompasses production and property deviance. Workplace-deviant behavior may be expressed as tardiness or excessive absenteeism. These behaviors have been cited by some researchers as "withdraw(al) behaviors…such behaviors allow employees to withdraw physically and emotionally from the organization". Silence Employee silence is also considered a deviant behavior in the workplace, falling into the realms of both interpersonal and organizational deviance. Silence becomes employee deviance when "an employee intentionally or unintentionally withholds any kind of information that might be useful to the organization". 
The problem occurs if an employee fails to disclose important information, which detrimentally affects the effectiveness of the organization due to poor communication. Coworker backstabbing Coworker backstabbing occurs to some degree in many workplaces. It consists of one employee doing something to another to get a "leg up" on them. Strategies used for backstabbing include dishonesty, blame (or false accusation), discrediting others and taking credit for another's work. Motives for backstabbing include disregarding others' rights in favor of one's own gain, self-image management, revenge, jealousy, and personal reasons. Cyber loafing A novel form of workplace deviance has emerged in recent years as technology becomes a bigger part of people's work lives. Internet workplace deviance (or "cyber loafing") has become another way for employees to avoid the tasks at hand. This includes surfing the web and doing non-work-related tasks on the internet such as chatting on social-networking sites, online shopping and other activities. Production deviance All behaviors in which deviant employees partake ultimately have a negative impact on the overall productivity of the organization. For this reason, all are considered production deviance. Production deviance is "behavior that violates formally prescribed organizational norms with respect to minimal quality and quantity of work to be accomplished as part of one's job". Property deviance More serious cases of deviant behavior harmful to an organization concern property deviance. Property deviance is "where employees either damage or acquire tangible assets…without authorization". This type of deviance typically involves theft but may include "sabotage, intentional errors in work, misusing expense accounts", among other examples. Other types Deviant behavior can be much more extreme, involving sexual harassment and even violence. All these deviant behaviors create problems for the organization. 
It is costly for an organization to pay employees who are not working efficiently. Reducing The relationships employees have with their organization are crucial, as they can play an important role in the development of workplace deviance. Employees who perceive their organization or supervisor(s) as more caring (or supportive) have been shown to have a reduced incidence of workplace-deviant behaviors. Supervisors, managers and organizations are aware of this, and "assess their own behaviors and interactions with their employees and understand while they may not intend to abuse their employees they may be perceived as doing so…". Organizational justice and the organizational climate are also critical, since the quality of the work experience can impact employee behavior in the workplace. Organizational justice may be organized into three subcategories: procedural, distributive and interactional justice. Procedural justice is concerned with how the decision-making process was made. Distributive justice, on the other hand, considers the actual decision. Interactional justice involves the interpersonal relationship and sense of fairness which employees have with supervisors and other authority figures within the organization. Research indicates that procedural justice (combined with interactional justice) is beneficial in reducing workplace-deviant behavior. Employees who are consulted (and given an opportunity to be involved in the decision-making processes at their organization) are less likely to act out, since their voices are valued. Workplace deviance is a phenomenon which occurs frequently within an organization. Ultimately, it is the managers' and the organization's responsibility to uphold the norms to which the organization wishes to adhere; it is the organization's job to create an ethical climate. If organizations have authority figures who demonstrate their ethical values, a healthier workplace environment is created. 
"Research has suggested that managers' behavior influences employee ethical decision-making". Employees who perceive themselves as being treated respectfully and valued are those less likely to engage in workplace deviance. See also Counterproductive work behavior Deviance (sociology) Gaming the system Machiavellianism in the workplace Malicious compliance Workplace bullying Workplace harassment Workplace revenge Footnotes References Bennett and Robinson (2003) Bolin, A. and Heatherly, L. (2001). "Predictors of Employee Deviance: The Relationship between Bad Attitudes and Bad Behaviors." Journal of Business and Psychology, 15(3), p. 405. Chiu, S. and Peng, J. (2008). "The relationship between psychological contract breach and employee deviance: The moderating role of hostile attributional style." Journal of Vocational Behavior, 73(4), 426-433. Everton, W.J., et al. (2007). "Be nice or else: understanding reasons for employee's deviant behaviors." The Journal of Management Development, 26(2), 117. Griffin, R.W. and O'Leary-Kelly, A.M. (2004). The Dark Side of Organizational Behavior. Wiley, New York. Harris, L.C. and Ogbonna, E. (2006). "Service Sabotage: A Study of Antecedents and Consequences." Academy of Marketing Science Journal, 34(4), 543-599. Hollinger, R. and Clark, J. (1982). "Employee Deviance: A Response to Perceived Quality of the Work Experience." Work and Occupations, 9(1), 97-114. Litzky, B.E., et al. (2006). "The Good, the Bad, and the Misguided: How Managers Inadvertently Encouraged Deviant Behaviors." Academy of Management Perspectives, 13(5), 91-100. Malone, Patty. "Coworker Backstabbing: Strategies, Motives, and Responses." Paper presented at the annual meeting of the International Communication Association, TBA, San Francisco, CA, May 23, 2007. Mitchell, M. and Ambrose, M.L. (2007). "Abusive Supervision and Workplace Deviance and the Moderating Effects of Negative Reciprocity Beliefs." Journal of Applied Psychology, 92(4), 1159-1168. 
Pulich, M. and Tourigny, L. (2004). "Workplace Deviance: Strategies for Modifying Employee Behavior." The Health Care Manager, 23(4), 290-301. Tangirala, Subrahmaniam, and Rangaraj Ramanujam (2008). "Employee Silence on Critical Work Issues: The Cross Level Effects of Procedural Justice Climate." Personnel Psychology, 61(2), 40-68. Zoghbi-Manrique-de-Lara, P. (2006). "Fear in Organizations: Does intimidation by formal punishment mediate the relationship between interactional justice and workplace internet deviance?" Journal of Managerial Psychology, 21(6), 580. Deviance (sociology) Human behavior Workplace Workplace harassment and bullying
Workplace deviance
Biology
2,441
53,596,516
https://en.wikipedia.org/wiki/The%20Computer%20Language%20Benchmarks%20Game
The Computer Language Benchmarks Game (formerly called The Great Computer Language Shootout) is a free software project for comparing how a given subset of simple algorithms can be implemented in various popular programming languages. The project consists of: A set of very simple algorithmic problems Various implementations of the above problems in various programming languages A set of unit tests to verify that the submitted implementations solve the problem statement A framework for running and timing the implementations A website to facilitate the interactive comparison of the results Supported languages Due to resource constraints, only a small subset of common programming languages is supported, at the discretion of the game's operator. Metrics The following aspects of each given implementation are measured: overall user runtime peak memory allocation gzipped size of the solution's source code sum of total CPU time over all threads individual CPU utilization It is common to see multiple solutions in the same programming language for the same problem. This highlights that, within the constraints of a given language, a solution can favor high abstraction, memory efficiency, speed, or better parallelization. Benchmark programs It was a design choice from the start to include only very simple toy problems, each providing a different kind of programming challenge. This provides users of the Benchmarks Game the opportunity to scrutinize the various implementations. binary-trees chameneos-redux fannkuch-redux fasta k-nucleotide mandelbrot meteor-contest n-body pidigits regex-redux reverse-complement spectral-norm thread-ring History The project was known as The Great Computer Language Shootout until 2007. A port for Windows was maintained separately between 2002 and 2003. The sources have been archived on GitLab. There are also older forks on GitHub. The project is continuously evolving. 
The list of supported programming languages is updated approximately once per year, following market trends. Users can also submit improved solutions to any of the problems or suggest refinements to the testing methodology. Caveats The developers themselves highlight the fact that those doing research should exercise caution when using such microbenchmarks: Impact The benchmark results have uncovered various compiler issues. Sometimes a given compiler failed to process unusual but otherwise grammatically valid constructs. At other times, runtime performance was shown to be below expectations, which prompted compiler developers to revise their optimization capabilities. Various research articles have been based on the benchmarks, their results, and their methodology. See also Benchmark (computing) Comparison of programming languages References External links Programming language comparisons Benchmarks (computing)
The Computer Language Benchmarks Game
Technology
524
212,409
https://en.wikipedia.org/wiki/Transaction%20processing
In computer science, transaction processing is information processing that is divided into individual, indivisible operations called transactions. Each transaction must succeed or fail as a complete unit; it can never be only partially complete. For example, when you purchase a book from an online bookstore, you exchange money (in the form of credit) for a book. If your credit is good, a series of related operations ensures that you get the book and the bookstore gets your money. However, if a single operation in the series fails during the exchange, the entire exchange fails. You do not get the book and the bookstore does not get your money. The technology responsible for making the exchange balanced and predictable is called transaction processing. Transactions ensure that data-oriented resources are not permanently updated unless all operations within the transactional unit complete successfully. By combining a set of related operations into a unit that either completely succeeds or completely fails, one can simplify error recovery and make one's application more reliable. Transaction processing systems consist of computer hardware and software hosting a transaction-oriented application that performs the routine transactions necessary to conduct business. Examples include systems that manage sales order entry, airline reservations, payroll, employee records, manufacturing, and shipping. Since most, though not necessarily all, transaction processing today is interactive, the term is often treated as synonymous with online transaction processing. Description Transaction processing is designed to maintain the integrity of a system (typically a database or a modern filesystem) in a known, consistent state, by ensuring that interdependent operations on the system are either all completed successfully or all canceled successfully.
For example, consider a typical banking transaction that involves moving $700 from a customer's savings account to a customer's checking account. This transaction involves at least two separate operations in computer terms: debiting the savings account by $700, and crediting the checking account by $700. If one operation succeeds but the other does not, the books of the bank will not balance at the end of the day. There must, therefore, be a way to ensure that either both operations succeed or both fail so that there is never any inconsistency in the bank's database as a whole. Transaction processing links multiple individual operations in a single, indivisible transaction, and ensures that either all operations in a transaction are completed without error, or none of them are. If some of the operations are completed but errors occur when the others are attempted, the transaction-processing system "rolls back" all of the operations of the transaction (including the successful ones), thereby erasing all traces of the transaction and restoring the system to the consistent, known state that it was in before processing of the transaction began. If all operations of a transaction are completed successfully, the transaction is committed by the system, and all changes to the database are made permanent; the transaction cannot be rolled back once this is done. Transaction processing guards against hardware and software errors that might leave a transaction partially completed. If the computer system crashes in the middle of a transaction, the transaction processing system guarantees that all operations in any uncommitted transactions are cancelled. Generally, transactions are issued concurrently. If they overlap (i.e. need to touch the same portion of the database), this can create conflicts. 
For example, if the customer mentioned in the example above has $150 in his savings account and attempts to transfer $100 to a different person while at the same time moving $100 to the checking account, only one of them can succeed. However, forcing transactions to be processed sequentially is inefficient. Therefore, concurrent implementations of transaction processing are programmed to guarantee that the end result reflects a conflict-free outcome, the same as could be reached if executing the transactions sequentially in any order (a property called serializability). In our example, this means that no matter which transaction was issued first, either the transfer to a different person or the move to the checking account succeeds, while the other one fails. Methodology The basic principles of all transaction-processing systems are the same. However, the terminology may vary from one transaction-processing system to another, and the terms used below are not necessarily universal. Rollback Transaction-processing systems ensure database integrity by recording intermediate states of the database as it is modified, then using these records to restore the database to a known state if a transaction cannot be committed. For example, copies of information on the database prior to its modification by a transaction are set aside by the system before the transaction can make any modifications (this is sometimes called a before image). If any part of the transaction fails before it is committed, these copies are used to restore the database to the state it was in before the transaction began. Rollforward It is also possible to keep a separate journal of all modifications to a database management system (sometimes called after images). This is not required for rollback of failed transactions but it is useful for updating the database management system in the event of a database failure, so some transaction-processing systems provide it.
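The commit/rollback behavior described above can be sketched with a hypothetical two-account transfer, using Python's built-in sqlite3 module (the account table, balances, and overdraft rule here are invented for illustration, not part of any real banking system):

```python
import sqlite3

# In-memory database with the two accounts from the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('savings', 700), ('checking', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit src and credit dst as one atomic unit; roll back on any error."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Integrity constraint for this sketch: no overdrafts.
        (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                              (src,)).fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()      # both operations made permanent together
    except Exception:
        conn.rollback()    # erase all traces of the partial transaction
        raise

# After this call: savings 0, checking 700.
transfer(conn, "savings", "checking", 700)
```

A second transfer that overdraws the savings account raises an error, and the rollback leaves both balances exactly as they were before the attempt — the "before image" behavior the section describes.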
If the database management system fails entirely, it must be restored from the most recent back-up. The back-up will not reflect transactions committed since the back-up was made. However, once the database management system is restored, the journal of after images can be applied to the database (rollforward) to bring the database management system up to date. Any transactions in progress at the time of the failure can then be rolled back. The result is a database in a consistent, known state that includes the results of all transactions committed up to the moment of failure. Deadlocks In some cases, two transactions may, in the course of their processing, attempt to access the same portion of a database at the same time, in a way that prevents them from proceeding. For example, transaction A may access portion X of the database, and transaction B may access portion Y of the database. If at that point, transaction A then tries to access portion Y of the database while transaction B tries to access portion X, a deadlock occurs, and neither transaction can move forward. Transaction-processing systems are designed to detect these deadlocks when they occur. Typically both transactions will be cancelled and rolled back, and then they will be started again in a different order, automatically, so that the deadlock does not occur again. Or sometimes, just one of the deadlocked transactions will be cancelled, rolled back, and automatically restarted after a short delay. Deadlocks can also occur among three or more transactions. The more transactions involved, the more difficult they are to detect, to the point that transaction processing systems find there is a practical limit to the deadlocks they can detect. Compensating transaction In systems where commit and rollback mechanisms are not available or undesirable, a compensating transaction is often used to undo failed transactions and restore the system to a previous state. 
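Transaction-processing systems resolve deadlocks after the fact by cancelling and restarting a victim, as described above; a complementary technique is to prevent the wait cycle from forming at all by always acquiring resources in one global order. A toy sketch with Python threads, where two locks stand in for portions X and Y of the database (this illustrates the ordering idea only, not how a real DBMS manages locks):

```python
import threading

# Two "portions" of a database, each guarded by a lock.
lock_x = threading.Lock()
lock_y = threading.Lock()

def update_both(first, second, results, label):
    # Sort the locks into one global order (by object id here) before
    # acquiring them. Since every thread acquires in the same order, no
    # cycle of waiters can form, so the A/B deadlock above cannot occur.
    a, b = sorted((first, second), key=id)
    with a:
        with b:
            results.append(label)

results = []
# Thread A wants X then Y; thread B wants Y then X — the classic deadlock
# setup, made safe by the ordered acquisition inside update_both.
t1 = threading.Thread(target=update_both, args=(lock_x, lock_y, results, "A"))
t2 = threading.Thread(target=update_both, args=(lock_y, lock_x, results, "B"))
t1.start(); t2.start()
t1.join(); t2.join()
```

Lock ordering is only practical when the set of resources is known up front, which is why general-purpose database systems rely on detection and rollback instead.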
ACID criteria Jim Gray defined properties of a reliable transaction system in the late 1970s under the acronym ACID—atomicity, consistency, isolation, and durability. Atomicity A transaction's changes to the state are atomic: either all happen or none happen. These changes include database changes, messages, and actions on transducers. Consistency A transaction is a correct transformation of the state. The actions taken as a group do not violate any of the integrity constraints associated with the state. Isolation Even though transactions execute concurrently, it appears to each transaction T that others executed either before T or after T, but not both. Durability Once a transaction completes successfully (commits), its changes to the database survive failures. Implementations Standard transaction-processing software, such as IBM's Information Management System, was first developed in the 1960s, and was often closely coupled to particular database management systems. Client–server computing implemented similar principles in the 1980s with mixed success. However, in more recent years, the distributed client–server model has become considerably more difficult to maintain. As the number of transactions grew in response to various online services (especially the Web), a single distributed database was not a practical solution. In addition, most online systems consist of a whole suite of programs operating together, as opposed to a strict client–server model where the single server could handle the transaction processing. Today a number of transaction processing systems are available that work at the inter-program level and which scale to large systems, including mainframes. One effort is the X/Open Distributed Transaction Processing (DTP) model (see also Java Transaction API (JTA)).
However, proprietary transaction-processing environments such as IBM's CICS are still very popular, although CICS has evolved to include open industry standards as well. The term extreme transaction processing (XTP) was used to describe transaction processing systems with uncommonly challenging requirements, particularly throughput requirements (transactions per second). Such systems may be implemented via distributed or cluster style architectures. The term was in use by at least 2011. References Further reading Gerhard Weikum, Gottfried Vossen, Transactional information systems: theory, algorithms, and the practice of concurrency control and recovery, Morgan Kaufmann, 2002, Jim Gray, Andreas Reuter, Transaction Processing—Concepts and Techniques, 1993, Morgan Kaufmann, Philip A. Bernstein, Eric Newcomer, Principles of Transaction Processing, 1997, Morgan Kaufmann, Ahmed K. Elmagarmid (Editor), Transaction Models for Advanced Database Applications, Morgan-Kaufmann, 1992, External links Nuts and Bolts of Transaction Processing (1999) Managing Transaction Processing for SQL Database Integrity Transaction Processing Database management systems
Transaction processing
Technology,Engineering
1,959
41,367,291
https://en.wikipedia.org/wiki/2MASS%20J0523%E2%88%921403
2MASS J0523−1403 is a very-low-mass red dwarf about 40 light-years from Earth in the southern constellation of Lepus, with a very faint visual magnitude of 21.05 and a low effective temperature of 2074 K. It is visible primarily in large telescopes sensitive to infrared light. 2MASS J0523−1403 was first observed as part of the Two Micron All-Sky Survey (2MASS). Characteristics 2MASS J0523−1403 has a luminosity of , a radius of , and an effective temperature of 1,939 K. This makes it one of the smallest and coolest main-sequence stars. It has a stellar classification of L2.5 and a V−K color index of 9.42. The mass is estimated to be (), though the CARMENES input catalogue estimates a mass around . Observation with the Hubble Space Telescope has detected no companion beyond 0.15 arcsecond. Sporadic radio emissions were detected by the VLA in 2004. H-alpha (Hα) emissions have also been detected, a sign of chromospheric activity. Hydrogen burning limit Members of the RECONS group have identified 2MASS J0523−1403 as representative of the smallest possible stars. Its small radius is at the local minima of the radius–luminosity and radius–temperature trends. This local minimum is predicted to occur at the hydrogen burning limit due to differences in the radius–mass relationships of stars and brown dwarfs. Unlike hydrogen-burning stars, brown dwarfs decrease in radius as mass increases due to their cores being supported by degeneracy pressure. As the mass increases, an increasing fraction of the brown dwarf is degenerate, causing the radius to shrink as mass increases. The minimum stellar mass is estimated to be between 0.07 and 0.077 , comparable to the mass of 2MASS J0523−1403. See also OGLE-TR-122B OGLE-TR-123B EBLM J0555-57Ab VB 10 References Lepus (constellation) L-type stars J05233822-1403022 TIC objects
2MASS J0523−1403
Astronomy
458
44,538,513
https://en.wikipedia.org/wiki/%28a%2C%20b%29-decomposition
In graph theory, the (a, b)-decomposition of an undirected graph is a partition of its edges into a + 1 sets, each one of them inducing a forest, except one which induces a graph with maximum degree b. If this graph is also a forest, then we call this an F(a, b)-decomposition. A graph with arboricity a is (a, 0)-decomposable. Every (a, 0)-decomposition or (a, 1)-decomposition is an F(a, 0)-decomposition or an F(a, 1)-decomposition respectively. Graph classes Every planar graph is F(2, 4)-decomposable. Every planar graph with girth at least is F(2, 0)-decomposable if . (1, 4)-decomposable if . F(1, 2)-decomposable if . F(1, 1)-decomposable if , or if every cycle of is either a triangle or a cycle with at least 8 edges not belonging to a triangle. (1, 5)-decomposable if has no 4-cycles. Every outerplanar graph is F(2, 0)-decomposable and (1, 3)-decomposable. Notes References (chronological order) Graph invariants Graph theory objects
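A decomposition of this kind can be verified mechanically: each of the first a edge classes must be acyclic, and the last class must induce a graph of maximum degree at most b. A hypothetical checker in Python (union-find for the forest test; the K4 example, splitting the complete graph on four vertices into a spanning star plus a triangle, is a (1, 2)-decomposition chosen purely for illustration):

```python
from collections import defaultdict

def is_forest(edges):
    """True if the edge set is acyclic, via union-find with path halving."""
    parent = {}
    def find(v):
        while parent.setdefault(v, v) != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:                        # edge would close a cycle
            return False
        parent[ru] = rv
    return True

def is_ab_decomposition(classes, b):
    """Check a partition into a+1 edge classes: the first a classes must
    each induce a forest; the last must have maximum degree at most b."""
    *forests, last = classes
    if not all(is_forest(f) for f in forests):
        return False
    deg = defaultdict(int)
    for u, v in last:
        deg[u] += 1
        deg[v] += 1
    return max(deg.values(), default=0) <= b

# K4 as a spanning star plus a triangle: the triangle has maximum degree 2,
# so this is a (1, 2)-decomposition (but not an F(1, 2)-decomposition,
# since the triangle is not a forest).
k4_star = [(0, 1), (0, 2), (0, 3)]
k4_triangle = [(1, 2), (1, 3), (2, 3)]
```

The checker only verifies a given partition; finding a decomposition with the parameters listed for the graph classes above is the substantive combinatorial problem.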
(a, b)-decomposition
Mathematics
293
57,116,987
https://en.wikipedia.org/wiki/NGC%201683
NGC 1683 is a spiral galaxy in the constellation Orion. The object was discovered in 1850 by the Irish astronomer William Parsons. See also List of NGC objects References Spiral galaxies 1683 016209 Orion (constellation)
NGC 1683
Astronomy
44
18,638,946
https://en.wikipedia.org/wiki/List%20of%20parties%20to%20the%20Biological%20Weapons%20Convention
The list of parties to the Biological Weapons Convention encompasses the states which have signed and ratified or acceded to the Biological Weapons Convention (BWC), a multilateral treaty outlawing biological weapons. On 10 April 1972, the BWC was opened for signature. The Netherlands became the first state to deposit their signature of the treaty that same day. The treaty closed for signature upon coming into force on 26 March 1975 with the deposit of ratification by 22 states. Since then, states that did not sign the BWC can only accede to it. A total of 197 states may become members of the BWC, including all 193 United Nations member states, the Cook Islands, the Holy See, the State of Palestine and Niue. As of July 2024, 187 states have ratified or acceded to the treaty, most recently the Federated States of Micronesia in July 2024. As well, the Republic of China (Taiwan), which is currently only recognized by , deposited their instruments of ratification of the BWC with the United States government prior to the US's decision to switch their recognition of the sole legitimate government of China from the Republic of China (ROC) to the People's Republic of China (PRC). A further four states have signed but not ratified the treaty. Several countries made reservations when ratifying the agreement declaring that it did not imply their complete satisfaction that the BWC allows the stockpiling of biological agents and toxins for "prophylactic, protective or other peaceful purposes", nor should it imply recognition of other countries they do not recognise. States Parties According to the treaties database maintained by the United Nations Office for Disarmament Affairs, as of July 2024, 187 states have ratified or acceded to the BWC. Multiple dates indicate the different days in which states submitted their signature or deposition, varied by location. This location is noted by: (L) for London, (M) for Moscow, and (W) for Washington D.C. 
Notes State with limited recognition, abiding by treaty The Republic of China (Taiwan), which is currently only recognized by , deposited their instruments of ratification of the BWC with the United States government prior to the US's decision to switch their recognition of the sole legitimate government of China from the Republic of China (ROC) to the People's Republic of China (PRC) in 1971. When the PRC subsequently ratified the treaty, they described the ROC's ratification as "illegal". The ROC has committed itself to continue to adhere to the requirements of the treaty, and the United States has declared that they still consider them to be "bound by its obligations". Signatory states The following four states have signed, but not ratified the BWC. Notes Non-signatory states The following six UN member states have neither signed nor ratified the BWC. Notes Succession of colonies to the BWC The status of several former dependent territories of a state party to the BWC, whose administrating power ratified the Convention on their behalf, with regards to the Convention following their independence is currently unclear. According to the Vienna Convention on Succession of States in respect of Treaties (to which 22 states are party), "newly independent states" (a euphemism for former colonies) receive a "clean slate", such that the new state does not inherit the treaty obligations of the colonial power, but that they may join multilateral treaties to which their former colonizers were a party without the consent of the other parties in most circumstances. Conversely, in "cases of separation of parts of a state" (a euphemism for all other new states), the new state remains bound by the treaty obligations of the state from which they separated. To date, this Convention has only been ratified by 22 states. 
The United Kingdom attached a territorial declaration to their instrument of ratification of the BWC in 1975 stating in part that it applied to: This declaration bound the territory of Kiribati to the terms of the Convention. Following its independence, it has not made an unambiguous declaration of succession to the BWC. Dominica, Tuvalu and Vanuatu's statuses were likewise ambiguous from their independence until they formally submitted instruments of accession or succession to the treaty. Kiribati In 1979, Kiribati gained its independence and subsequently, the President of Kiribati sent a note to the UNSG stating that: Since then, none of the depositaries for the BWC has received an instrument of accession or succession to the Convention from Kiribati. However, the Government of Kiribati has made statements suggesting that it does not consider itself a party to the treaty. Dominica After becoming independent in 1978, the Prime Minister of Dominica sent a note to the Secretary-General of the United Nations (UNSG) stating that: The Government of Dominica later stated that it did not consider itself bound by the Convention. However, Dominica was listed as a state party to the BWC in documents from the Meetings of the States Parties to the BWC. The UK Treaty Office (as depositary) did not receive an instrument of succession from Dominica until 2016. Tuvalu Following independence in 1978, the Prime Minister of Tuvalu sent a note to the UNSG stating that: Tuvalu acceded to the treaty as an independent state in 2024. Vanuatu In 1980, the territory gained its independence. Vanuatu was listed as a state party to the BWC in documents from the Meetings of the States Parties to the BWC; however, the Government of Vanuatu made statements suggesting that it did not consider itself a party to the treaty, and the UK depositary had no record of receiving an instrument of succession to the BWC from Vanuatu until 2016.
See also List of parties to the Chemical Weapons Convention List of parties to the Convention on Certain Conventional Weapons List of parties to the Comprehensive Nuclear-Test-Ban Treaty List of parties to the Treaty on the Non-Proliferation of Nuclear Weapons List of parties to the Treaty on the Prohibition of Nuclear Weapons List of parties to the Ottawa Treaty List of parties to the Partial Nuclear Test Ban Treaty List of parties to weapons of mass destruction treaties References Arms control treaties Biological warfare Lists of parties to treaties
List of parties to the Biological Weapons Convention
Biology
1,269
16,458,569
https://en.wikipedia.org/wiki/GRB%20080319B
GRB 080319B was a gamma-ray burst (GRB) detected by the Swift satellite at 06:12 UTC on March 19, 2008. The burst set a new record for the farthest object that was observable with the naked eye: it had a peak visual apparent magnitude of 5.7 and remained visible to human eyes for approximately 30 seconds. The magnitude was brighter than 9.0 for approximately 60 seconds. If viewed from 1 AU away, it would have had a peak apparent magnitude of −67.57 (21 quadrillion times brighter than the Sun seen from Earth). It had an absolute magnitude of −38.6, beaten by GRB 220101A with −39.4 in 2023. Overview The GRB's redshift was measured to be 0.937, which means that the explosion occurred about 7.5 billion () years ago (the lookback time), and it took the light that long to reach Earth. This is roughly half the time since the Big Bang. The first scientific paper submitted on the event suggested that the GRB could have easily been seen to a redshift of 16 (essentially to the time in the universe when stars were just being formed, well into the age of reionization) from a sub-meter sized telescope equipped with near-infrared filters. The afterglow of the burst set a new record for the "most intrinsically bright object ever observed by humans in the universe", 2.5 million times brighter than the brightest supernova to date, SN 2005ap. Evidence suggests that the afterglow was particularly bright because its gamma jet pointed directly at Earth. This allowed an unprecedented examination of the jet structure, which appears to have consisted of a narrowly focused cone and a wider secondary one. If this is the norm for GRB jets, it follows that most GRB detections only capture the fainter wide cone, which means that most distant GRBs are too faint to detect with current telescopes. This would imply that GRBs are a far more common phenomenon than so far assumed. A record for the number of observed bursts with the same satellite on one day, four, was also set. 
This burst was named with the suffix B since it was the second burst detected that day. There were five GRBs detected in a 24-hour period, including GRB 080320. Until this gamma-ray burst event, the galaxy M83, at a distance of about 15 million light years, was the most distant object visible to the naked eye, albeit only under excellent conditions. The galaxy remains the most distant permanent object viewable without aid. The plot below shows the brightness in both the optical wavelengths and at higher energy for the event. The first optical exposure started about 2 seconds before the source was first observed by the Swift telescope and lasted for 10 seconds. The emission in both curves then peaks at around 15–30 seconds before a long exponential decay. See also GRB 080916C References Citations Database references External links Hubble Pinpoints Record-Breaking Explosion SkyWatch Show 165: Brightest Explosion Ever Seen (mp3) AAVSO Alert Notice 372 Possible naked-eye gamma-ray burst detected (GRB 080319B) ESO Press Release 080319B Boötes 20080319 March 2008
GRB 080319B
Astronomy
685
69,044,083
https://en.wikipedia.org/wiki/Porsche%20Indy%20V8%20engine
The Porsche Indy V8 engine (internal designation: Porsche Typ 9M0) is a 90-degree, four-stroke, single-turbocharged, 2.65-liter V8 Indy car racing engine, designed, developed and produced by Porsche for use in the CART PPG Indy Car World Series between 1980 and 1990. The engine was used in March chassis cars. Applications Interscope IR01 March 88C March 89P March 90P Porsche 2708 References Engines by model Gasoline engines by model Porsche IndyCar Series Champ Car V8 engines Porsche in motorsport
Porsche Indy V8 engine
Technology
117
34,484,408
https://en.wikipedia.org/wiki/Endothelial%20activation
Endothelial activation is a proinflammatory and procoagulant state of the endothelial cells lining the lumen of blood vessels. It is most characterized by an increase in interactions with white blood cells (leukocytes), and it is associated with the early stages of atherosclerosis and sepsis, among others. It is also implicated in the formation of deep vein thrombosis. As a result of activation, the endothelium releases Weibel–Palade bodies. Mechanical sensing and responses Elevated shear stress induces a vascular response by triggering nitric oxide synthesis and mechanotransduction pathways of endothelial cells. The synthesis of nitric oxide facilitates shear-stress-mediated dilation in blood vessels and maintains a homeostatic status. Additionally, physiologic shear stress levels at the vessel wall upregulate the presence of antithrombotic agents through the mechano-signal transduction of mechano-recepting transmembrane proteins, junctional proteins, and subendothelial mechanosensors. Shear stress causes endothelial cell deformation, which activates transmembrane ion channels. Elevated wall shear stress caused by exercise is understood to promote mitochondrial biogenesis in the vascular endothelium, indicating the benefits regular exercise may have on vascular function. Alignment is recognized as an important mechanism and determinant of shear-stress-induced vascular response; in vivo testing of endothelial cells has demonstrated that their mechanotransductive response is direction dependent, as endothelial nitric oxide synthesis is preferentially activated under parallel flow while perpendicular flow activates inflammatory pathways such as reactive oxygen species production and nuclear factor-κB. Therefore, disturbed/oscillating flow and low-flow conditions, which create an irregular and passive shear stress environment, result in inflammatory activation due to the limited alignment capability of the endothelial cells.
Regions in the vasculature with low shear stress are vulnerable to elevated monocyte adhesion and endothelial cell apoptosis. However, unlike oscillatory flow, both laminar (steady) and pulsatile flow and shear stress environments are often considered together as mechanisms of maintaining vascular homeostasis and preventing inflammation, reactive oxygen species formation, and coagulatory pathways. High, uniform laminar shear stress is known to promote a quiescent endothelial cell state, provide anti-thrombotic effects, prevent proliferation, and decrease inflammation and apoptosis. At high shear stress levels (10 Pa), the endothelial cell response is distinct from that at upper normal/physiological values; high wall shear stress causes a pro-matrix-remodeling, proliferative, anticoagulant, and anti-inflammatory state. Yet, very high wall shear stress values (28.4 Pa) prevent endothelial cell alignment and stimulate proliferation and apoptosis, although the endothelial response to shear stress environments was determined to be dependent on the local wall shear stress gradient. See also Endothelial dysfunction References Further reading Circulatory system
Endothelial activation
Biology
642
34,990,231
https://en.wikipedia.org/wiki/Water%20fluoridation%20in%20Australia
Australia is one of many countries that have water fluoridation programs currently operating (see Water fluoridation by country). As of March 2012, artificially fluoridated drinking water is administered to 70% or more of the population in all states and territories. The acceptance of the benefits of water fluoridation occurred in Australia in December 1953, roughly two years after acceptance in the United States. Many of Australia's drinking water supplies subsequently began fluoridation in the 1960s and 1970s. By 1984 almost 66% of the Australian population had fluoridated drinking water, represented by 850 towns and cities. Some areas within Australia have natural fluoride levels in the groundwater, which was estimated in 1991 to provide drinking water to approximately 0.9% of the population. A key difference between the implementation of drinking water fluoridation in the United States and Australia was the impact of temperature and climate on water consumption. Temperatures are a key factor in the establishment of legislative requirements, such as the Water Fluoridation Regulation 2008 in Queensland, that prescribes concentrations of fluoride to be added to the water. Consequently, areas with higher average temperatures require less fluoride to be added to the drinking water to achieve the same oral health benefits. The tropical conditions found in parts of Australia, such as Queensland, also make it difficult to maintain fluoridation equipment due to higher levels of corrosion caused by the wet climate. The addition of fluoride to a drinking water supply is generally governed by the Australian Drinking Water Guidelines. The Guidelines recommend a health-related guideline value (maximum concentration) of 1.5 mg/L for fluoride, which mirrors the World Health Organization Guidelines for Drinking Water Quality 2006. Guidance on the concentration of fluoride has been present in the Guidelines since 1983. 
The National Health and Medical Research Council of Australia (NHMRC) issued a Public Statement in 2017 on Water Fluoridation and Human Health in Australia. The statement says “There is reliable evidence that community water fluoridation helps to prevent tooth decay. The consequences of tooth decay are considerable: dental pain, concern about appearance, costs due to time off school and work, and costs of dental treatment. There is no reliable evidence of an association between community water fluoridation at current Australian levels and any health problems.” Tasmania Fluoridation in Tasmania was initially regulated under the Public Health Act of 1953. The Tasmanian Government set up a Royal Commission to look into fluoridation in 1966 and the report was published in 1968. The Royal Commission recommended that fluoridation should be a state responsibility. The Royal Commission stated that "Fluoridation must be a decision of the State Government. It is not a decision for a referendum or for local councils as people simply do not have the expertise in that. For a State Government to refer this decision off to a referendum or to local government would be an abrogation of the State's responsibility." The Fluoridation Act of 1968 was passed and gained royal assent in January 1969; this Act regulates the fluoridation of drinking water in Tasmania. Almost all (98%) of public water supplies in Tasmania are fluoridated, although approximately 10% of residents do not have access to public water supplies. Under the Act, the need to add fluoride to a water supply is assessed by a fluoridation committee, which then provides a recommendation to the Health Minister. The Health Minister may then choose to direct the water authority to add fluoride to the water. The first town in Australia to fluoridate its water supply was Beaconsfield, Tasmania, in 1953.
It is understood that the impetus to fluoridate the water came from the municipal chemist, Frank Grey, who was prompted to act when an opera singer advised him not to let his daughter's teeth be pulled if he wished her to continue singing. This was after a visiting dentist (at the local school) had extracted a tooth from his daughter.
New South Wales
The use of fluoride in New South Wales (NSW) is regulated by the Fluoridation of Public Water Supplies Act 1957 and the Fluoridation of Public Water Supplies Regulation 2007. The legislation provides for the Fluoridation of Public Water Supplies Advisory Committee and prescribes its membership under section 4 of the Act. It is chaired by the NSW Chief Dentist as the Minister of Health's representative. Under the Act and regulations, a local council must make a request to the NSW Health Department for its water supplies to be fluoridated. However, if a council subsequently wishes to discontinue fluoridation, that decision rests with the Secretary of the Department of Health (section 6B, Discontinuance of Fluoridation). Approximately 95% of the NSW population had fluoridated water as of September 2011. Fluoridation commenced in New South Wales with Yass in 1956; Sydney began fluoridation in 1968. One of the earliest locations to receive fluoridation was Grafton in 1964. However, the night before fluoridation was to commence the equipment was blown up. The equipment was reinstalled and Grafton has had fluoridated water since. Fluoridation has not been implemented by some council areas and water utilities that service multiple council areas. They include: Boorowa (Hilltops), Brewarrina, Byron Shire, Carrathool, Central Darling, Coonamble, Gunnedah, Gwydir, Jerilderie, Liverpool Plains, Murrumbidgee, Narrabri, Narrandera, Narromine, Upper Hunter, Wakool, Walgett, Wentworth and Water NSW. Narrabri, Narrandera and Narromine have naturally occurring levels of fluoride and do not supplement their water supplies.
In 2018 the Bega Shire Council voted 6 to 2 to add fluoride to the areas that were unfluoridated. Oberon Shire Council voted 5 to 3 to add fluoride to the local water supply in July 2018, and the instrument of approval was issued in October 2018. Liverpool Plains Shire Council has been considering fluoridation of its water supply since 2018. In November 2013, Byron Shire Council decided not to add fluoride to its water supply.
Australian Capital Territory
Fluoride was initially recommended to be added to the Canberra water supply in December 1961 by the ACT Advisory Council; however, the recommendation was not accepted. The ACT Advisory Council continued to lobby the government, and fluoridation of the water supplies in Canberra and the City of Queanbeyan commenced in May 1964. Queanbeyan, while in New South Wales, shares its water supply with Canberra. There was a brief period in 1989 when fluoridation was suspended following a formal review of the effectiveness of fluoridation on oral health. As only one water supplier provides all of the water for these areas, the percentage of the population with fluoridated water has always been 100% during the times in which it was added.
Western Australia
Water fluoridation in Western Australia (WA) is regulated by the Fluoridation of Public Water Supplies Act 1966. The Act is administered by the Western Australian Department of Health through the Fluoridation of Public Water Supplies Advisory Committee. The Minister for Health can only direct that water be fluoridated on the advice of the committee. Water fluoridation was introduced in Western Australia in 1968. As of 2016, around 92% of the population is supplied with fluoridated water through a drinking water supply. Western Australia has a number of areas where no additional fluoride is added to reach effective levels; these include: Halls Creek, Marble Bar, Onslow, Paraburdoo, Tom Price, Meekatharra, Carnarvon, Bremer Bay, Leonora and Laverton.
The water supply in Dunsborough, in the south west of Western Australia, is de-fluoridated to the optimal level (0.6 to 0.9 milligrams per litre). Dunsborough gets its water from two aquifers, and only the Sue Aquifer has fluoride above the optimum level.
South Australia
Water fluoridation in South Australia (SA) is administered through government policy rather than legislation. It is the responsibility of SA Water to administer fluoridation within South Australian water supplies. In many cases water derived from bores is not fluoridated. Water fluoridation commenced in Adelaide in 1971. There is no legal requirement to add fluoride to drinking water supplies. As of March 2020, SA Health states that "90% of the state’s communities have access to reticulated water with appropriate levels of fluoride".
Northern Territory
The addition of fluoride to public water supplies in the Northern Territory (NT) is done via government policies. In 2010 the NT Department of Health published a position paper that strongly encourages water providers to add fluoride where possible, but it is not mandated. The Department of Health believes that communities with greater than 600 persons and naturally occurring fluoride of less than 0.5 mg/L should receive fluoridation, based on a cost–benefit analysis. The fluoridation of NT water supplies is the responsibility of the Power and Water Corporation, which supplies water to 92 locations; 7 have fluoride added to their water and 71 have sufficient natural fluoride levels. As of 2012, 70% of the population in the Northern Territory has fluoridated water, and approximately 9% of the population has naturally fluoridated water. Fluoride has been added to public water supplies in Darwin since 1972. Katherine, Angurugu, Maningrida, Umbakumba, Wadeye and Wurruniyanga (Nguiu) also have fluoridated water. As of 2019 fourteen other areas are being considered for fluoridation.
Nhulunbuy, in north east Arnhem Land, does not fluoridate its water, and a review carried out in 2020 did not resolve the issue. In 2019 the East Arnhem Clinical & Public Health Advisory Group sent an open letter to the Power and Water Corporation supporting water fluoridation. Supplies south of Elliott have naturally occurring fluoride at levels sufficient to provide an oral health benefit. Most communities in the Barkly and Southern regions have fluoride levels between 0.5 mg/L and 1.5 mg/L. In 2017–18 three locations recorded levels above the accepted maximum: Alpurrurulam (1.7 mg/L), Nyirripi (1.8 mg/L) and Yuelamu (1.6 mg/L). Concern has been raised about fluoride levels below acceptable levels in Alice Springs and Yulara.
Victoria
Fluoride was first added to the drinking water of the Victorian town of Bacchus Marsh in 1962, with Melbourne beginning fluoridation in 1977. The towns of Portland and Port Fairy have naturally occurring fluoride in their drinking water. As of October 2024, approximately 90% of the Victorian population had fluoridated water. The fluoridation of Victoria's drinking water supplies is regulated under the Health (Fluoridation) Act 1973 by the Department of Health. While 90% of Victorians have fluoridated drinking water, there are still many rural towns in Victoria that do not, including some outer suburbs of the city of Mildura.
Queensland
In Queensland (QLD), prior to the Fluoridation of Public Water Supplies Act 1963 (Qld), some councils had fluoridated town water supplies under the Local Government Acts. These Acts used general competency clauses that gave councils the ability to use discretionary powers if the action was not specifically covered by legislation. Under the Fluoridation of Public Water Supplies Act 1963 (Qld) only 5% of drinking water supplies were fluoridated.
Queensland was unique in that it did not pursue water fluoridation like the other Australian states and territories, and only 7 of the 850 Australian fluoridated water supplies operating in 1984 were located in Queensland. The reason for the low uptake of fluoridation in Queensland "lies with the Fluoridation of Public Water Supplies Act 1963 (Qld), which gives real power to the minister for Local Government, local authorities and 10 percent of electors, who can all request a referendum on fluoridation proposals. This law has given opponents of fluoridation tactical advantages, which they have used consistently." Under Queensland Premier Anna Bligh, the Labor government announced on 5 December 2007 that the mandatory fluoridation of most of Queensland's water supplies would begin in 2008. When it was enacted, the Water Fluoridation Act 2008 required the addition of fluoride to any water supply providing potable water to at least 1000 members of the public, unless an exemption was granted based on safety or naturally occurring levels that met the required levels. The Act received bipartisan support. The fluoridation of drinking water supplies is regulated by Queensland Health. Prior to this legislation, Queensland was the only Australian state without a formal statewide program for the addition of fluoride to drinking water; unlike in other states, responsibility for fluoridation lay with the Minister for Local Government rather than the Minister for Health. The accompanying Water Fluoridation Regulation 2008 listed 134 drinking water supplies that were to be fluoridated by 31 December 2012. Of the drinking water supplies listed in the Regulation, 32 comprised the SEQ Water Grid located in Southeast Queensland. The fluoridation of these supplies by the end of 2009 accounted for the largest increase in people receiving fluoridated water in Queensland (approximately 2.6 million people in 2006, or 68% of the Queensland population).
On 29 November 2012 the Queensland Parliament, with a Liberal National Party government, reversed the previous Labor government's mandate requiring certain public potable water supplies to add fluoride to the water. Annastacia Palaszczuk, Premier of Queensland as at 2021, attacked the decision of the government at that time but later stated in 2016 that "there is no present intention of reversing the 2012 decision." This decision is seen as a regional political issue taking precedence over the government's positive stance on fluoridation. As a consequence of these changes, local councils in Queensland have the choice to add fluoride to drinking water supplies, similar to the conditions in place under the previous legislation. Five local council areas have naturally occurring fluoride in their water supplies: Bulloo Shire, Diamantina Shire, Kowanyama Aboriginal Shire, McKinlay Shire and Quilpie Shire. Both Birdsville in the Diamantina Shire and Julia Creek in the McKinlay Shire have naturally occurring fluoride levels that exceed safe levels. Several areas of Queensland, generally areas above the Great Artesian Basin, are known to have naturally occurring fluoride present in their drinking water, a characteristic that has been studied since the late 1920s. In summary, of the 77 local councils in Queensland, five have naturally high levels of fluoride, 26 fluoridate some or all of their water and the remainder do not fluoridate their water. In South East Queensland, water is supplied to a number of council areas by SEQ Water, which continues to fluoridate its water. Since November 2012 the major regional centres of Cairns, Mackay, Rockhampton, Gladstone and numerous other councils have stopped fluoridation of their water. After Mackay Shire Council decided to cease fluoridation of its water supply, Mayor Greg Williamson stated "... as a local council, public health is not our domain. We shouldn't be in this situation, but we are, so we made a decision."
Councillor Manning of the Cairns Shire Council stated: “It is also clear that fluoride is a State issue and if you believe that children in this state have poorer oral health than others, surely it’s time for Queensland Health to take responsibility for this issue as it does for other public health matters.”
References
Australia Water in Australia Public policy in Australia
Water fluoridation in Australia
Chemistry
3,275
49,293,556
https://en.wikipedia.org/wiki/Peziza%20ammophila
Peziza ammophila is a fungus in the cup fungus genus Peziza. It grows on sand dunes and beaches. As it matures, it emerges from the sand and splits open. It grows in winter and spring, and there are records from North America, Argentina, Chile, the UK, Europe, Australia and South Africa. It is cup-shaped. The Dutch call these Zandtulpjes, or "sand tulips".
References
Pezizaceae Fungi described in 1841 Fungus species
Peziza ammophila
Biology
106
9,463,447
https://en.wikipedia.org/wiki/CTCF
Transcriptional repressor CTCF, also known as 11-zinc finger protein or CCCTC-binding factor, is a transcription factor that in humans is encoded by the CTCF gene. CTCF is involved in many cellular processes, including transcriptional regulation, insulator activity, V(D)J recombination and regulation of chromatin architecture.
Discovery
CCCTC-binding factor, or CTCF, was initially discovered as a negative regulator of the chicken c-myc gene. The protein was found to bind to three regularly spaced repeats of the core sequence CCCTC and was thus named CCCTC-binding factor.
Function
The primary role of CTCF is thought to be in regulating the 3D structure of chromatin. CTCF binds together strands of DNA, thus forming chromatin loops, and anchors DNA to cellular structures like the nuclear lamina. It also defines the boundaries between active and heterochromatic DNA. Since the 3D structure of DNA influences the regulation of genes, CTCF's activity influences the expression of genes. CTCF is thought to be a primary part of the activity of insulators, sequences that block the interaction between enhancers and promoters. CTCF binding has been shown to both promote and repress gene expression. It is unknown whether CTCF affects gene expression solely through its looping activity, or whether it has some other, unknown activity. In a recent study, it has been shown that, in addition to demarcating TADs, CTCF mediates promoter–enhancer loops, often located in promoter-proximal regions, to facilitate promoter–enhancer interactions within one TAD. This is in line with the concept that a subpopulation of CTCF associates with the RNA polymerase II (Pol II) protein complex to activate transcription.
It is likely that CTCF helps to bridge transcription factor-bound enhancers to transcription start site-proximal regulatory elements and to initiate transcription by interacting with Pol II, thus supporting a role for CTCF in facilitating contacts between transcription regulatory sequences. This model has been demonstrated by previous work on the beta-globin locus.
Observed activity
The binding of CTCF has been shown to have many effects, which are enumerated below. In each case, it is unknown whether CTCF directly evokes the outcome or does so indirectly (in particular through its looping role).
Transcriptional regulation
CTCF plays a major role in repressing the insulin-like growth factor 2 gene by binding to the H19 imprinting control region (ICR) along with differentially methylated region-1 (DMR1) and MAR3.
Insulation
Binding of targeting sequence elements by CTCF can block the interaction between enhancers and promoters, therefore limiting the activity of enhancers to certain functional domains. Besides blocking enhancers, CTCF can also act as a chromatin barrier by preventing the spread of heterochromatin structures.
Regulation of chromatin architecture
CTCF physically binds to itself to form homodimers, which causes the bound DNA to form loops. CTCF also occurs frequently at the boundaries of sections of DNA bound to the nuclear lamina. Using chromatin immunoprecipitation (ChIP) followed by ChIP-seq, it was found that CTCF localizes with cohesin genome-wide and affects gene regulatory mechanisms and higher-order chromatin structure. It is currently believed that the DNA loops are formed by a loop extrusion mechanism, whereby the cohesin ring is actively translocated along the DNA until it meets CTCF; CTCF has to be in the proper orientation to stop cohesin.
Regulation of RNA splicing
CTCF binding has been shown to influence mRNA splicing.
DNA binding
CTCF binds to the consensus sequence CCGCGNGGNGGCAG (in IUPAC notation). Binding to this sequence is mediated by the 11 zinc finger motifs in its structure. CTCF's binding is disrupted by CpG methylation of the DNA it binds to. On the other hand, CTCF binding may set boundaries for the spreading of DNA methylation. In recent studies, loss of CTCF binding has been reported to increase localized CpG methylation, reflecting another epigenetic remodeling role of CTCF in the human genome. CTCF binds to an average of about 55,000 DNA sites in 19 diverse cell types (12 normal and 7 immortal), and to 77,811 distinct binding sites in total across all 19 cell types. CTCF's ability to bind to multiple sequences through various combinations of its zinc fingers has earned it the status of a “multivalent protein”. More than 30,000 CTCF binding sites have been characterized. The human genome contains anywhere between 15,000 and 40,000 CTCF binding sites depending on cell type, suggesting a widespread role for CTCF in gene regulation. In addition, CTCF binding sites act as nucleosome positioning anchors, so that, when used to align various genomic signals, multiple flanking nucleosomes can be readily identified. On the other hand, high-resolution nucleosome mapping studies have demonstrated that differences in CTCF binding between cell types may be attributed to differences in nucleosome locations. Methylation loss at the CTCF-binding sites of some genes has been found to be related to human diseases, including male infertility.
Protein–protein interactions
CTCF binds to itself to form homodimers. CTCF has also been shown to interact with Y box binding protein 1. CTCF also co-localizes with cohesin, which extrudes chromatin loops by actively translocating one or two DNA strands through its ring-shaped structure until it meets CTCF in the proper orientation. CTCF is also known to interact with chromatin remodellers such as Chd4 and Snf2h (SMARCA5).
References
Further reading
External links
https://www.ctcfemory.com/ A group for families affected by CTCF mutations
Transcription factors Gene expression Nuclear organization
CTCF
Chemistry,Biology
1,302
12,622,594
https://en.wikipedia.org/wiki/Departments%20of%20Labor%2C%20Health%20and%20Human%20Services%2C%20and%20Education%2C%20and%20Related%20Agencies%20Appropriations%20Act%2C%202008
The Department of Health and Human Services Appropriations Act of 2008, or the HHS-Labor-Education Appropriations Bill (), is a bill introduced in the House of Representatives during the 110th United States Congress by Rep. David Obey. President George W. Bush vetoed the act because of its cost and because it would ban the use of childhood flu vaccines that contain thimerosal, a mercury-based preservative that has been falsely claimed to cause autism.
Status
The bill passed the House of Representatives and the U.S. Senate. It was vetoed by President Bush on November 13, 2007. On a 277–141 vote, the House fell two votes short of the two-thirds majority needed to override the president's veto.
References
Proposed legislation of the 110th United States Congress Vaccination law United States federal appropriations legislation Thiomersal and vaccines Vaccination in the United States
Departments of Labor, Health and Human Services, and Education, and Related Agencies Appropriations Act, 2008
Biology
185